PYMNTS. AI firms agree to kill switch, but experts are skeptical

At a recent summit in Seoul, global artificial intelligence companies agreed to implement an AI ‘kill switch’ policy: if an advanced AI model crosses certain predefined risk thresholds, its development would be halted to prevent harm to humanity.

But the so-called ‘kill switch’ has experts skeptical about how it would work, and whether it would work at all.

PYMNTS covered the news and reached out to Vaclav Vincalek, virtual CTO and founder of 555vCTO.com, for his thoughts on the AI kill switch.

Companies will push threshold boundaries of an AI ‘kill switch’

The article notes that proponents of the kill switch see it as a necessary safeguard against the dangers of AI.

Vincalek elaborated on the operational challenges of this kind of policy. 

“The way this kill switch would function requires all AI companies to explicitly define risk parameters and assess their models against these criteria. Additionally, they would need to produce auditable reports to verify compliance. Despite government regulations and legal backing, I foresee companies pushing the boundaries as their AI systems approach the risk threshold,” Vincalek told PYMNTS.

“Even with government regulations and legal weight behind the agreed upon ‘kill switch,’ I can see companies continuing to push the thresholds if their AI systems approach that ‘risky’ line,” he added.
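To make the operational burden Vincalek describes more concrete, here is a minimal, purely illustrative Python sketch. The risk parameters, score names, and threshold values are hypothetical placeholders rather than anything defined in the Seoul agreement, but the shape of the process (declare thresholds, assess a model against them, and record an auditable decision) follows what he outlines.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical risk parameters: the names and values are illustrative only,
# not drawn from the Seoul agreement or any real company's policy.
@dataclass
class RiskParameters:
    max_capability_score: float = 0.8   # assumed normalized capability ceiling
    max_misuse_score: float = 0.5       # assumed tolerance for misuse-enabling behavior

@dataclass
class ModelAssessment:
    model_name: str
    capability_score: float
    misuse_score: float

def assess_against_thresholds(assessment: ModelAssessment,
                              params: RiskParameters) -> dict:
    """Compare a model assessment against declared risk parameters and
    return an auditable record of the resulting decision."""
    breaches = []
    if assessment.capability_score > params.max_capability_score:
        breaches.append("capability_score exceeds max_capability_score")
    if assessment.misuse_score > params.max_misuse_score:
        breaches.append("misuse_score exceeds max_misuse_score")

    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": assessment.model_name,
        "assessment": asdict(assessment),
        "risk_parameters": asdict(params),
        "halt_development": bool(breaches),  # the 'kill switch' decision
        "breaches": breaches,
    }

if __name__ == "__main__":
    report = assess_against_thresholds(
        ModelAssessment(model_name="frontier-model-x",
                        capability_score=0.85, misuse_score=0.3),
        RiskParameters(),
    )
    # Printing the record as JSON stands in for the auditable report
    # Vincalek says companies would need to produce.
    print(json.dumps(report, indent=2))
```

The hard part, as the quotes above suggest, is not writing a check like this but agreeing on who sets the thresholds and whether anyone honors the result when the check fails.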

Vincalek wasn’t the only one skeptical of its feasibility.

AI ‘kill switch’ is an inaccurate term

Other experts quoted in the article called the term AI ‘kill switch’ misleading. One said it implies companies would be required to pull the plug on AI development the moment a threshold is crossed.

Others worried that companies might agree to the ‘kill switch’ but fail to adhere to its terms if and when the time came to flip it.

And yet another expert highlighted the complexity of defining and applying the criteria for the kill switch: who decides what counts as a risk, and where should the thresholds be set?

Connect with 555vCTO.com to see if your product is technically feasible

You don’t need to be building AI to benefit from the services of a fractional CTO. The experts at 555vCTO.com can help you assess the technical feasibility of your product, whatever it might be: how quickly your idea can get off the ground, what resources are available, and (you may not want to hear this) whether you should build the product at all. It’s a tough question, but one every entrepreneur needs to answer before time and money are wasted.
