Is AI conscious? (And if so, is it laughing at us?)
People love using the word AI. Isn’t it strange that there’s still so much disagreement about what it is?
On September 22, 2021, the UK government published a document titled "National AI Strategy". Interestingly, the subtitle was "Our ten-year plan to make Britain a global AI superpower".
On February 9, 2022, Ilya Sutskever from OpenAI posted on Twitter: "it may be that today's large neural networks are slightly conscious."
On March 2, 2022, Wired magazine published an essay: "Europe Is in Danger of Using the Wrong Definition of AI".
This is just a small sample of the public conversation about AI and the elusive term Artificial Intelligence.
You are led to believe that AI is something mysterious that will become so powerful it will overtake us. Some believe we are headed toward a future where the computer will decide, and it will simply say 'no'.
In reality, there is an algorithm running on a computer, trying to perform the task it was designed for. Yes, the algorithm might be very difficult to develop. The computing power required may far exceed what powers your personal laptop. And the task might be too difficult for any one person to solve alone. But none of that means we should use terms like intelligent or smart to describe the algorithm. It is still a machine, and it takes many smart, intelligent people to build it.
I invite you to watch this video describing how to develop an algorithm that lets a computer recognize handwritten digits. The concept is simple to understand, but the video shows how much math you have to know and how hard you have to think about it.
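To make this concrete, here is a minimal sketch of the kind of forward pass such a video describes: a 28x28 image flattened into 784 numbers and pushed through one hidden layer to ten outputs, one per digit. The layer sizes and random weights below are hypothetical stand-ins; a real network learns its weights from thousands of labelled examples, and that learning is where all the hard math lives.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squash each value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical, untrained parameters: 784 inputs -> 16 hidden -> 10 outputs.
W1, b1 = rng.normal(size=(16, 784)), np.zeros(16)
W2, b2 = rng.normal(size=(10, 16)), np.zeros(10)

def predict(image):
    x = image.reshape(784)            # flatten the 28x28 pixel grid
    hidden = sigmoid(W1 @ x + b1)     # hidden layer activations
    output = sigmoid(W2 @ hidden + b2)
    return int(np.argmax(output))     # digit with the strongest output

fake_digit = rng.random((28, 28))     # stand-in for a real scanned digit
print(predict(fake_digit))            # some digit 0-9, meaningless until trained
```

Nothing in that sketch is mysterious: it is arithmetic, repeated many times. The apparent "intelligence" comes from the people who design the architecture and the training process.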
The real problem comes when we build a system where we have only a vague idea of why it behaves the way it does, and then start using it in a context it wasn't designed for in the first place. When we move from complicated to complex, we can no longer guarantee that a given set of inputs will produce predictable outputs.
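One way to see the danger: a classifier that ends in a softmax will always return an answer, even for an input far outside anything it was built for. The toy illustration below uses random numbers as a made-up stand-in for a real model's raw output scores; the point is only that the machine has no built-in way to say "I don't know".

```python
import numpy as np

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(scores - scores.max())
    return e / e.sum()

rng = np.random.default_rng(1)

# Pretend these are a model's raw scores for an input it was
# never designed to handle.
scores = rng.normal(size=10) * 5
probs = softmax(scores)

print("answer:", probs.argmax())          # it still picks a class...
print(f"confidence: {probs.max():.0%}")   # ...often with high apparent confidence
```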
Academics and lawmakers continue to argue about what constitutes AI and how to classify any particular system and its level of intelligence, to determine which regulations it should fall under. While they do that, we should focus on a simpler definition. Does the machine do what it is supposed to do? How can we tell whether it does so correctly? What is the recourse when we determine that there is an error?
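Those three questions translate directly into engineering practice: state what the machine is supposed to do, measure it, and decide in advance what happens when it falls short. A minimal sketch, with a made-up stand-in machine, invented labelled test cases, and an arbitrary accuracy threshold:

```python
REQUIRED_ACCURACY = 0.95  # arbitrary threshold for this sketch

def evaluate(predict, test_cases):
    # Fraction of labelled cases where the machine's answer matches.
    correct = sum(1 for x, expected in test_cases if predict(x) == expected)
    return correct / len(test_cases)

# Stand-in machine and labelled examples, invented for illustration.
def predict(x):
    return x % 10

test_cases = [(n, n % 10) for n in range(100)]

accuracy = evaluate(predict, test_cases)
print(f"accuracy: {accuracy:.0%}")

# Recourse: if the machine falls short, it fails its acceptance
# check and should not be used until the errors are understood.
if accuracy < REQUIRED_ACCURACY:
    raise SystemExit("machine does not do what it is supposed to do")
```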
And then we should move to more fundamental questions: Do we even want a machine to be there, making decisions for us? Are we ok with the negative impact of those decisions? Do we have the right to know how a decision was made?
I would argue that answering these questions is more important than building machines that absolve us of responsibility for our own actions and make us slaves to our own laziness. Because we know that one day AI will be really good at reinforcing this recurring pattern, and at making fun of us.