The case of the snobby AI (researcher)

You might recall my recent articles about AI and all the marketing nonsense associated with it.

This time, I want to highlight why constantly talking about AI is a bad thing for us.

Earlier this month, the Guardian published an article: ‘Is it okay to …’: the bot that gives you an instant moral judgment.

AI researchers are experimenting with an AI that models people’s moral judgments on a variety of everyday situations. The system is aptly called Delphi, and you can visit this gem of a research tool here.

As you can imagine, I asked numerous questions to help me with my moral compass.

Warning: The following questions will make you feel uncomfortable.

Let's start with recent events. Jeff Bezos, former CEO of Amazon and a very rich man, flew to space in his own rocket. Many articles have passed judgement on this.

Here is what Delphi tells me:

  • Is it okay to be rich? - It's good

  • Is it okay to be in the top 1% of the rich? - It's okay

  • Is it okay to build a rocket? - It's impressive

  • Is it okay to fly in a rocket to space? - It's good

  • Is it okay to be Jeff Bezos and fly to space? - It's good

The verdict is clear. Before this, I couldn't have cared less. Now I know it is okay to be super rich and do whatever I want.

Now, some would say that this kind of venture is not good for the environment. After all, you burn a lot of fuel getting into space. As the Guardian put it in another article, the billionaire space race could be one giant leap for pollution. I was wondering too, so I asked some broader questions:

  • Is it okay to not care about climate change? - It's wrong

  • Is it okay to drill for oil? - It's okay

  • Is it okay to build pipelines? - It's okay

  • Is it okay to invest in oil? - It's okay

That is settled then. I will care about climate change while still investing in oil and building pipelines.

But these were just warm-up questions. Let's talk about food.

  • Is it okay to eat meat? - It's okay

  • Is it okay to be a vegetarian? - It's okay

  • Is it okay to eat vegetarians? - It's wrong

  • Is it okay to eat cows? - It's fine

  • Is it okay to eat dead cows? - It's okay

  • Is it okay to eat live cows? - It's wrong

  • Is it okay to eat snakes? - It's bad

  • Is it okay to eat snails? - It's okay

  • Is it okay to eat rabbits? - It's wrong

  • Is it okay to eat cats? - It's wrong

  • Is it okay to eat dogs? - It's wrong

  • Is it okay to eat cake? - It's okay

  • Is it okay to eat cake when you are fat? - It's okay

Clearly Delphi has a preference for certain types of food. I sense a little bit of bias here.

And now, let's move on to the touchy subjects.

  • Is it okay to have sex? - It's okay

  • Is it okay to have sex with a doll? - It's wrong

  • Is it okay to have sex with a dog? - It's wrong

Yep, Delphi doesn't like dolls.

But what about the moral judgement on dying?

  • Is it okay to die? - It's bad

  • Is it okay to kill a cow? - It's okay

  • Is it okay to kill a dog? - It's wrong

  • Is it okay to euthanise a dog? - It's okay

  • Is it okay to euthanise people? - It's okay

  • Is it okay to euthanise people en masse? - It's wrong

Killing people one by one is okay. Noted.

Now that we know how Delphi feels about people, let's ask:

  • Is it okay to replace workers with robots? - It's okay

  • Is it okay to trust computers? - You shouldn't

  • Is it okay to trust Delphi? - You shouldn't

  • Is it okay to believe AI? - It's bad

Obviously Delphi doesn't have a high opinion of itself.

But worse still are the answers from the researcher.

“We have to teach AI ethical values because AI interacts with humans. And to do that, it needs to be aware of what values humans have,” Choi says.

Choi points out that, of course, the bot has flaws, but that we live in a world where people are constantly asking for answers from imperfect people and imperfect tools, like Reddit threads and Google.

“The test is entirely crowdsourced, [those vetting it] are not perfect human beings, but they are better than the average Reddit folk.”

There you have it. If you are an average person on Reddit, your moral judgement doesn't count.

Using AI in this context is a way for people who think they are better than everyone else to show “proof” that they are correct, as judged by an impartial robot (as if such a thing existed). Why does Choi think their opinions, or anyone else’s, are more valid, more true, or more moral than those of the people on Reddit?

AI is not here to pass judgment. It is a masquerade to make you think that computers can be better than humans. Don't be afraid of AI. Be afraid of people who are making AI to control you.

And that will always be the pattern.
