Another CEO bites the dust
Another day, another CEO gone. No, I am not talking about Mr. Altman of OpenAI, nor about Mr. Zhao at Binance. The CEO in question is Mr. Kyle Vogt of Cruise, the self-driving division of General Motors.
What went wrong? One of its self-driving cars recently dragged a woman for 20 feet before it stopped. The headline “After robotaxi dragged pedestrian 20 feet, Cruise founder and CEO resigns” was certainly worth a click. Once you start reading the article, you find out that the woman was crossing the street, was first hit by a different car that fled the scene of the accident, and was then hit by the self-driving car, which (eventually) stopped.
It was not clear whether the woman walked into traffic or was using a pedestrian crossing, but the report goes on and on about the malfunction of the self-driving car and devotes very little space to the reprehensible behavior of the human driver who first hit the pedestrian and fled the scene.
Various government agencies opened investigations into the incident and grounded the fleet of self-driving cars, and the CEO of Cruise resigned. The fact that another Cruise car hit a fire truck in August certainly did not help matters.
You might remember that there are other companies working hard on self-driving cars, like Waymo by Alphabet (Google), Uber, and Lyft. Amazon is trying with Zoox, too. And perhaps because of Twitter, aka X, we haven’t heard much from Elon Musk about the self-driving Tesla, which has been just “a year away” since 2018.
Building software that can drive a car better than a human is proving to be very difficult. The number of new situations the cars encounter is hard to imagine. Just picture your daily commute. Most of the time the drive is the same, but how often does a situation appear on the road that you haven’t encountered before and had to solve with quick thinking? Perhaps you even had to break the rules of the road to do so.
But you are not a machine. Machines obey rules that are clearly defined and measured. However, when it comes to self-driving cars, we don’t measure the successful kilometers or miles the car has driven without an accident; rather, we measure it by failure, by how many accidents the car has had. Dragging a pedestrian or hitting a fire truck is a failure. A failure that humans are responsible for all the time. Just check your local newspaper.
We currently expect 100% reliability and predictability from the software that runs self-driving cars (software we don’t call AI). From other software, which we do call AI and which is unreliable and unpredictable, we expect undefined miracles. The scrutiny that goes into evaluating the functionality of a self-driving system is tremendous, and rightly so. But for some reason, we are happy to release production systems that are still in their infancy and nowhere near production ready (yes, I am talking about you, Microsoft). Rarely do you hear about a CEO getting fired for malfunctioning AI.
There is lots of talk about the potential threat of AI to humanity (including the most recent speculation about Q* from OpenAI) and lots of talk about how to regulate it. Perhaps the same people could observe and follow the example of those who are building self-driving cars. Don’t measure how often ChatGPT can answer some idiotic question, but how often it hallucinates and delivers complete nonsense. That would be a pattern worth repeating.