Shocking, positively shocking
We are almost at the end of the year 2025 and Mr. Musk, CEO of Tesla, must be shocked. And I mean positively shocked. Why? Because he proclaimed that by the end of this year, “... self-driving software would be able to drive Tesla vehicles without human supervision.”
What Tesla has delivered so far is the next version of its Full Self-Driving (Supervised), or FSD, software, which was released just a few days ago. The Internet lit up with these headlines:
Elon Musk Sets Self-Driving Tesla Robotaxi Countdown To Three Weeks
Tesla unsupervised FSD milestone 'very close,' Piper Sandler says
Tesla's Full Self-Driving Blew Me Away. I Still Wouldn't Buy It
The new software has received lots of positive feedback, but there is always a cautionary note from reviewers: “The car does the safest thing in most situations, most of the time. Sometimes, however, it will get it way, way wrong. Trouble is, because you don’t know how it really works, you probably won’t see these moments coming. This means it requires constant vigilance, something that untrained drivers facing misleading marketing just aren’t equipped for.”
You also get reports from “influencers,” like this one on YouTube: “Tesla's New Self Driving Update Feels Illegal.”
Before we get to a few excerpts from the video, please note that a simple definition of a fully autonomous vehicle is “hands-free and eyes-free.”
Here are the nuggets:
“The system has evolved and it essentially stops nagging you for holding your phone.” - Yes, while previous versions monitored whether you were paying attention to the road and not checking your phone, this no longer applies.
“It [holding a phone and texting] feels wrong and in most places it is still illegal. But Elon Musk says this new behavior depends entirely on context, whatever that means.” - Well, if the CEO says that within the right context you can do whatever you want, why not?
“If you have to have two people side by side, texting in their car, would you rather them be on FSD or off of FSD?” A weird question to begin with, but to support the argument that it is OK to text while using FSD, here is another nugget:
“It's not exactly Tesla's responsibility to stop people from texting and driving... if I don't wear my seatbelt in my car and I get a ticket for that or I go flying out of the car and hurt somebody else because my body went, you know, flinging into their vehicle, the manufacturer, Ford isn't responsible.” His justification that it’s OK to text while using FSD goes on and on, briefly interrupted by “Oh, we slipped on some ice there and the car changed its mind” - I think the author just made his point.
While it is interesting to read and listen to what others have to say, it is also useful to provide space for what Tesla has to say about the safety of its cars.
Here is the official website for the Full Self-Driving (Supervised) Vehicle Safety Report - https://www.tesla.com/en_ca/fsd/safety
First, it shows you a video with many examples where Tesla was able to avoid an accident. There is a counter of how many miles have been driven - we are in the billions now. It then goes on to calculate the potential number of lives that could be saved and injuries that could be avoided. That is accompanied by a section about the methodology used to calculate the numbers and the Frequently Asked Questions.
A side note: the assumptions about saved lives are framed entirely from the driver's perspective. Nobody ever talks about the pedestrians.
It fits nicely with the saying “there are lies, damned lies, and statistics.” Depending on your point of view, you can find it either highly accurate or very misleading. One interesting answer addresses the question of why Tesla reports collisions under different reporting rules than other car companies. Here is the explanation: “The consequence of the SGO’s reporting scheme suggests higher absolute collisions among Tesla vehicles than for every vehicle manufacturer combined; however, the reality is that Tesla reports more collisions simply because we have a large, active and fully connected customer vehicle fleet.”
While we debate what level of accuracy is acceptable when using technology like ChatGPT, with its disclaimer “ChatGPT can make mistakes. Check important info,” we implicitly accept that when getting into a car we might get into an accident. Getting into a self-driving car, you get an explicit number: on average, you might go three years without crashing. The question is - what number will make you feel safe?
So here we are at the end of 2025. We don't have self-driving Teslas. We don't have self-driving robotaxis. What we have is a shocked Mr. Musk and a recurrent pattern of unfulfilled promises. I'll report back next year about the progress.