AI troubles at Apple’s kingdom
Last year, Apple made a big announcement at its Worldwide Developers Conference. It introduced AI. Not Artificial Intelligence, but Apple Intelligence. As every other tech company was announcing their pivot to AI, Apple felt like it had to do something too.
It is not that Apple hasn't been working on new technologies. Its own line of M series of Systems on Chips (SoC) is loaded with processors which makes running AI faster and faster with every new version.
But it looks like the AI algorithms are not behaving as Apple expected.
You might remember that during the Developers conference, Apple announced a partnership with OpenAI and that it will use the technology to capture the imagination of its customers. What was apparent during these announcements was the absence of talking much about Artificial Intelligence - instead the term Apple Intelligence was used, but more importantly, OpenAI wasn't even part of the announcement. Its mention was in the fine print in the footnote on the page which didn't even get printed. For OpenAI it sounded like a major and very costly deal.
The list of announcements was long and it portrayed the panacea of user experience.
Then the reality settled in.
Whatever Apple wanted to do to keep up with its brand image is not working as expected or hoped. It would appear that these two words 'expectations' and 'hope' in high dosages are always present when discussing AI. It can be expected from OpenAI - a startup which lives on hype in order to attract more investment. One can directly experience it now when using Google Search, where Gemini is trying to be helpful with 'AI overview'. Unfortunately that approach is not helping Google's business model. There are no ads (at least for now) and users are not clicking on links to go to websites where they would see more ads. Google hopes that one day it can roll it into a new product. The expectations are high.
However, Apple is not a startup, nor can it run these tests. Apple is in a business where the product has to work for millions and millions of people and it has to work reliably. And add to it the brand promise of privacy and securitywhich adds extra constraints when implementing new features into the product.
Side note: Apple's decision to focus on privacy and security, and their decision to be very explicit about it, is one example of the strategic choice the company made. For many companies, when describing their strategy, they use these good sounding phrases - 'the customer is at the center of what we are doing', 'we hire the best people', etc. Do you know a company which explicitly differentiates itself with the opposite? However, when you state that privacy and security is your differentiator, and then you can't introduce a new product because of that, you know you made a choice because it hurts. You might notice I didn't use the words 'good' or 'bad' choice. That always comes up later when you start counting money in your bank account.
What Apple was promising last year was supposed to be introduced this year, but will arrive - if at all - next year.
Through this new product development, Apple is running into every weakness of the AI - the Large Language Models (LLMs). The output is unpredictable. For the same question, you might get a different, maybe very similar answer, but different. Also, you can't guarantee the accuracy of the answer. Based on their demo the Apple Intelligence will be Powerful, Intuitive, Integrated, Personal and Private. It is the last three which create the headache for Apple. In order to deliver Powerful, the AI has to have access to every single aspect of the system. It has to have access to your contacts, calendar, emails, messages, notes, and credit card, before Siri (btw: another product having its own issues) can say 'Should I call your friend about next week's birthday or should I just buy her a gift?' Apple also promised that the AI functionality will be available for developers to build millions of useful apps.
How to do that and not break your brand promise and violate user privacy is an almost unsolvable problem for now. The bonus is the security part, where so far every single LLM is susceptible to hacking.
Another data point for your consideration comes from a new article about challenges for LLMs: Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End. In the piece, AI researchers question if the current approach to keep scaling the LLMs with bigger and faster computers will get us to the (questionable) Artificial General Intelligence (AGI).
The tech world was, is and always will be the world of beautiful dreams, daring hopes and high expectations. The art is to recognize which dreams will survive the collision with reality. May that art become your recurrent pattern.