Mission Impossible
If wishes were fishes we'd all cast nets.
The President of the United States issued an Executive Order titled 'Preventing Woke AI in the Federal Government', which states: 'Artificial intelligence (AI) will play a critical role in how Americans of all ages learn new skills, consume information, and navigate their daily lives. Americans will require reliable outputs from AI, but when ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output.'
The document also directs federal agencies to procure AI technology - specifically Large Language Models (LLMs) - based on 'Unbiased AI Principles' such as 'Truth-seeking' and 'Ideological Neutrality'.
I will leave it to others to discuss the meaning of these words - unbiased, true, and neutral. From a technological point of view, however, this is impossible to implement: the technology, as designed and built, has no capability to deliver on that requirement.
Some of my dear readers might recall my TEDx talk In AI We Trust – But Should We?, which I wrote back in September 2024 and delivered in January 2025. I was concerned that by the time it appeared on YouTube the content would be obsolete and the world would have moved on. Quite the contrary: it is as relevant and accurate as ever.
Here are some of the key points for your consideration:
LLMs are trained on a volume of content so large that verifying it for accuracy is impossible. All the AI companies and their content suppliers crawl the Internet and indiscriminately collect information to be used for model training. An example? My post about 'vegetative electron microscopy'.
Nobody - not even the people building these models - knows how they work. These models are black boxes which produce a probabilistic answer to any question.
As the name suggests, Large Language Models are language models, not knowledge models. They predict likely sequences of words, so by definition they are not capable of producing a truth or a lie (see the sketch after these points).
Nobody knows how to make these models forget. As new, accurate information appears and is fed into the model, there is no reliable mechanism to make the model 'forget' the old information or replace it with the new. Humans forget all the time, yet we don't know why or how we do it - so we cannot engineer it into a machine.
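To make the probabilistic point concrete, here is a minimal sketch in Python. The prompt and the probabilities are invented for illustration - a real model scores tens of thousands of tokens - but the mechanism is the same: the answer is a draw from a distribution, not a lookup of a fact.

```python
import random

# Hypothetical next-token probabilities a model might assign after the
# prompt "The first person to walk on the Moon was"; invented numbers.
next_token_probs = {
    "Neil": 0.86,
    "Buzz": 0.09,
    "Lance": 0.04,   # plausible-sounding, but wrong
    "nobody": 0.01,
}

def sample_token(probs: dict[str, float]) -> str:
    """Sample one token: the model scores likelihood, not truth."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Ask the same "question" ten times; each answer is a draw from a
# distribution, not a lookup in a knowledge base.
print([sample_token(next_token_probs) for _ in range(10)])
```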
And that's where the impossibility of compliance with the order lies.
For a model to be compliant, it would have to be trained only on information which is unbiased, true, and neutral. That could be achieved either by having unbiased people vet every piece of information (such people don't exist, and as the back-of-envelope calculation below shows, the volume rules it out anyway) or by building an unbiased vetting machine - a Catch-22, because we don't know how to build one.
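To get a feel for the scale of the vetting problem, here is that back-of-envelope calculation. Every figure is an assumption - ballpark numbers commonly cited for frontier models - and even generous adjustments don't change the conclusion:

```python
# Back-of-envelope: what vetting 'every piece of information' means at scale.
# All figures are assumptions for illustration only.
training_tokens = 15e12       # ~15 trillion tokens, a ballpark for frontier LLMs
words_per_token = 0.75        # rough tokens-to-words ratio for English text
reading_speed_wpm = 250       # brisk adult reading speed, with no time to fact-check
minutes_per_year = 60 * 24 * 365

person_years = training_tokens * words_per_token / reading_speed_wpm / minutes_per_year
print(f"{person_years:,.0f} person-years just to read the corpus once")
# Roughly 85,000 person-years, before a single fact is actually checked.
```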
Next, there has to be a standard against which you can assess a system's compliance. People at the Office of Management and Budget will be responsible for issuing guidance on how to procure such a system. If the assumption is that there will be a standardized test, that effort is already doomed. You might recall another post of mine, 'Teaching an old monkey new tricks', documenting how OpenAI cheated on a math test created by Epoch AI.
By disclosing the test, you allow all the LLM companies to train their models on the 'correct' answers - a leak that is trivially easy to exploit, as the sketch below shows. The other option would be a black-box test, where the models face unknown questions with undisclosed answers. Mission impossible.
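For illustration, here is a toy version of the kind of n-gram overlap heuristic researchers use to detect benchmark contamination. The strings, the 8-gram window, and the function names are all made up for this sketch; the point is how trivially a disclosed test leaks into training data:

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Word-level n-grams; 8-gram overlap is a common contamination heuristic."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(training_doc: str, test_item: str, n: int = 8) -> bool:
    """Flag a test item whose exact wording already appears in training text."""
    return bool(ngrams(test_item, n) & ngrams(training_doc, n))

# Once a benchmark is public, its exact wording ends up in web crawls:
benchmark_q = "What is the smallest prime p such that p squared plus two is also prime"
crawled_page = ("Leaked answer key: what is the smallest prime p such that "
                "p squared plus two is also prime ... the answer is 3")
print(looks_contaminated(crawled_page, benchmark_q))  # True
```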
The real conundrum for the AI companies vying for these lucrative government contracts is that at least one person in each company will have to sign off on a guarantee that the system is compliant. Read any of these vendors' terms and conditions: they explicitly state that they are responsible for nothing.
I used all of the above to build my case that this is not what technology is for - resolving issues which only people can resolve through dialogue and debate. Sadly, OpenAI has already demonstrated that it can make its technology do whatever it wants with zero transparency and accountability. It built into the system the ability to dodge uncomfortable questions or even pretend that certain things or people don't exist.
Perhaps the biggest irony of all is that we got to this point precisely because of companies like OpenAI. Maybe the next versions of ChatGPT will come in 'woke' and 'non-woke' editions. I can't wait for Mr. Altman's product announcement.
The recurrent pattern? In the pursuit of truth, the technology will achieve exactly the opposite. But maybe, just maybe, we will end up with technology which we can trust.