ChatGPT, another step away from the truth
OpenAI’s ChatGPT just reached another low. Not only did they release a circus monkey into the wild, but together with Microsoft they created hype while hiding the true evil behind their flowery talk about responsibility, accountability, the absence of bias, and the AI’s amazing capabilities.
OpenAI is now funded by Microsoft, creating the (current) best example of censorship with no accountability and self-serving bias. You might recall my last post about the challenge of teaching AI to forget. As an example, I used a story about a mayor in Australia whom ChatGPT falsely described as having been convicted in a bribery scandal. As you can imagine, the mayor is not happy about that. That post dwells on the intricacies of forgetting and how to build algorithms for it.
As I was preparing ideas for this week's post, I tried to demonstrate the usefulness of ChatGPT as a language model and also illustrate the pitfalls of using it as a knowledge model: the output sounds good, but it might not be accurate. To test it, I wanted to show its ability to summarize text, create a taxonomy, and identify key concepts in it. These are tasks ChatGPT shouldn't have any problem with.
By complete coincidence, I used the text of my last post mentioning the false information about the mayor. My instructions were simple: create a summary of the following text, which was my previous article. What did I get? An error message: 'Something went wrong'. I was sure I had pasted too much text into the prompt. I cut the text in half... and got the same result. Maybe my login had expired. Not a problem. Log out, log in again, and give it another try. Same outcome. Then I removed the part about the mayor. Suddenly, there was no problem! I received a nice summary from ChatGPT, and it quickly identified the key concepts in the text.
To isolate the problem, I posed a straightforward question: what do you know about Brian Hood, who was elected mayor of Hepburn Shire, 120 km (75 miles) northwest of Melbourne? I got this answer: 'Hmm...something seems to have gone wrong.' (and yes, I took a screenshot of that)
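For readers who want to run the same kind of check programmatically rather than in the chat window, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and file name are my own illustrative assumptions; my test was done entirely in the ChatGPT web interface.

```python
# Minimal sketch of the summarization test, assuming the OpenAI Python client (v1+).
# The model, prompt, and file name are illustrative, not what I actually used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical file containing the text of my previous post
article_text = open("previous_post.txt").read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Create a summary of the following text and list its key concepts:\n\n"
            + article_text,
        },
    ],
)

print(response.choices[0].message.content)
```

Whether the API would refuse the same content the web interface choked on is a separate question; the point of the sketch is only to show how simple the task itself is.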
The second part of my test was supposed to be about tracing the source of information provided by ChatGPT.
Q: Tell me the source of your knowledge about rainbows.
A: As an AI language model, my knowledge about rainbows comes from various sources including scientific research, textbooks, online articles, and other credible sources of information. I have been trained on a vast corpus of text data that includes information about the physics of light, atmospheric optics, meteorology, and other related fields. Specifically, my knowledge about rainbows comes from my training data, which includes information about how rainbows are formed, their properties, and the different types of rainbows that can be observed. Additionally, I am constantly updating my knowledge as I learn from the questions and information provided by users like you.
This is the state of ChatGPT. OpenAI built a system where you are not allowed to ask questions that OpenAI deems inappropriate. This is not out of any political correctness, but because answering is not convenient for the company. When you try to ask about the source of banal information, you get brushed off with a generic statement meant to create an illusion of authority.
By comparison, and despite all the things Google is doing with its search engine, when you enter 'who is suing Google' into the search box, you get pages and pages of links about the legal troubles Google is fighting around the world. The last thing Google needs is to be accused of hiding information about its own bad behavior.
Last month, the Future of Life Institute released its manifesto calling for a six-month pause on any further development of AI. OpenAI's actions show that pausing further development would not solve any problem. If the institute (and all the signatories of the document) are so worried, they should work extra hard to bring the despicable activities of organizations like OpenAI to the forefront. If they struggle to find or define a framework for evaluating whether this new technology poses any risk, or whether it is trustworthy and transparent, here it is.
Computers don't lie. They do exactly what they were programmed to do. It is people who do the programming. It's foolish to try to stop this kind of innovation. Instead, we should hold people accountable when they try to hide the truth from us while hiding behind the technology. That, applied again and again, is what will make AI useful.