OpenAI is anything but open

Back in the glory days, a few people got together and created a vision for a bright future: AI would be built for the benefit of humanity and released as open source, so everyone could see how it worked. The idea was to prevent the bad guys from building anything bad, because the good guys would have better AI.

That's how OpenAI was created.

From that vision, we are left with only the name. There is nothing open about OpenAI. Most likely, the word 'open' will end up in the same pile as the 'Don't be evil' motto.

Like any good high-profile startup, it is going through growing pains, just on a bigger scale. It is finding that the 'other' guys are building similar products and, hence, that it has to protect its secret sauce. It is also finding that innovating, building, and pushing its models to production is becoming very expensive, on a scale of billions, while expenses rise faster than revenue.

One could almost have predicted it would go this way. Plans for utopias where we live happily ever after and drink from the same stream as Bambi exist only to give children hope. But business is business: OpenAI has to make money, and raise even more money, to stay alive.

Another area where the word open no longer applies (if it ever did) is transparency. My dear readers might remember the post 'ChatGPT, another step away from the truth', where I wrote about ChatGPT being prevented from answering questions about a mayor from Australia. The issue then was that ChatGPT had wrongly suggested the mayor was convicted in a bribery scandal. That was in April 2023.

You would think that the good people at OpenAI would work hard at fixing a problem like this. Yes, they did: they kept expanding the list of forbidden names. Some people are on that list because their lawyers talked to OpenAI. Others, we have no idea why. And more are on that registry because… of a glitch?!?!

Glitch, you say?

The funny thing is that if your name is the same as someone's on that list, you too won't appear in ChatGPT's answers. At least we know that ChatGPT doesn't discriminate: it treats all people with the same name equally. It denies any knowledge of them all.
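That behavior is exactly what you would expect from a hard substring denylist. Here is a minimal sketch of how such a filter might work; the names and the refusal message are purely hypothetical, since the real list and matching rules are not public.

```python
# Hypothetical sketch of a hard denylist filter. All entries are made up;
# this only illustrates why everyone sharing a blocked name gets refused.
BLOCKED_NAMES = {"jane doe", "john roe"}  # hypothetical entries

def generate_answer(prompt: str) -> str:
    # Stand-in for the actual model call.
    return f"Answer to: {prompt}"

def respond(prompt: str) -> str:
    lowered = prompt.lower()
    for name in BLOCKED_NAMES:
        if name in lowered:
            # A plain substring match cannot tell one "Jane Doe" from
            # another, so every person with that name gets the same refusal.
            return "I'm unable to produce a response."
    return generate_answer(prompt)
```

Ask about any 'Jane Doe', famous or not, and the filter fires before the model ever sees the question.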

With every occurrence like this, OpenAI is building a brand of 'can't be trusted.' Under the chat box sits the disclaimer: 'ChatGPT can make mistakes. Check important info.' Now we also know it filters out uncomfortable questions.

It goes without saying that the list is secret (the famous 'security by obscurity' protocol, which, of course, never works), and I am sure there is a clause somewhere in the terms of use that prohibits trying to uncover that list through reverse engineering. That's why the company is still named OpenAI.

Fortunately for Microsoft, it made a deal with OpenAI to include OpenAI's technology in its own products.

Do the same restrictions apply to MS Copilot? In the last edition of Recurrent Patterns, 'MS Copilot. Flying straight into the mountain', I described how Microsoft is unleashing its AI agents on the corporate world, where a 'constellation of agents' will work miracles for any company.

Just imagine the same technology refusing to answer or process requests based on (uncomfortable) names or events or ... Or maybe, after you get fired from a company, the AI will refer to you as 'You-Know-Who, He-Who-Must-Not-Be-Named.'

OpenAI is painting itself into a tighter and tighter corner. With every new version released, its AI tries to do more and more, but we are getting used to this new technology, and the 'wow' is being replaced by our ever-rising expectations. The monkey's tricks are getting boring, and it is finding it harder to learn new ones.

What's worse, OpenAI, like the makers of other large language models, has a hard time implementing security. Here is a description of how to bypass the name-checking filter, which I am sure OpenAI can fix for now. And, as expected, AI hacking is going mainstream.

For your curious mind, here is a primer on securing your LLM applications against prompt injection attacks, where you can learn the basics.
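To give a flavor of what such a primer covers, here is a minimal sketch of one basic mitigation: treat user text strictly as data, keep it out of the system prompt, and flag obvious override attempts before they reach the model. The patterns and the system prompt are illustrative assumptions, not a complete defense.

```python
import re

# Illustrative patterns for common instruction-override attempts.
# A real defense needs far more than a few regexes.
OVERRIDE_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"you are now",
    r"reveal (the|your) system prompt",
]

def looks_like_injection(user_text: str) -> bool:
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in OVERRIDE_PATTERNS)

def build_messages(user_text: str) -> list[dict]:
    if looks_like_injection(user_text):
        raise ValueError("possible prompt injection detected")
    # Keep instructions and user data in separate roles; never concatenate
    # user input into the system prompt itself.
    return [
        {"role": "system",
         "content": "You are a support assistant. Answer only from the product FAQ."},
        {"role": "user", "content": user_text},
    ]
```

Trivial to evade, of course, but it shows the principle: the user's words should never be able to rewrite the rules the model runs under.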

The bonus is that you can start practicing on any website with a ChatBot or other entry field with the invitation, 'How can I help you today?'

The recurrent pattern? Think twice before you name your new company 'Open', 'For The People' or 'Trust Us'. It might be hard to keep the brand's promise.
