Kodex - an AI Utopia

A couple weeks ago, I got contacted on LinkedIn by people from Kodex to join their network.

What is Kodex building? In its own words: 'Kodex Network, the Bias-Free Moderation and AI Oversight.'

How does it work? 

'Experience efficient, decentralized content moderation with our AI-human hybrid system. Powered by KODA tokens, it ensures cost-effective management and a bias-free online community environment.'

The short whitepaper, dubbed Litepaper, provides an overview of the project, the reason for its inception, and the solution to the problem.

The overall goal is to create a nirvana where people can enjoy their interactions, protected by bias-free AI trained by well-meaning humans. The idea is to deliver moderation free of anything bad. All of this is done under the governance of a Decentralized Autonomous Organization (DAO), whose members vote on and set the community guidelines.

Why would anyone try to build something like this?

Because current moderation systems are centralized in the hands of a few tech behemoths where employees decide what's acceptable and what is not.

The idealism is commendable, but their story lacks coherence and rests on misplaced faith in technology.

The stated vision, 'Bringing order and freedom back to harmony,' is odd.

Tell me, in which period of humanity were order and freedom ever in harmony?

Which civilization mastered this utopia?

Kodex thinks it has.

It says 'Kodex is at the vanguard of a balanced digital society where freedom, order, and AI harmonically coexist.' The omnipresent, transcendent AI, of course, supported by democratic process.

The design of the technology seems well thought through, as the founders bring strong technical pedigrees.

The AI models will be contributed by developers from around the world and are designed to self-improve. Blockchain will be used for accountability and trust, while Kodex guarantees security and transparency.

While AI and LLMs in particular are advancing at a rapid pace, human moderators will be required for the foreseeable future to moderate what AI might miss.

People will be essential for making sure that AI aligns with human values. And to help those humans manage the volume of content to be verified, there will be yet other AI systems assisting them.

Side note on the security claim: I would be extremely careful with the word 'guarantee.' Also, for fun, you can read the research paper 'Improved Techniques for Optimization-Based Jailbreaking on Large Language Models,' where the attacks achieved a nearly 100% success rate...

The irony in all this is that Kodex declares itself the guardian of moderation under democratic principles. But only its members will decide for everyone else what is acceptable language and what is not.

In case you are not familiar with the DAO concept, as with everything designed by humans, it has its shortcomings. An example: everyone is equal, but some are more equal than others. Each person or entity's voting power is based on the number of tokens they own.
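The mechanics are easy to sketch. Below is a minimal, hypothetical illustration (the names, token amounts, and `tally` function are mine, not anything from Kodex's actual design) of why token-weighted voting lets a large holder override everyone else:

```python
# Hypothetical sketch of token-weighted DAO voting, not Kodex's real contract.
# Voting power scales with token holdings, so large holders dominate outcomes.

def tally(votes: dict[str, bool], balances: dict[str, int]) -> bool:
    """Return True if token-weighted 'yes' votes outweigh 'no' votes."""
    yes = sum(balances[voter] for voter, choice in votes.items() if choice)
    no = sum(balances[voter] for voter, choice in votes.items() if not choice)
    return yes > no

# One 'whale' holding 1,000 tokens outvotes 99 members holding 5 each.
balances = {"whale": 1000, **{f"member{i}": 5 for i in range(99)}}
votes = {"whale": True, **{f"member{i}": False for i in range(99)}}
print(tally(votes, balances))  # True: 1000 'yes' beats 495 'no'
```

One wallet, one thousand tokens: the proposal passes no matter how the other ninety-nine members vote.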

And since in technology we trust, there will be numerous AI systems watching each other, together with humans.

It will be these people, properly trained by other unbiased humans according to the unbiased rules voted on by DAO members, who ensure unbiased moderation for the greater good, where order and freedom are in harmony. Does this sound biased?

If the founders of Kodex are so bent on delivering orderly freedom to people, why don't they design a system where each person can decide on their own what is acceptable for them and what is not?

The recurrent pattern? Why do we always think that we know what's good for everyone else? Now we are adding to the delusion that AI will fix our bad social habits.
