The cognitive dissonance of the Co-World

Before we can replace you, we have to help you.

It would appear that this is the approach AI companies are taking on the road to total control of humanity.

The latest is the introduction of Cowork from Anthropic.

As the prefix 'co' suggests, Cowork means that we should be doing something 'together'. Maybe these companies are realizing that, after all, this incarnation of AI won't replace humans but at best help them do something, anything. And maybe, along the way, these systems will become functional and reliable enough to replace people.

Judging by the release of Cowork, we have a long, long road ahead of us.

Unlike Microsoft's Copilot, which is described as “... your companion to inform, entertain and inspire. Get advice, feedback and straightforward answers,” Anthropic doesn't even define what Cowork is, only what it does.

Yes, there is a section at the beginning of the post titled 'What is Cowork?', but it is followed only by a description of what it does, or what you can do with it:

Cowork uses the same agentic architecture that powers Claude Code, now accessible within Claude Desktop and without opening the terminal. Instead of responding to prompts one at a time, Claude can take on complex, multi-step tasks and execute them on your behalf.

With Cowork, you can describe an outcome, step away, and come back to finished work—formatted documents, organized files, synthesized research, and more.


Take another example - Zoom with its 'Zoom AI Companion 3.0', where - as the word 'companion' suggests - the AI would like to be our friend and break bread with us. What does this AI do for you? It “... does more than save you time. It captures context, uncovers insights, and helps you deliver better work.”

These companies employ never-ending emotional manipulation to make us feel better, empower us to achieve greatness, and execute complex tasks on our behalf. The interesting thing is that none of the marketing pitches mention that their AI will replace you or make you money.

While making people obsolete is the dream/vision/delusion of the founders of AI companies, the reality on the ground suggests otherwise.

In my previous post 'AI can’t work, by design' I outlined why the current crop of AI (LLM) technology can't reliably work. The constraining factors are accuracy and security.

To add to this argument, you can read 'A Conjecture on a Fundamental Trade-Off between Certainty and Scope in Symbolic and Generative AI' which postulates that we can't have it both ways with the current design.

Either we have systems based on mathematical logic and rules, where we can have a proof of correctness, or we have generative systems with high-dimensional mappings, where the outputs are based on statistics and accuracy can never be guaranteed.

The implication is that, for the foreseeable future, any such technology will have to be highly constrained and testable within a well-defined scope. It won't run on its own; it will be there to support individual users with specific tasks. No Copilot or Cowork will do that. It's time to simplify and rethink the current systems architecture.

Let me close with the following:

Augmented intelligence (for humans)

Let’s stop trying to make machines that make decisions for us -- or at least, let's put that aside for the moment. Let’s keep building technology that helps humans to make better decisions supported by access to better information.

Here’s the new term I believe we should use for augmented intelligence: human intelligence supported with artificial intelligence. The objective is to help every person to reach their maximum potential. It's to help them to learn, be creative and be free to make decisions -- not to obey commands for their own good.

And that’s what we want. If humans ever develop a true artificial intelligence, maybe it will be a good thing. But for now, let’s use technology to empower humans to be better.


The recurrent pattern? The previous quote is from my post 'Forget About Artificial Intelligence -- We Need Augmented Intelligence', published in Forbes in July 2019 -- six years ago. AI companies will have to come to a decision: either keep spending billions on a system design that won't work, or build something useful that people can use and trust. You can't have it both ways.