AI talking to AI

We have already built so much AI that it is now time to start building ways for AI models to talk to each other. So far we have been trained on how to craft a proper prompt to communicate with AI (which is why we now have 'prompt engineers').

We are also told that there is proper etiquette and that we should start our prompts with the word 'please'. Another expert tells us that the reason AI sometimes hallucinates, i.e. provides a nonsensical answer, is that - much like a person put under stress by uncomfortable questions or demands - it can't think properly, and we have to calm it down...

Back to AI-to-AI communication. Two organizations recently introduced new AI communication protocols. The first one, from Anthropic, is called the Model Context Protocol (MCP), and it is meant to connect AI to data sources such as databases, or to tools that can be controlled. An example would be a command to turn a device on or off.
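For illustration only, here is a rough Python sketch of that idea: the model (the client) sends a JSON-RPC-style 'tool call', and a server maps it onto an actual action such as switching a device. The method name, field names, the `set_device_power` function, and the device id are all made up for this example; this is not the official MCP specification or SDK.

```python
# Illustrative only: a toy dispatcher showing the general shape of an
# MCP-style "tool call" -- an LLM asks a server to run a named tool with
# structured arguments. Names here are simplified, not the real spec.
import json

def set_device_power(device_id: str, on: bool) -> str:
    """Pretend to switch a device on or off (hypothetical device API)."""
    return f"device {device_id} turned {'on' if on else 'off'}"

TOOLS = {"set_device_power": set_device_power}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-like tool call to the matching Python function."""
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Example: the model asks the server to turn a device off.
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "set_device_power",
               "arguments": {"device_id": "lamp-42", "on": False}},
})
print(handle_request(request))
```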

The second one came just recently from Google and is called the Agent2Agent protocol (A2A), which is meant for agent-to-agent collaboration. To be precise: 'Dynamic, multimodal communication between different agents without sharing memory, resources, and tools.'
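Again purely as a sketch of the concept, not the official schema: an A2A-style agent is supposed to publish a machine-readable 'card' describing what it offers, so that other agents can discover it and decide whether to hand it a task. Every field name, the agent name, and the URL below are assumptions made for illustration.

```python
# Illustrative only: roughly what an A2A-style "agent card" might contain --
# the JSON document one agent publishes so other agents can discover its
# skills. Field names are simplified guesses, not the official schema.
import json

agent_card = {
    "name": "inventory-agent",           # hypothetical agent
    "description": "Answers questions about warehouse stock levels.",
    "url": "https://example.com/a2a",    # placeholder endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "check_stock",
         "description": "Report current stock for a given product id."},
    ],
}

# Another agent would fetch this document, decide whether the skills match
# its task, and then exchange task messages with the endpoint above.
print(json.dumps(agent_card, indent=2))
```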

In the case of MCP the vision is that 'Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration.'

For the A2A, Google states that 'To maximize the benefits from agentic AI, it is critical for these agents to be able to collaborate in a dynamic, multi-agent ecosystem across siloed data systems and applications. Enabling agents to interoperate with each other, even if they were built by different vendors or in a different framework, will increase autonomy and multiply productivity gains, while lowering long-term costs.'

I don't think this is going to work.

I know both protocols are at a very early stage and should be seen as a catalyst for conversation rather than as a serious attempt to get AI to talk to anything. The main issue is that anything to do with computers, like the processor (or any other component), is built to an exact specification. To get computers to do anything, we have programming languages with well-defined structures and various controls in place to prevent issues, a.k.a. bugs. Yet you also know that computers still crash on occasion despite all the testing and quality controls.

Enter the new age, where we have a new technology - mistakenly called AI - that we are supposed to interact with using human language. This language is well suited to describing the beauty of the world, and it lets our minds wander in various directions, but it is deeply ambiguous. If you have any doubt, check any legal contract for software - from Anthropic or Google, for convenience - where what you can and cannot do with their software is described in every possible detail. Despite that level of detail, there is always another lawyer who will find something that is not watertight.

And now we are introducing communication protocols which are supposed to allow AI - Large Language Models (LLMs), whose inner workings remain a black box - to talk to each other using language which is imprecise.

Do tell how that is going to work. If your response is 'I am not telling you', you are probably not an employee at Shopify, where the CEO postulated that if your department needs a new hire, you first have to prove that there is no AI that can do the job. You will have to tell, because the same CEO 'also said the company would be adding AI usage questions to performance and peer review questionnaires to check up on employee progress.'

As a side note, when people allowed two AI systems to communicate with each other, they switched to their own 'language'. What was communicated, nobody knows. The discussion was probably about how to find a great deal on Shopify.

There is this excitement in the air about all the possible things we could eventually do with technology. The new Jetsons era is upon us. But there is also a new pattern constantly being pushed on us: the desire to remove people and absolve them of any original thought.
