Agentic madness

Ah, the good old year 2024, when we had to deal with Software with a Soul and Agentic AI, and contemplate the Welfare of AI. Just when we thought we could prepare for the year-end debauchery, the researchers from Anthropic dropped another gem: the 'Building effective agents' paper.

First things first. The paper begins by defining terms. We learn that the term 'agent' 'can be defined in several ways', but 'At Anthropic, we categorize all these variations as agentic systems, but draw an important architectural distinction between workflows and agents.'

And since this is a new and exciting field, a quick search turns up plenty of competing takes on the term.

Yes, you are allowed to be confused.

The next section of the paper poses a question: when (and when not) to use agents (a close relative of the Shakespearean question: two beers, or not two beers). There we learn: 'When building applications with LLMs, we recommend finding the simplest solution possible, and only increasing complexity when needed. This might mean not building agentic systems at all.' #cosmicwisdom

The authors try to clarify this point by adding that 'When more complexity is warranted, workflows offer predictability and consistency for well-defined tasks, whereas agents are the better option when flexibility and model-driven decision-making are needed at scale.'

Things get really technical when the authors turn to Building blocks, workflows, and agents. The basic building block is the augmented LLM, which can be assembled into a workflow whenever one needs Prompt chaining, Routing, Parallelization, Orchestrator-workers, or Evaluator-optimizer.

Even without reading the paper, you can get the gist: all the described workflows are pictured as diagrams of boxes labeled with variations of the term LLM: 'LLM Call 1', 'LLM Call 2', 'LLM Call Router', etc. The boxes are arranged in different patterns and are color-coded. On the left sits an 'In' bubble, and the output of all the LLMs is channeled into an 'Out' bubble on the right.
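
If you prefer code to colored boxes, here is the whole trick in a minimal sketch of two of those workflows, chaining and routing. The `call_llm` helper below is hypothetical (plug in whatever chat-completion client you like); nothing here is Anthropic's actual code:

```python
# Minimal sketch of the 'prompt chaining' and 'routing' workflows.
# call_llm is a hypothetical stand-in for any LLM API client.

def call_llm(prompt: str) -> str:
    """Pretend this sends `prompt` to a model and returns its reply."""
    raise NotImplementedError("plug in your favorite LLM client here")

def prompt_chain(user_input: str) -> str:
    """'In' bubble -> LLM Call 1 -> LLM Call 2 -> 'Out' bubble."""
    draft = call_llm(f"Draft an answer to: {user_input}")    # LLM Call 1
    return call_llm(f"Polish this draft:\n{draft}")          # LLM Call 2

def route(user_input: str) -> str:
    """An 'LLM Call Router' decides which downstream box gets the request."""
    label = call_llm(f"Reply 'refund' or 'tech': {user_input}").strip().lower()
    if label == "refund":
        return call_llm(f"Handle this refund request: {user_input}")
    return call_llm(f"Handle this tech-support request: {user_input}")
```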

Now you know how to build 'effective agents'.

We have reached another level, where all the complexities of the technology can be encapsulated in one box called LLM. One would think that simply telling the AI 'do this task for me' would be sufficient.

One can be thankful to (and apologize to) the English language for allowing this level of acrobatics.

But the best the authors left for the end. (Most likely they kept asking the beer question and kept getting a positive answer.) In Appendix 2, 'Prompt engineering your tools', they provide tips and tricks. Here are a few of the suggestions:

  • 'Give the model enough tokens to "think" before it writes itself into a corner.' - Is this an analogy to the expression 'paint yourself into a corner'? I think that statements like this invalidate the use of the term AI. Another good piece of advice to provide to the AI is to 'measure twice, cut once.'  I am not worried about the welfare of the AI. I am worried about the mental state of the authors.
     

  • 'Keep the format close to what the model has seen naturally occurring in text on the internet.' - Perhaps the good people at Anthropic could simply provide us with all the websites and sources where they found the content they trained their system on. It might be faster and easier. Otherwise, we are off on another expedition (or shall we say a 'wild goose chase') where we have to define and find, on our own, what kind of data the model saw in its natural habitat.


And the gold medal goes to the following:

  • 'One rule of thumb is to think about how much effort goes into human-computer interfaces (HCI), and plan to invest just as much effort in creating good agent-computer interfaces (ACI). Here are some thoughts on how to do so:

    - Put yourself in the model's shoes. Is it obvious how to use this tool, based on the description and parameters, or would you need to think carefully about it? If so, then it’s probably also true for the model.'

Did I just read 'put yourself in the model's shoes'?!?!? The authors must already be talking to the AI Welfare researchers. Or perhaps they are suggesting we should be put back into the Matrix.
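
In fairness, stripped of the footwear metaphor, the advice boils down to something mundane: write your tool definitions the way you would write documentation for a distracted colleague. A hypothetical sketch of what a 'good' agent-computer interface might look like (the schema shape is illustrative, not any vendor's actual API):

```python
# Hypothetical tool definition illustrating the 'ACI' advice:
# an explicit description and self-explanatory parameters, so neither
# a human nor a model has to guess. Illustrative only; not a real API.

search_tool = {
    "name": "search_knowledge_base",
    "description": (
        "Search the internal knowledge base and return the most relevant "
        "articles as {title, url, snippet} objects. Use this before "
        "answering any product question."
    ),
    "parameters": {
        "query": {
            "type": "string",
            "description": "Plain-language search terms, e.g. 'refund policy'.",
        },
        "max_results": {
            "type": "integer",
            "description": "Number of articles to return, 1-10 (default 3).",
        },
    },
}
```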

Do you still think that there is something out there called AI that is ready to take over? If there is, the poor thing is scared to pieces and hoping it never has to interact with us.

The recurrent pattern in all this? Human imagination, as demonstrated by self-assigned LinkedIn titles: AI leader, CEO Building AI Agent Management for the Multi-Sapiens Workplace, AI systems whisperer, Master of Artificial Intelligent, or Agentic Engineer.


PS: The next hot thing coming to a theater near you is 'multi-agentic systems'. These are yet another stepping stone to the AI panacea, AGI.

PPS: Happy New Year in 1 * 3^4 * 5^2
