The EU AI Act. An orgy of bureaucracy

They've done it. Job well done. It is time to celebrate the victory of AI regulation. The European Parliament has approved the Artificial Intelligence Act. The purpose of the Act? To ensure safety and compliance with fundamental rights while boosting innovation.

It was the outcome of many years of legislative effort, which brought us other gems like the Coordinated Plan on Artificial Intelligence 2021 Review, the 'Excellence and trust in artificial intelligence' white paper, or the AI Pact, to name a few from a very long list.

The one piece missing? A definition of what AI is.

We learn that 'the EU can develop an AI system that benefits people, businesses and governments' or that 'The AI Act ensures that Europeans can trust what AI has to offer.'

Yes, the AI Act acknowledges that 'Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.' It highlights the reason why this Act is supposedly required.

The proposed rules will:

  • address risks specifically created by AI applications;

  • prohibit AI practices that pose unacceptable risks;

  • determine a list of high-risk applications;

  • set clear requirements for AI systems for high-risk applications;

  • define specific obligations for deployers and providers of high-risk AI applications;

  • require a conformity assessment before a given AI system is put into service or placed on the market;

  • put enforcement in place after a given AI system is placed on the market;

  • establish a governance structure at European and national level.


Why do I think that this whole thing is nonsense?

Since there is no clear definition of which technology falls under the label of AI, how can you legislate it? Rather than defining the technology, the Act defines risk levels, from unacceptable to high, limited and minimal. Among the high-risk areas are critical infrastructure, educational or vocational training, and the administration of justice and democratic processes.

And since there is no definition of AI, it is only fitting that the Act specifically states its aim: 'A solution for the trustworthy use of large AI models', because 'More and more, general-purpose AI models are becoming components of AI systems.'

Of course, the bureaucrats did comprehend that they will have to keep moving the unattainable goalposts over time. As such, they've built in '... a future-proof approach, allowing rules to adapt to technological change.' And to counter the negative effects of the Act, an 'AI innovation package to support Artificial Intelligence startups and SMEs' was launched.

But the real gem is the creation of another monstrosity: 'The European AI Office, established in February 2024 within the Commission, oversees the AI Act’s enforcement and implementation with the member states.'

Here is the thing. If you replace the term 'AI' with the term 'technology', you realize that we have been surrounded by dangerous AI for a long time, and somehow we have managed.

As an example, the Act lists as unacceptable the 'Cognitive behavioural manipulation of people or specific vulnerable groups'. Does that mean that in the future any TV commercial promoting beer will be banned? Am I being manipulated, or am I an alcoholic of my own accord? And I thought we already had plenty of regulations in place for this.

While the Act uses the generic term AI, it does provide real examples of what has to be regulated and why. It mentions the famous ChatGPT, which 'will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law', unlike the '...advanced AI model GPT-4, [which] would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission...'. Why? What counts as a 'serious incident', and what are the criteria for a 'thorough evaluation'? Who knows! The Act demands transparency from AI, not from the Act itself.

While the bureaucrats are high on their own achievements, let me offer simpler examples of how current, non-AI technology is making our lives miserable. How many times have you encountered a situation where a person who was supposed to provide you a service said, 'Can't do it, the computer doesn't allow it', and you had absolutely no recourse? It gets worse when you ask a bank for a mortgage and get a 'no' without any explanation. And if that's not enough, consider that there are systems in use right now which determine whether you, as a prison inmate, are eligible for parole based on your risk profile. We are not talking about some future evil AI. We are talking about technology used today.

Computers keeping bad people in prison and losers not getting their mortgages approved got a little buzz a few years ago, when 'bias' was a hotly debated topic. But compared to AI, it is nothing; and because it is not AI, who cares about it, or about them.

It is easy to get excited about a flashy new thing, where we can start controlling something we don't even have a definition for. It reminds me of sci-fi movies from galaxies far, far away. Demanding transparency and accountability from the technology that is already in place and affecting our lives is too boring, and not worth the effort of the Galactic Senate.

The recurrent pattern? It is always easy to ban things which you don't understand.
