Unpacking the EU AI Act

(don't worry, won't be boring)


And it’s 108 pages long 🫠

Enjoy the full read, or just skim the summary below.

I promise I’ll cover the essentials and it will take you 3 minutes tops.

Soooo, first things first.

A bit of context

Just to remind you: the EU is the world's largest trade bloc. When it speaks, everyone else listens.

With that comes power - and the ability to set standards for the world to adopt and follow.

For example: in 2016, the EU adopted the GDPR.

Issued as a regulation rather than a directive, it became the de facto global standard for harmonising privacy laws.

The AI Act could do the same and ensure fairness, safety, and transparency in the use of AI.

Which is why in March 2024, the European Parliament officially adopted the AI Act.

Inside the Act 

The EU AI Act classifies AI systems based on the risk they pose to society, broadly defining four categories.

Let’s unpack each.

Unacceptable Risk

This includes all AI systems considered a clear threat to people's safety, livelihoods, and rights. These systems are banned outright. Examples include:

  • Social scoring systems

  • Predictive policing (systems that predict criminal behavior)

  • Manipulative AI

High Risk

The EU AI Act is mainly concerned with high-risk AI systems (even though these are not the majority of systems in use today).

Small businesses in sectors like HR tech, fintech, or health tech should be particularly attentive to these regulations.

The Act specifically targets AI systems used in critical infrastructure, education, safety components of products, essential services, law enforcement, and migration. It also covers systems used for recruitment, HR, credit scoring, medical devices, diagnostics, and more.

This category will be highly regulated and subject to strict obligations, including:

  • adequate assessment for risk & mitigation

  • high quality of datasets & security measures

  • logging & reporting of activity and results

  • detailed documentation & human oversight

Limited Risk

This category refers to the risks associated with a lack of transparency in AI usage and is probably the one most relevant to you. It includes chatbots, AI-generated content, customer-service solutions, deepfakes, etc.

The Act introduces specific transparency obligations to ensure humans (us) are informed when AI (the robots) is used.

You know, to foster trust and stuff.


Providers will have to disclose when an interaction is held with AI and ensure that AI-generated content is identifiable. Adherence to existing data protection laws like the GDPR is also mandatory.

If in doubt, just add a simple note that you used AI (if you did).
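In practice, that note can be as simple as one line appended to any AI-generated text. A minimal sketch (the helper name and disclosure wording are my own, not prescribed by the Act):

```python
# Hypothetical helper: append a plain-language disclosure to AI-generated
# content so readers know AI was involved, in the spirit of the Act's
# transparency obligations. The exact wording is an assumption.

AI_DISCLOSURE = "Note: this content was generated with the help of AI."

def with_ai_disclosure(content: str, ai_generated: bool) -> str:
    """Return the content, adding a disclosure note if AI was involved."""
    if not ai_generated:
        return content
    return f"{content}\n\n{AI_DISCLOSURE}"

print(with_ai_disclosure("Welcome to this week's issue!", ai_generated=True))
```

Nothing fancy, but it is the kind of lightweight, consistent labelling the transparency tier is asking for.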

Minimal or No Risk

This category covers systems that present minimal or no risk to society, like AI-enabled video games or spam filters. For example, when Netflix recommends a movie based on the others you have watched.

The vast majority of systems currently used in the EU fall into this category. No specific regulations or obligations apply - but ethical use is strongly promoted.

What does this mean for you? 

By now, you have probably realized that most (and the heaviest) obligations won't fall on you unless you sit in the high-risk category. But once the Act is in force, it will apply to us all.

There are certain things you can do to make sure you keep on top of regulations and stay compliant.

  • Conduct regular audits to check for compliance gaps.

  • Make sure to stay transparent about the use of AI.

  • Stay informed about regulatory changes. Make sure to check reliable sources.

  • Consult with legal and compliance experts.

  • Use AI tools that are pre-certified for compliance.

  • And most importantly, foster a compliance culture within your organization.

Transparency, Trustworthiness, Accountability - this is the mantra of the EU AI Act.

What if you don’t comply? 

The penalties for non-compliance are structured in tiers, with the highest fines reserved for the most serious violations. The maximum penalties are:

  • Up to €35 million (or 7% of total worldwide annual turnover, whichever is higher) for non-compliance concerning prohibited AI.

  • Up to €15 million (or 3% of total worldwide annual turnover, whichever is higher) for non-compliance with other requirements.

  • Up to €7.5 million (or 1% of total worldwide annual turnover, whichever is higher) for providing incorrect, incomplete, or misleading information.
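To make the tier arithmetic concrete, here is a rough sketch of how the caps scale with turnover. It assumes each cap is simply the higher of the fixed amount and the turnover percentage; actual fines are set by regulators case by case, so treat this as illustration, not legal advice:

```python
# Illustrative sketch of the EU AI Act's tiered maximum fines.
# Assumption: the cap is the greater of the fixed amount and the
# percentage of worldwide annual turnover.

def max_fine(annual_turnover_eur: float, tier: str) -> float:
    """Return the maximum possible fine (EUR) for a given penalty tier."""
    tiers = {
        "prohibited_ai": (35_000_000, 0.07),        # banned practices
        "other_obligations": (15_000_000, 0.03),    # other requirements
        "incorrect_information": (7_500_000, 0.01), # misleading info
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * annual_turnover_eur)

# A company with €1 billion turnover: 7% (€70M) exceeds the €35M floor.
print(max_fine(1_000_000_000, "prohibited_ai"))  # 70000000.0
```

The takeaway: for large companies the percentage dominates, so the effective ceiling grows with revenue.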

What does it cost to be compliant?

The Act does consider company size: it sets lower thresholds for small-scale providers, start-ups, and SMEs to protect their economic viability.

The same goes for penalties: the emphasis is on encouraging compliance rather than on punitive measures, especially for SMEs.

In summary, the EU AI Act is a major step toward regulating AI to ensure it's used ethically and responsibly. For business owners, understanding these regulations is crucial to navigate the evolving AI landscape successfully.

Stay informed, ensure compliance, and leverage AI responsibly to benefit your business.

If you are not a member of the AI Leadership Forum, join today and become part of the next cohort of applicants!


This is all for this week.

If you have any specific questions about today's issue, email me at [email protected].

For more info about us, check out our website here.

See you next week!