The new rules of the game: EU AI Act

+NEWS: Grok AI to be available in Teslas from next week; OpenAI's Windsurf acquisition falls through.

TL;DR

The EU just released new guidance under its AI Act, outlining which AI practices are banned (like social scoring and real-time facial recognition), how AI systems are categorized by risk, and what’s required for compliance. Most businesses using AI will fall into “limited risk” but need to be transparent about AI use. A new voluntary GPAI Code of Practice also helps developers and users of general-purpose AI (like ChatGPT) stay compliant—covering transparency, safety, and copyright. The law kicks in August 2025, but now’s the time to audit your tools, stay informed, and adopt best practices.

There is a new playbook…

and if you’re using or thinking about using AI in your business, you’ll want to tune in.

The EU has just dropped fresh guidance on its AI Act, and it’s a big deal. Alongside that, a new Code of Practice for General Purpose AI (GPAI) has been released—think of it as a guidebook for staying on the right side of regulation if you’re working with powerful AI tools.

So, what’s changing and how does it affect you?

Let’s break it down in simple terms:

What's not allowed

Some AI practices are now flat-out banned because they’re considered too risky. These include:

  • Subliminal tricks (AI that manipulates without your knowledge)

  • Exploiting the vulnerable (targeting children, elderly, or people with disabilities)

  • Social scoring (rating people based on their behavior—big no-no)

  • Real-time facial recognition in public (with some exceptions for law enforcement)

If your business isn’t dabbling in these areas, you’re probably safe—but knowing this sets the baseline for ethical AI use.

Understanding risk categories

The EU has grouped AI into four risk levels:

  1. Unacceptable Risk – banned outright

  2. High Risk – heavy regulation (think CV scanners or credit scoring tools)

  3. Limited Risk – must be transparent (like telling users a chatbot is AI)

  4. Minimal/No Risk – like spam filters or recommendation engines

Most businesses will fall into the limited or minimal risk categories, but if you’re offering services in HR, fintech, or healthcare, pay close attention—you may be in “high-risk” territory.

Compliance if you're in a high-risk category

For high-risk systems, expect to meet requirements like:

  • Clear documentation

  • Risk assessments

  • Human oversight

  • Regular reporting

Luckily, the EU knows this can be tough—especially for smaller companies. So, they’re offering support through regulatory sandboxes (safe environments to test your tools) and reduced obligations for SMEs.

What’s the GPAI Code of Practice?

If you use tools like ChatGPT or other foundation models, here’s where things get interesting.

The GPAI Code of Practice is a voluntary guide that helps developers and businesses stay compliant with upcoming rules (taking effect August 2025).

Key points include:

  • Transparency: Know where the training data came from, what the model can do, and how it should be used.

  • Safety & Risk Management: Especially important for powerful models—think continuous testing, external audits, and reporting serious issues.

  • Copyright Compliance: Don’t use copyrighted content without permission. Avoid scraping data behind paywalls and ensure your model doesn’t reproduce protected works.

Why should you care? Because signing onto the Code can make your compliance path easier, with fewer audits and less red tape down the line.

What This Means for Your Business

Even if you're not building AI yourself, if you're using it, these changes matter:

  • Check if your tools comply (especially if you’re in HR, finance, or healthcare).

  • Be transparent—let users know when AI is involved.

  • Work with providers who follow the Code (it’s a green flag).

  • Prepare for simple documentation of how AI is used in your company.

Keep learning—regulations are evolving fast, and staying informed gives you a competitive edge.

By August 2025, the AI Act will be in full swing. But don’t wait.

If you’re building or integrating AI into your workflow, now is the time to:

✅ Audit what you’re using
✅ Clarify your risk level
✅ Choose responsible tools
✅ Document everything
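If "audit what you're using" and "document everything" feel abstract, here's a minimal sketch of what a lightweight internal AI-use inventory could look like in Python. Everything here is illustrative: the record fields, the tool names, and the `needs_transparency_notice` rule are hypothetical, and the risk labels simply mirror the four tiers described above rather than any official taxonomy.

```python
from dataclasses import dataclass

# Illustrative labels mirroring the Act's four risk tiers described above.
RISK_LEVELS = ("unacceptable", "high", "limited", "minimal")


@dataclass
class AIToolRecord:
    """One row in a simple internal AI-use inventory (hypothetical schema)."""
    name: str
    vendor: str
    use_case: str
    risk_level: str    # one of RISK_LEVELS
    user_facing: bool  # user-facing tools may trigger transparency duties

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

    def needs_transparency_notice(self) -> bool:
        # Rough rule of thumb: user-facing limited- or high-risk tools
        # (e.g. chatbots) should disclose that AI is involved.
        return self.user_facing and self.risk_level in ("limited", "high")


# Example inventory with made-up tools and vendors.
inventory = [
    AIToolRecord("SupportBot", "ExampleVendor", "customer chat", "limited", True),
    AIToolRecord("SpamGuard", "ExampleVendor", "email filtering", "minimal", False),
]

for tool in inventory:
    print(f"{tool.name}: disclosure needed = {tool.needs_transparency_notice()}")
```

Even a simple table like this makes it far easier to answer the two questions regulators (and customers) will ask: what AI are you running, and in which risk tier does it sit?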

And remember—ethical AI use isn’t just a legal checkbox. It’s a brand asset.


This is all for this week.

If you have any specific questions about today’s issue, email me at [email protected].

For more info about us, check out our website here.

See you next week!