EU AI Act Essentials: What every AI leader needs to know

Did you know that breaches of the EU AI Act can result in penalties of up to €35M or 7% of worldwide turnover? Yes, it matters even if you’re outside the EU!

Insights from a private seminar with John Harper, founder of Cambridge Machine Learning and Principal AI Data Scientist at Microsoft

We recently hosted an exclusive session on the EU AI Act with John Harper, who has been conducting deep dives on the legislation with both large enterprises and startups. The session brought together AI leaders from around the world, including the UK, USA, Spain, France, and South Africa, which underscored a critical point: the EU AI Act doesn’t care where your headquarters are located.

If you’re offering AI services to users in the EU, you’re subject to the Act’s requirements and potential fines, regardless of whether you’re based in Silicon Valley, Johannesburg, or anywhere else. This extraterritorial reach makes understanding the Act essential for virtually every AI leader today.

The Stakes

The Act entered into force in August 2024, bans on “unacceptable risk” applications have applied since February 2025, and the compliance requirements are escalating rapidly. By August 2026, even limited-risk applications will need to meet transparency and disclosure requirements. Penalties can reach up to €35 million or 7% of worldwide annual turnover, whichever is higher.

Getting Your Definitions Right

John emphasized that much of the confusion around the Act stems from misunderstanding key definitions. Here are the critical ones:

Provider vs. Deployer: This distinction took John “an embarrassingly long time” to master, and it’s crucial. A provider is any organization that develops an AI system or puts one on the market under their brand, even if they’re using someone else’s model under the hood. Building a travel recommendation app using OpenAI’s API? You’re a provider.

A deployer is an organization using an AI system under its own authority in the course of a professional (non-personal) activity. Using GitHub Copilot for your development team? You’re a deployer of that technology.

You become a provider if you rebrand an AI system as your own, substantially modify an existing system, or change its intended purpose, particularly if you move it from a low-risk to a high-risk application.

General Purpose AI Models vs. Systems: The model is the underlying technology (like GPT-4). The system is the service built on top of it (like ChatGPT). Most organizations in John’s experience are building general purpose AI systems, not models.

Understanding Risk Levels

The Act categorizes AI applications into four risk levels:

Unacceptable Risk: Banned outright. This includes harmful manipulation (exploiting vulnerabilities of elderly or mentally ill users), social scoring systems, and certain biometric identification uses. If you’re in this category, there’s no compliance pathway. Just don’t do it.

High Risk: Applications affecting critical infrastructure, educational access, employment decisions, creditworthiness, or law enforcement. Think Tesla Autopilot or medical diagnostic assistants. These face extensive compliance requirements including risk management systems, data governance, technical documentation, human oversight, and regular robustness testing.

Limited Risk: Most of the attendees were likely in this category. This includes chatbots, emotion recognition, biometric categorization, and synthetic content generation. The requirements center on transparency and disclosure.

Minimal Risk: Spam filters, AI in video games, navigation tools. No specific AI Act duties, though general EU law still applies.

Your Likely Obligations

If you are among the many organizations operating chatbots or generating synthetic content, here’s what you need to know:

By August 2026, you must:

  • Clearly disclose when users are interacting with AI (before the first interaction)
  • Label synthetic content (images, video, audio) in a machine-readable way
  • Disclose any use of emotion recognition or biometric categorization
  • Ensure staff have basic AI literacy (understanding how the AI works, its stochastic nature, potential for hallucinations, and error handling)

These aren’t technically onerous requirements. A simple “You’re interacting with AI on this page” suffices for disclosure. The challenge is ensuring consistency across all touchpoints and documenting your compliance.
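To make the labeling requirement concrete, here is a minimal sketch in Python of attaching a machine-readable label to generated media as a JSON sidecar file, alongside the kind of disclosure string mentioned above. The field names (ai_generated, generator, disclosure) are illustrative assumptions rather than a schema mandated by the Act; in practice, embedding provenance metadata directly in the file (for example via a standard such as C2PA content credentials) may be closer to what auditors expect.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Shown to users before their first interaction with the chatbot.
DISCLOSURE_TEXT = "You're interacting with AI on this page."

@dataclass
class SyntheticContentLabel:
    """Illustrative machine-readable label for AI-generated media (not an official schema)."""
    ai_generated: bool
    generator: str    # the model or system that produced the content
    created_at: str   # ISO 8601 timestamp
    disclosure: str   # human-readable notice

def write_sidecar(media_path: str, generator: str) -> str:
    """Write a JSON sidecar next to a generated image, video, or audio file."""
    label = SyntheticContentLabel(
        ai_generated=True,
        generator=generator,
        created_at=datetime.now(timezone.utc).isoformat(),
        disclosure=DISCLOSURE_TEXT,
    )
    sidecar_path = media_path + ".ai-label.json"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(asdict(label), f, indent=2)
    return sidecar_path

if __name__ == "__main__":
    print(write_sidecar("hero-image.png", generator="example-image-model"))
```

The point is simply that the label travels with the content and can be read by machines; the consistency and documentation challenge described above is about applying something like this across every touchpoint that serves synthetic content.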

What About High Risk Cases?

If your AI is making decisions about employment, creditworthiness, educational access, or safety-critical operations, the compliance burden is substantial:

  • Risk management systems that identify problems in near-real time (John recommended OpenAI’s moderation API as one tool)
  • Data governance with documented lineage, quality assurance, representative datasets that manage bias across demographics
  • Technical documentation sufficient for auditable review
  • Human oversight as an intermediary for consequential decisions
  • Robustness testing including red teaming to document failure modes and mitigations

This isn’t checkbox compliance. It requires integrated systems, ongoing monitoring, and cultural change.
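As a concrete illustration of the first bullet, here is a minimal sketch of calling OpenAI’s moderation endpoint (the tool John mentioned) to flag problematic inputs before they reach your model. It assumes the official openai Python SDK (v1+) and an OPENAI_API_KEY in the environment; the model name is an assumption you should adjust, and in a real high-risk system the full category scores would feed a logged, monitored risk-management pipeline rather than a single pass/fail gate.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_input(text: str) -> bool:
    """Return True if the moderation endpoint flags the text.

    A production risk-management system would log the full category scores
    as audit evidence instead of reducing the result to a boolean.
    """
    response = client.moderations.create(
        model="omni-moderation-latest",  # assumption: pick the moderation model you actually use
        input=text,
    )
    return response.results[0].flagged

if __name__ == "__main__":
    print(screen_input("Example user message to screen before processing."))
```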

Some Nuances That Matter

Several subtleties emerged from John’s presentation:

  1. White-labeling makes you a provider: Offering an existing AI service under your brand, even unchanged, subjects you to provider obligations.
  2. Purpose matters more than technology: Moving an AI application to a higher-risk use case (even if it's the same technical system) changes your compliance requirements.
  3. Importers and distributors have roles too: If you’re an EU entity bringing a non-EU AI system to market, you’re an importer with specific obligations. This affects partnership and go-to-market strategies.
  4. Staff AI literacy isn’t optional: Even customer service representatives and technical staff who don’t touch the AI directly need basic training. This is about organizational capability, not just data science teams.
  5. Documentation is evidence: When enforcement comes (and John emphasized it’s “when” not “if”), auditors will want to see logs proving you’ve been doing compliance work, not just that you started recently.

What AI Leaders Are Wrestling With: From the Q&A

Following John’s presentation, we held a Q&A discussion. Questions from AI leaders across enterprise backgrounds, including healthcare, banking, and industrials, as well as various startups, highlighted areas where the Act’s implementation remains ambiguous or potentially problematic.

What Constitutes “Selling into the EU”?

One of the first questions cut to the heart of extraterritorial application: Does “selling into the EU” mean having a minimum of one user in the EU? Must they be a paying user? Or does simply offering your service in the EU, before any actual sale or use, trigger compliance obligations?

This isn’t a theoretical distinction. For B2B SaaS companies with global offerings, the difference between “having EU customers” and “being available to EU customers” could determine whether they need to implement compliance frameworks immediately or can defer until EU market entry becomes strategic.

The discussion clarified that if your service is accessible and marketed to EU users, you’re likely within scope, regardless of whether you’ve actually closed any EU deals yet. This “available to” interpretation means many companies already have obligations they may not have recognized.

Who determines what is harmful?

The categories of prohibited AI applications prompted significant debate around their subjectivity. Terms like “harmful AI-based manipulation and deception” raised obvious questions: Who determines what’s harmful? Who defines manipulation? Do companies bear the risk of getting this wrong, or must government prove violations before fines are applied?

One particularly interesting edge case emerged: “What if I score AI engineers on my platform based on their helpful interactions with others? Is that social scoring?” The boundary between legitimate reputation systems and prohibited social scoring isn’t clearly delineated in the Act.

John’s response emphasized that companies must assess their own risk and products. They must make a case for their classification and be prepared to provide evidence and justification when asked. Enforcement authorities can then question that justification.

His practical advice: Use the EU AI Act Compliance Checker tool, tick the boxes, and prepare to demonstrate your reasoning if questioned. Document your classification decisions now, because you’ll need to defend them later.

How many humans-in-the-loop?

The requirement for human oversight in high-risk applications generated considerable confusion about prescriptiveness. Taking John’s cancer assessment example, one participant asked: Must every single model recommendation be overseen by a human specialist? Or would random auditing of responses suffice? How prescriptive is the Act versus how much is left open to interpretation and potential litigation?

This question crystallizes a broader tension: The Act establishes principles (human oversight must exist) without always specifying mechanisms (what that oversight must look like in practice). This leaves organizations in the uncomfortable position of designing compliance approaches that could later be deemed insufficient.

John acknowledged that the Act provides frameworks rather than detailed specifications in many areas. Organizations need to demonstrate that human oversight is meaningful and effective for their specific context, not just a checkbox exercise. But exactly what “meaningful” means may only become clear through enforcement actions and case law over time.

But we didn't build the underlying AI model...?

A question that resonated across multiple participants concerned the AI supply chain: “The majority of AI wrappers use providers like OpenAI to perform tasks in their models. Though one can control what you do in your own application, you have no control over these large providers. How does the EU Act assign responsibility? Will you as Provider assume full risk even if it’s out of your control?”

This gets at a fundamental architectural reality: Most AI applications today are compositions of services. You might use OpenAI for language processing, a third-party vector database, various APIs, and your own custom logic. Where does your responsibility end and your suppliers’ begin?

The clarification from the discussion: provider roles differ at different layers of the stack, but if you offer a service, you still have obligations to put controls in place as the provider of that service. Your obligations may be less extensive than those of the underlying model provider, but you are still liable under the Act.

One participant from a banking background noted that this mirrors existing practices in regulated industries: “You must understand the components you are using.” Banks maintain libraries of approved tools and pre-approved architectural patterns. The suggestion emerged that the AI industry may move toward similar pre-approved use patterns: “a good business for someone,” as they noted.

This also raised concerns about emerging architectures. Questions about MCP (Model Context Protocol) servers and data routing illustrated the complexity: If your application sends data to MCP servers and you don’t know where those servers are located, are you creating compliance breaches?

What about my custom ML solution?

An important question addressed organizations that have developed their own ML solutions without relying on existing LLMs, training instead on proprietary or client data. How do the same rules apply?

The consensus: the same rules apply depending on your risk classification and role; the underlying technology matters less than what you’re doing with it and who you’re serving. A custom-trained model for credit scoring faces the same high-risk compliance requirements as one built on top of a foundation model. The Act is use-case and risk-based, not technology-specific. It applies to traditional ML as much as to LLM-based AI, since both involve predictions and non-deterministic outputs.

This was reassuring for some B2B-only companies trying to understand their risk profile, but it also meant they couldn’t assume their custom approach exempted them from compliance considerations.

Will this affect European competitiveness?

What effect will this have on the competitiveness of European startups? Will this create a structural headwind for any company that starts with a European market versus an American one?

One participant shared a cautionary example from Telefónica’s attempt to use aggregated social and movement data from its mobile network in the Netherlands. While not illegal, it was deemed socially unacceptable when introduced, illustrating how cultural and social expectations can affect companies beyond pure legal compliance.

Another participant, newly appointed as head of AI at a major company with products already in production, noted that regulatory processes “do slow things down. Legal, gating processes, manual steps waiting on legal reviews.”

The discussion didn’t reach a clear conclusion on this point, but the concern was palpable: European companies may face compliance burdens earlier in their lifecycle than competitors in other markets, potentially affecting fundraising, speed to market, and global competitiveness.

Which sectors will be most impacted?

When asked which sectors are likely to face the most immediate challenges under the Act, there was quick consensus: healthcare, pharmaceuticals, financial services, education, and HR top the list.

These sectors face high-risk classifications for many AI applications: medical diagnostics, drug discovery processes, credit and trading decisions, admission and grading systems, and hiring and performance management. The compliance requirements for these use cases are extensive, and many organizations in these sectors are realizing they need to retrofit governance frameworks onto existing AI deployments.

Practical Next Steps

John’s recommendations for AI leaders preparing for board and executive conversations:

Immediate actions:

  • Determine your role (provider, deployer, or both) for each AI application
  • Classify each system’s risk level using the EU AI Act Compliance Checker tool
  • Identify low-hanging fruit: disclosure statements, staff training plans, synthetic content labeling
  • Document your classification reasoning now: you’ll need to defend it later

Strategic positioning:

  • Frame compliance as competitive advantage and trust-building, not just risk mitigation
  • Start now on the August 2026 requirements; don’t wait until deadline pressure
  • Document everything: your data, your decisions, your risk assessments, your mitigations
  • Understand your supply chain: map out which components you control and which you don’t

For board conversations:

  • Lead with extraterritorial reach: this isn’t optional if you serve EU users
  • Quantify the exposure: penalties can reach €35M or 7% of worldwide annual turnover per violation
  • Present compliance as a trust differentiator in go-to-market strategy
  • Be transparent about areas of ambiguity and how you’re managing them
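One lightweight way to act on the “document your classification reasoning” and “understand your supply chain” bullets above is a simple, version-controlled register of each AI system’s role, risk level, and justification. The structure below is an illustrative sketch, not a format prescribed by the Act; the value lies in having dated, reviewable evidence of how and why each system was classified.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AISystemRecord:
    """One entry in an illustrative AI Act compliance register (not an official schema)."""
    name: str
    role: str                  # "provider", "deployer", or "both"
    risk_level: str            # "unacceptable", "high", "limited", or "minimal"
    intended_purpose: str
    justification: str         # why you believe the classification is correct
    upstream_components: List[str] = field(default_factory=list)  # foundation models, APIs, databases
    reviewed_on: date = field(default_factory=date.today)

# Hypothetical example entry for a limited-risk chatbot.
register = [
    AISystemRecord(
        name="Customer support chatbot",
        role="provider",
        risk_level="limited",
        intended_purpose="Answer product questions for customers, including EU users",
        justification="Conversational assistant making no consequential decisions; transparency duties apply",
        upstream_components=["third-party LLM API", "vector database"],
    ),
]
```

Keeping a record like this in version control gives you the dated trail John described: evidence that compliance work has been ongoing, not started the week an auditor calls.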

Tools and Resources

John highlighted two essential resources:

Both are official EU resources, not third-party interpretations.