What AI Leaders Need to Know about AI Risk

Peter Dyson is Head of Cyber Risk Analytics and Modelling at Kovrr, where he leads research and innovation in cyber risk quantification. He has led landmark research, including the Fortune 1000 Cyber Risk Report, which provides critical benchmarks for understanding cyber risk materiality at enterprise scale.

Peter recently delivered an expert seminar to our group of AI leaders. His practical briefing cut through the noise, clarifying what AI risk really is, why it matters now, and how incidents unfold.

Want the language and confidence to lead a conversation on AI risk inside your organization? Peter shared the right questions to ask, how to set expectations, and how to guide your organization responsibly.

Three types of AI risk exposure

When your CFO asks "where's our AI risk?", use Peter Dyson's framework:

Direct AI - "AI we've officially adopted"

Shadow AI - "AI employees are using without our knowledge"

Silent AI - "AI quietly making our existing risks worse"

The money conversation

Your board thinks in dollars and probabilities.

Instead of: "We have significant AI risk exposure."
Say: "Our AI risk exposure could exceed $10 million with a 5% probability this year."

Peter's approach: Build scenarios, assign costs, run simulations. Then you can say things like "These five controls will reduce our average annual risk by $3 million for an investment of $500k."
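
To make that loop concrete, here is a minimal Monte Carlo sketch of build-scenarios, assign-costs, run-simulations. The scenario names, frequencies, and loss ranges are illustrative placeholders, not figures from Peter's talk; a real model would calibrate them against incident data.

```python
import random

# Hypothetical AI risk scenarios: (name, annual frequency, min loss $, max loss $).
# All numbers are illustrative placeholders, not calibrated estimates.
SCENARIOS = [
    ("Prompt injection -> data exfiltration", 0.10, 500_000, 20_000_000),
    ("Shadow AI data leak", 0.30, 100_000, 5_000_000),
    ("Biased output -> regulatory fine", 0.05, 250_000, 15_000_000),
]

def simulate_annual_loss(scenarios, rng):
    """One simulated year: each scenario fires at most once, with probability
    equal to its annual frequency (a crude Bernoulli model), and severity
    drawn uniformly from its loss range."""
    total = 0.0
    for _name, freq, lo, hi in scenarios:
        if rng.random() < freq:
            total += rng.uniform(lo, hi)
    return total

def run(scenarios, trials=100_000, seed=42):
    rng = random.Random(seed)
    losses = [simulate_annual_loss(scenarios, rng) for _ in range(trials)]
    average = sum(losses) / trials
    p_exceed_10m = sum(loss > 10_000_000 for loss in losses) / trials
    return average, p_exceed_10m

average, p_exceed_10m = run(SCENARIOS)
print(f"Average annual loss: ${average:,.0f}")
print(f"P(annual loss > $10M): {p_exceed_10m:.1%}")
```

Run it twice, once with baseline parameters and once with the frequencies or severities a control package is expected to achieve, and the difference in average annual loss becomes the "$3 million reduction for $500k" kind of statement.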

Risk categories - the MIT taxonomy

When listing what could go wrong, consider using MIT's AI Risk Repository categories:

  • Discrimination and toxicity (bias in hiring algorithms, offensive outputs)
  • Privacy and security (data leaks, unauthorized access)
  • Misinformation (false outputs presented as fact)
  • Malicious use (your AI being used for fraud or attacks)
  • System failures (AI making wrong decisions, being unavailable)

How attacks happen - the MITRE ATLAS framework

When someone asks "but how would that actually happen?", reference MITRE ATLAS:

  • Model extraction (stealing your AI's intelligence)
  • Data poisoning (corrupting training data)
  • Prompt injection (tricking AI into doing unintended things)
  • Inference attacks (extracting training data from outputs)

Example: "The Copilot vulnerability was a prompt injection attack via email that led to data exfiltration."

Control frameworks your compliance team knows

These acronyms get immediate recognition:

  • EU AI Act - Mandatory if you operate in or sell into the EU
  • ISO/IEC 23053 - International framework standard for AI systems built on machine learning
  • NIST AI RMF - Voluntary US government framework for managing AI risk
  • NIST SP 800 series - Cybersecurity controls being extended to cover AI

The risk management options

Every risk discussion ends with "what do we do about it?" You have three answers:

Accept - "The risk is below our threshold"

Transfer - "Insurance or vendor liability covers this"

Mitigate - "These specific controls reduce the risk"

Most AI risks need mitigation. Be ready with specifics.
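
One way to be ready with specifics is to encode the three answers as a triage rule in your risk register. A minimal sketch, assuming a single dollar tolerance threshold and an `insurable` flag (both hypothetical simplifications; real treatment decisions weigh more factors):

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    annualized_loss: float  # expected loss per year in $, e.g. from simulation
    insurable: bool         # can insurance or vendor liability absorb it?

TOLERANCE = 250_000  # hypothetical risk tolerance in $ per year

def treatment(risk: RiskScenario) -> str:
    """Map a scenario to accept / transfer / mitigate."""
    if risk.annualized_loss < TOLERANCE:
        return "accept"    # below our threshold
    if risk.insurable:
        return "transfer"  # insurance or vendor liability covers it
    return "mitigate"      # specific controls must reduce it

print(treatment(RiskScenario("Shadow AI data leak", 1_200_000, insurable=False)))
# -> mitigate
```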

Phrases to avoid

Don't say:

  • "AI could destroy our business" (too vague, too dramatic)
  • "We need to control all AI use" (impossible and counterproductive)
  • "AI risk is unknowable" (it's not - we can quantify it)

Phrases that work

Do say:

  • "Our AI risk tolerance is X dollars per year"
  • "We've identified our top 5 AI risk scenarios"
  • "This control reduces our annualized risk by X%"
  • "Our exposure comes from these connected systems"

Stakeholder-specific language

To the CFO: "AI incidents at peer companies averaged $15 million in total costs, including remediation and fines"

To Legal: "We're tracking copyright, privacy, and liability exposure from both our AI use and third-party AI"

To the CISO: "AI amplifies our existing cyber risks and introduces new attack vectors like prompt injection"

To the Board: "We can reduce AI risk to acceptable levels with targeted controls costing X, preventing estimated losses of Y"