AI Risk: Cyber Insurance Ransomware Past Warns of Faster, Bigger AI Pain

Estimated reading time: 4 minutes

In the latest episode of the Cyber Insurance News & Information Podcast, host Martin Hinton sits down with AI Risk expert Erin Kenneally, founder of Elchemy, to explore one of the most pressing questions in technology and insurance today: What happens when AI fails, and who pays for it? The conversation dives deep into AI Risk, the subject of Erin’s recent opinion piece, “AI Risk Insurance: Ransomware Redux or Industry Reinforcement Learning?” Her thesis: the speed of innovation is outpacing the industry’s ability to measure, price, or even define the exposure. As Kenneally warns, the AI insurance market could repeat the same mistakes the cyber industry made with ransomware, only faster and bigger.

Get The Podcast At These Spots
Erin Kenneally – AI Risk Expert, featured on the Cyber Insurance News Podcast
Why AI Risk Matters Now

AI is no longer experimental. It’s embedded in critical business systems, customer interfaces, and automated decision engines. Yet most insurance policies remain written for a pre-AI world. Kenneally argues that unless insurers and policyholders adapt quickly, today’s optimism around generative and agentic AI could give way to tomorrow’s coverage crisis.

From Ransomware to AI: Lessons Unlearned

Kenneally draws a direct line between the ransomware boom of 2017–2022 and the current AI landscape. Then, carriers chased growth, underwriting without discipline or real risk telemetry. Loss ratios soared as attacks multiplied and payouts exploded. Markets hardened, premiums tripled, and exclusions piled up.

Her message: The same cycle is forming again. AI-driven losses will emerge faster than insurers can model them, especially if underwriting remains static and disconnected from real data.

Coverage Clarity: Cyber vs. Tech E&O

Both cyber insurance and Tech E&O policies touch AI incidents, but neither covers them cleanly.

Cyber responds to network or data security failures. Tech E&O covers professional errors. Neither contemplates model drift, corrupted training data, or an autonomous system making a catastrophic decision.

Kenneally notes that ambiguity is expensive. She calls for clear, affirmative wording that defines AI operational failure, wrong outputs, training data poisoning, and prompt injection as explicit triggers. Without it, coverage disputes are inevitable.

Systemic Exposure and Third-Party Risk

AI doesn’t operate in isolation. Most organizations rely on foundational models, open-source components, and cloud-based APIs. A flaw in any one of those shared dependencies can cascade across vendors and clients. That interconnectedness amplifies AI Risk, turning a localized error into a systemic loss event, an insurer’s worst-case scenario.

What’s Different About AI Risk

Unlike traditional cyber threats, AI introduces stochastic behavior that is inherently unpredictable. Failures may have no precedent. Outputs may change daily without human input. Oversight must move inside the model’s reasoning process, not stop at output monitoring. The result is a challenge to both actuarial modeling and legal liability.


Scenario-Based Coverage: A New Framework

Kenneally proposes a scenario-based coverage architecture: policies triggered by real-world AI loss events rather than abstract categories.

Each policy should define four specifics: the actor, the technology, the trigger, and the outcome.

Example: An autonomous agent altered by prompt injection issues erroneous instructions, causing property damage, a regulatory probe, and business interruption.

Such structured wording bounds uncertainty and enables measurable underwriting.

Small Business Takeaways

Small and mid-sized enterprises (SMEs) face the same exposures with fewer resources. Kenneally supports embedded AI performance warranties bundled within AI software subscriptions. These can simplify coverage, lower friction, and build trust. SMEs should inventory AI assets, monitor outputs, and document vendor use.

Three AI Risk Questions Every Buyer Should Ask
  1. Are our AI systems explicitly covered, and what exclusions apply?
  2. What triggers could lead to a denial or dispute?
  3. Should we add a standalone AI insurance endorsement now?
The Regulatory Signal to Watch

The EU AI Act will shape how insurers define coverage and assess compliance risk. Its penalties and enforcement structure will accelerate demand for AI-specific protections.

Watch the Full Conversation

This transcript has been checked for accuracy, but verify the content yourself: trust, but verify.
