Martin Hinton (00:05) Welcome to the Cyber Insurance News and Information Podcast. I'm your host and executive editor of Cyber Insurance News, Martin Hinton. And today we've got, we're going to dive into the weeds. We're going to get ⁓ into the nitty gritty with Erin Keneally. Erin, who recently wrote our first guest opinion piece for the website, has shared with us something we've titled AI risk insurance, ransomware redux, or industry reinforcement learning. And in it, she argues that unless insurers, buyers, and policymakers make a scenario-based coverage, dynamic modeling, and real telemetry, ⁓ the AI insurance market will repeat ransomware's painful cycle, only faster and bigger. So there's a lot of words there and things like telemetry and dynamic modeling. Erin, thanks so much for joining us. Take us again through the thesis of the piece. What's the idea that you set up at the beginning of this that we're going to discuss today? Erin Kenneally (01:02) Sure, thanks for having me, Martin. ⁓ You know, I think in a nutshell, the motivation was ⁓ really kind of seeing a pattern developing in the burgeoning AI risk insurance space ⁓ that, you know, I think is similar to what I saw happen with the ransomware insurance ⁓ realm, which is essentially you've got this. you know, of a train coming down the tracks crisis as it were that's precipitated by a lack of historical data from a coverage perspective, ambiguous language, a challenging learning curve from the standpoint of underwriting losses and appropriately pricing the risk. then I, you know, a lack of, I think, associated with that kind of a lack of disciplined ⁓ underwriting in ⁓ quantifying the risk itself, assessing the risk at the front end, then, you know, kind of managing the risk ⁓ along the way. ⁓ The other aspect is this notion, I think that that is underserved, it doesn't really get talked ⁓ about a lot, which is opacity from a by side policyholder perspective, I think there's a fair amount of frustration in terms of knowing, hey, is my risk covered and is it adequately covered and is it price appropriate? Am I paying too much? And then all of this kind of, as we saw happen with ransomware, it's likening to lead to coverage clarity issues from a claims perspective, just like we saw happen with silent cyber, this ambiguity led to unclear. policy triggers, it complicated ⁓ claims and led to denials. ⁓ And from my perspective, know, full disclosure, I'm a big data measurement modeling type ⁓ person in terms of where I focus. ⁓ But in general, just exposure measurement and management issues ⁓ from an insurance perspective. And then ultimately, what we're talking about is distorted prices. and carriers sitting on unpriced risk. Martin Hinton (03:25) So to sum it up, ransomware mistake was costly and we're worried now that there's going to be an AI coverage risk that is also costly and we're here to, I don't know, wave a flag or be the canary in the coal mine. So thank you for that. Before we dive into the details and what you've observed in the past and what you're worried about in the present and the future, how do you come to this space? Tell us a little about your background and your career to date that makes you ⁓ an expert in all this. Erin Kenneally (03:55) Sure, ⁓ it definitely was not by design, definitely took ⁓ an emergent ⁓ approach in terms of following what was opportunities as well as my interest. ⁓ Without going into the ⁓ career origin story, I guess I would start with time period post-graduation ended up ⁓ graduating from undergrad and serendipitously had heard about this program at George Washington University in forensic science, which definitely ⁓ piqued my interest at the time. I think it was the only, the second program in the US to offer a graduate degree. ⁓ So ended up ⁓ getting my grad degree in forensic science. ⁓ We had talked about this before. You know, initially I thought it was kind of cool to explore the concept of forensic pathology as a career, only to learn that those jobs were few and far between. ⁓ know, compensation wasn't super attractive ⁓ and, you know, ultimately digging in dead bodies is not something that I wanted to pursue ⁓ from a career perspective. So, ⁓ Plan B was go back to law school. I had always been intrigued by the law, did that, finished law school, headed out west ⁓ with clearly the intention, or I should say I had the intention of going to work ⁓ in a white shoe firm, writing briefs for attorneys ⁓ on the golf course, so to speak. ⁓ really kept an open mind and trying to determine where I could apply my law degree ⁓ and various formal background. And again, serendipitously, I ended up landing a job at the San Diego Supercomputer Center, which is a center that's part of UC San Diego ⁓ system, and ⁓ ended up working as a legal advisor ⁓ for an assistant for a pretty well-known ⁓ computer scientist slash physicist, Tsutomu Shimomura. And the reason I bring this up is he is infamous or famous for his help in apprehending the then high profile hacker, Kevin Mitnick. So it was kind of a big deal at the time. This would have been the mid 2000s. ⁓ John Markoff actually wrote a series of articles on the whole caper. But long story short, Mitnick hacks into the supercomputer center. It takes off Shimomura. He ends up helping the FBI in this caper. They track him down and arrest him and he ends up pleading and serving time. So that was really kind of the foray into my work at the intersection of technology law policy. I ended up then advising a number of other similarly situated research, I would call it investigative research efforts there at the Supercomputer Center. ⁓ This led to founding my own applied R &D and advisory company, Elchemy. ⁓ Some projects that kind of spawned from there was ⁓ early on, we built an identity theft management system for law enforcement. ⁓ And I think another kind of key development or activity there was ⁓ I ended up co-leading this Homeland Security funded project that produced the Menlo report. So for folks who are researchers in cybersecurity, they will have heard of this and essentially was the cybersecurity equivalent to the Belmont report without getting into details there. Belmont was later codified into, I think it's 23 CFR 24, which is essentially the federal regulation that requires any institution getting federal funding to have an institutional review board, so an ethical review board for the work that they're doing. ⁓ We spun this effort up because the work that I was doing with some of the researchers and some other people increasingly learned that they're straddling this line between legal and illegal activity, especially with regards to the Wiretap Act and Computer Fraud and Abuse Act. And so it was all about advising them to stay on the right side of the law. Ended up then going to DHS, you know, full time instead of as a contractor, specifically the science and technology directorate. Now, most people think, you know, this was the cybersecurity division. Most people here at DHS cybersecurity think of CISA CISA was the ops arm for cybersecurity. I was in the S &T arm. I think the relevance there is we were in a... few people realize this, but we were at the time the functional equivalent of a VC within the federal government. So what we would do is go out ⁓ and meet with what was called the HSE, Homeland Security Enterprise, ⁓ and really just identify, look, where do we have gaps from a technology perspective? And then I'd go out and fund the build out and tech transfer of ⁓ solutions in those spaces. So I had, yeah. Martin Hinton (09:27) I was gonna say, would a comparison within the federal government, would that be like a DARPA type thing where you're identifying things that need government support in order to get out of the infant phase, because they're not so clearly valuable to normal private investors? Erin Kenneally (09:36) Exactly. Bingo. think DARPA is a little bit more pure researchy, but absolutely the concept very similar, ⁓ very similar models in that regard. ⁓ And I found it surprisingly gratifying. ⁓ I say surprisingly because I wasn't super, I'm not a govey type to begin with, but the guy who ran the division, credited Doug Mon, he really put together a good team and let us do our thing. So I had cybersecurity data infrastructure, cyber risk economics, and then data privacy. so I funded research and development in those areas, transitioned then to the private sector. And I've had various roles in various cyber and AI risk ⁓ measurement modeling control companies. yeah, so that brings us to today, which, you I have always kind of followed this mission-based systemic problem solving approach and I think this AI risk arena is certainly fits the bill in terms of tapping into what resonates with me ⁓ in terms of what needs to be solved, what's not being addressed well and where is there opportunity ⁓ for innovation both from an insurance sell side ⁓ as well as complimentary. on the buy side where there needs to be this better convergence of controls and then ⁓ back end indemnity. Martin Hinton (11:23) Well, thank you for all that because it sets us up perfectly into the first segment, as we call it in the rundown that I've shared with you. ⁓ And that's titled Coverage Clarity, Cyber versus Tech E &O. And the goal of this little part is to identify the fault lines, know, silent AI, ambiguity. So what happened with ransomware? Let's come back to that. What happened with ransomware that is in today's market with regard to AI risk? the sort of canary in the coal mine, if you will. Erin Kenneally (11:56) Sure, I think it's helpful to kind of autopsy the timeline that ⁓ for the evolution of ransomware in parallel with what was occurring on the insurance side of the equation. I would think about it starting with kind of pre 2017 or 18. you know, folks are familiar with that, that timeframe from an insurance standpoint, the cyber line of business was really printing money. Right. So like loss ratios were low margins were like, you know, exponentially higher than other lines of business. was growing rapidly. Coverage was, you know, I guess pretty broad and shallow. was a soft market. Um, underwriting, you know, as I had previously mentioned, underwriting was pretty undisciplined. And, and, and when I say that, um, it was really disconnected from, you know, the technical evaluation of the risk and, and, and the risk controls themselves. So then we moved to 2017 slash 18 to like 22. Um, we saw the ransomware attacks, you know, grow exponentially and, know, there's myriad stats out there. Uh, but. figure 10 to the 3 % growth rate. ⁓ Ransomware itself as a threat evolved from just encryption to this double extortion, which really expanded the loss surface as it were. ⁓ This then resulted in a a systemic increase in both the frequency and the severity. And I think, you know, a tendency to that is just the unpredictability. ⁓ We kind of came up on this underwriting loss crisis, loss ratios soared, you know, anywhere from starting at the in the heydays, you know, mid 30s to high 70s, 80s, 90s. And, you know, certainly some carriers exited the market and then payouts rose. of 10 to 3, you know, percentile. So I've seen statistics, you know, up to, you 3000 % increases in payouts. Market corrected, started to harden. We saw huge premium hikes, double, triple premium rate hikes, stricter terms. The coverages started to narrow. we start to see sublimits and exclusions, non-renewals became pretty, I don't wanna say prolific, but we're definitely pronounced. ⁓ Lowering of limits, co-insurance, ⁓ higher retentions, and I would say a ⁓ penetration gap, right? So the industry struggles as it is with the penetration gap, but I think with the non-renewals, ⁓ that didn't help. And then the... One of the good parts here from my perspective, and I would often say ransomware is the best thing to happen to cyber insurance is this, from an underwriting discipline perspective, really started requiring certain loss controls at the front end. And I think that's definitely helped. And so now we're at this stabilization phase. We've got plateauing rates. We've got this sort of pre-breach tech stack risk control. stack and more sophisticated models as a result of these years of developing incidents and near-miss data. Martin Hinton (15:24) So cyber insurance has come through its adolescence. that the parallel? So translating that passage of time and those experiences into the modern concerns about AI, what do you see now as the landscape with regard to AI? Because one of the things that one of the questions here, and I'll throw it out there, is when you think about AI and agentic AI, what policy is going to Erin Kenneally (15:29) I so, I think it's good to say that. Martin Hinton (15:53) react to that as wordings exist now? Is it cyber, Tech E&O and ⁓ mean neither, both? What do you think? Erin Kenneally (16:00) Yeah, I would say both. ⁓ Tech E&O, know, probably a little bit more. I think both will occasionally respond and I think both are necessary, but insufficient because I think both are riddled with ambiguity in the wording. know, cyber, so specifically, cyber is going to respond to, and this is again, kind of the, one of the originations of a some analysis I had done ⁓ beforehand, which was kind of a gap analysis of coverage controls and measurement gaps in the industry. Cyber is going to ⁓ respond to events involving network security, data loss, and certain systems failures. ⁓ But most policies aren't going to cover the cost of having to retrain models. ⁓ or validate data if there's some sort of incident involving ⁓ training data compromise or ⁓ integrity ⁓ or even incident response or litigation costs when there's no security failure or a privacy breach that stems from the AI integrity compromise. just if it's, there's gotta be a line of sight and ⁓ clarity. And I think, you when I talk to carriers and I certainly haven't canvassed all of them, you get kind of one of two responses. One is, yeah, nothing to see here. This AI stuff is covered. It's covered under cyber and tech, you know, don't worry about it. And then there's another camp who, you know, if they're candid, they're going to be like, they know they're sitting on on price risk. They're just hoping they don't, you know, take it in the shorts in terms of claims. And so that's kind of the world we're living in right now. It is a little bit of a wait and see approach, which is somewhat understandable, but I do think that we can get ahead of it a little bit better. And, you know, my response to the former folks who say, hey, look, we've got this, not a problem here, you know. Even if you were to stipulate that all of these AI risks are in fact covered under existing tech E&O and cyber policies, again, you're sitting, there's this huge gap from a control requirement perspective because this pre-breach tech stack that has found to be, that the industry is settled on, we'll leave it at that, ⁓ doesn't address. these new emerging AI risks. And then also, even if those coverages, even if AI risk is covered, ⁓ it's not being measured and modeled appropriately. So again, we're talking about this unpriced risk that they're setting up. Martin Hinton (18:54) So when it comes to the modern wordings in policies, what AI triggers should be in there? If someone watching this now is like, I don't even know, or I'm wondering myself, what modern wordings do we need for the AI reality? Erin Kenneally (19:11) Yeah, I I think, and this is a moving target. The key is looking for affirmative language, whether it's an inclusion or exclusion around AI risks, and in particular, right? So I think this notion of AI operational or model failure needs to be explicit. ⁓ I did this, I had mentioned before, I did this kind of gap analysis of ⁓ cyber and EO. ⁓ coverages ⁓ versus kind of leading AI ⁓ risks. so the AI operational model failures, one, another was wrong output. So misinformation, hallucinations, discriminatory actions based on those is another thing that is going to require affirmative language. I would say training data poisoning or contamination that is a cause of loss for. you know, something downstream needs to be clarified. Big one is this autonomous decision making that results in loss. And here we're talking about agentic AI. And then I would say the last ⁓ that, you know, stands out would be prompt injection, right? Because this is a new attack vector with AI. You know, it's more... ⁓ likened to a phishing or a social engineering attack, which people kind of don't realize. ⁓ This is when someone ⁓ in the prompt itself gets the model to perform or disclose, ⁓ perform certain ways or disclose sensitive information or whatnot. ⁓ It's more like phishing ⁓ than it is like an SQL injection. ⁓ And so, In any case, the risk surface has exponentially increased because of this ability to interact with these models through human language. so I think the possible, ⁓ as I said, risk surface is huge. So clarity is definitely needed in that regard as to whether or not it's covered. Martin Hinton (21:25) I mean, I think broadly in this space, whether it's the AI situation, the attack vectors, the number of places that you can enter a system and wreak havoc, whether it's ransomware or whatever it might be, is something I think it's hard to comprehend. You if you've got 50,000 employees, there's at least 50,000 doors, right? I mean, every one of those is an exploitable entry point. You touch on agentic, and the reason I bring up people is one of the things that I've had conversations with people about is that Regardless of how AI is used, you need to have the ability to draw a line between the event or the result and a human being who's responsible in a company situation and a liability situation. Is that something that you envision needing specific wording around? So someone introduces an agent to AI for customer service or whatever it might be, and it makes a mistake that that mistake is acted on and creates a problem. then there's an argument for liability over is that where we're sort of looking at it, even if you employ these to reduce your staff, there still needs to be a human being at the vice president level who goes, yeah, well, that's all under my umbrella. Just like the buck stops here once upon a time, there needs to be a place where the AI agentic buck stops. Erin Kenneally (22:41) Yeah, mean, there'd be a tendency to say, let's blame the machine. Ultimately, that's not going to work, right? Even with agentic, there is a person who is, I don't want to say programming, but setting up that system. and has some level of knowledge of the known knowns, known unknowns, and unknown unknowns, right? And so the way our system of liability is set up, there's gotta be a human company at the end of the equation ⁓ that's responsible for the outcomes. Martin Hinton (23:17) Yeah. So when it comes to sort of the biggest silent gaps in policies as they exist now, what do people need to be mindful of? Erin Kenneally (23:25) Yeah, mean, think honestly, if you look at it, I mean, there's a lot of gaps. I think the third party liability gap is huge ⁓ for the following reason. Just the sheer reach of how AI is being embedded in our existing SaaS infrastructure, in our B2B to C ⁓ transactions ⁓ in the commercial world. ⁓ and the lack of liability clarity that we somewhat touched upon in your previous question. ⁓ And I think from a coverage perspective, AI as a cause of loss is seldom explicitly mentioned in the current coverages. So I think that sets us up for third party liability being a big concern. moving forward in the AI risk space. Martin Hinton (24:26) Yeah, I mean, and so to put it in layman's terms or Marty's terms, as I like to call them, that's if you've got some sort of supply chain where there's someone outside of your core business environment doing something on behalf of you, they don't have AI policies in place. And maybe they're uploading some proprietary information to a LLM or whatever it might be. That's the ability to discipline how it's used across all of those. ⁓ I guess, you know, cohorts is what we're talking about, that everyone understands where the rules are and what's allowed and what's not allowed. Erin Kenneally (25:05) Absolutely. And you know, there's so much dependency now on these foundational models. Most companies are not rolling their own models or they're using some sort of combination. And then you're talking about, know, like the open source components of these AI applications is, non-trivial as well. And so we've recently seen ⁓ incidents where ⁓ the package manager programs get infected by malware and spread rapidly. so it could get, the aggregation can get pretty large, pretty fast. And so everything is really getting more tightly coupled. that is, I always say, look, with every capability slash efficiency that we gain with any technology, and certainly it's the case with AI, there absolutely always is a flip side of that coin with regards to risk. ⁓ you know, I think about things from the standpoint of how do we develop better, you know, optics and visibility of the risk. And ⁓ frankly, it's, you know, I'm of the mindset because we don't have a lot of data right now is, you know, follow the capabilities because you're, going to be able to proxying what those risks look like on the flip side. Martin Hinton (26:25) Yeah, I mean, you touch on a really good point. And I think that, you it's always, we're always, you know, obsessed with our moment. And, you know, I guess that maybe it's not a joke, but no one died in traffic accidents before cars, right? So these ideas that, you know, we introduced brilliant new technologies and obviously the digital economy broadly is, has made for great efficiencies in so many ways, but like anything, nothing is all good. or all bad and you need to be able to adapt to the consequences of a new thing. again, sometimes it can feel a bit doomsday-ish in this space where it's like, what are we going to do? And certainly around AI, people start to talk about agentic AI and if you've watched Terminator too many times. But it is important. One of the things we're moving into is sort of the idea that there are parallels with ransomware and some of them transfer, some of them don't. And is there... Single ransomware error underwriting blind spot what you know before I go on part of me Just to wrap up that last bit of our conversation if you were gonna advising now on Policy what there are three big endorsements or is there you know? One or two three things that you would absolutely want to be clarified that go in first that and I know you've touched on this But maybe you could sum up what it is that you should be looking for if you find yourself You know redoing your policy whether it's Tech E&O, you know and sock or cyber that you should be like, ⁓ let's make sure these are a priority that we get the language right, that sort of thing. Erin Kenneally (27:59) Yeah, I mean, think it really, to put it simply, boils down to specificity, right? So from the standpoint of the scope of your coverage, ⁓ the definition of artificial intelligence, like surprisingly, well, maybe it's not surprising because, you know, take it outside of the realm of insurance, you've got, you know, you can have raging debates amongst scientists and business professionals and whatnot about what is AI. Well, it becomes ⁓ exponentially important to define it ⁓ from a coverage standpoint. I had actually copied down. had found there's a carrier that ⁓ purports to have is offering this absolute exclusion, right? And pardon me, let me just read through it because as you listen, it really kind of gives you an idea of like just how specific you need to be. So artificial intelligence is defined as any machine based system that for explicit or implicit objectives infers from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions. that can influence physical or virtual environments, including without limitation any system that can emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, video, audio, text, and other video. digital content, right? So if you see that run for the hills, if you think that you're going to get anything covered under your policy ⁓ for your AI risk. ⁓ yeah, that, you know, the point is you got to be specific. I think secondly, specifying ⁓ the AI driven loss events as distinct triggers, not just like, you know, a coverage for a security or professional error, you know, this existing kind of abstract concept. And that again specifying enumerating exclusions, targeting some of these silent gaps. I had mentioned some of those those areas that where there's gaps in cyber and tech, know. Martin Hinton (30:14) You remind me with the use of the word specificity, one of my favorite quotes is from Mark Twain and it goes, the difference between the right word and the wrong word is no small matter. It's the difference between the lightning and the lightning bug. So yeah, I mean, when you were reading the AI sort of what it is or descriptions of it, but that's where we are, right? People want to... Erin Kenneally (30:32) Yeah. Martin Hinton (30:42) protect themselves and be clear at the same time, perhaps with so many words, not being as clear as you'd hope to be. So you're moving along. When you look at the ransomware era, is there a single underwriting blind spot that's being repeated with AI right now? Erin Kenneally (30:48) That's right. Yeah, and I'm going to expose my again in full disclosure. I'm certainly biased in this regard based on the boulder that I've been pushing up the hill for a while with regards to getting internal risk signals embedded in cyber underwriting. So two things come to mind. One is, you know, this industry reliance on static outdated questionnaires and checklists. I mean, I think they can provide a limited directional ⁓ help, but given the dynamism and emergent risk attendant to AI risk, it just falls flat. mean, frankly, I think it falls flat just with cyber and now we're dealing with AI, which is even worse. ⁓ And the second I would say is not requiring fit for purpose risk controls in the underwriting process, right before... ⁓ and before claims start coming in. ⁓ I had done, as I mentioned, this gap analysis and that pre-breach tech stack just leaves huge gaps in terms of where the actual risk is and what ⁓ the current tech stack, ⁓ which is underwritten against and embedded in a lot of these questionnaires, looks for. Martin Hinton (32:20) Yeah. So I mean, we, we, it's a human condition. We tend to do things the way we did them yesterday until the way we did things yesterday is, revealed as inadequate. So we're in that sort of scenario where it's sort of, we're rinsing and repeating and you know, we're playing catch up. Is that a pretty silly, maybe way to put it, but, that's the way it kind of occurred to me. Erin Kenneally (32:42) No, and you're, you're spot on. And I think part of the problem too is, you know, you've got this, let's take cyber for example, and I hope it gets better with AI. We'll see, but you know, it's largely a market for lemons, which is to say, you know, there's a lot of cruft out there and just being able to discern, you know, what products, what controls, ⁓ are more efficacious than others in thwarting attacks and whatnot. It's not easy to do. there's no ⁓ objective published benchmarks out there. ⁓ So from a buyer's standpoint and from an insurance seller's standpoint in terms of what they're requiring and their underwriting, it's not an easy task. ⁓ Martin Hinton (33:30) Yeah, again, these are complex problems and the analogies help with looking back to the past, know, that the history doesn't repeat, but it rhymes kind of a theme, but it's AIs new. So where does this analogy break down? What's genuinely new about AI risk that ⁓ you're seeing or concerned about? Erin Kenneally (33:54) Yeah, I think there's, ⁓ gosh, three to four ⁓ major differentiators with AI. The first one that most people would, you know, they've heard at this point in time is just how stochastic and random this technology is, which leads to kind of emergent risks and scenarios and then unpredictable failures, right? you know, rarely does an AI malfunction necessarily follow a historical pattern. I we certainly haven't collected that data. ⁓ And then these systems can produce novel and unforeseeable failures ⁓ that may not even have been present in the training data, which is cause for concern. And then this makes actuarial pricing extremely difficult. ⁓ From a cybersecurity standpoint, you know, that discipline itself is steeped in determinism, right? So the stochastic nature of AI challenges cybersecurity as well. So like this notion of, you know, software code execution paths, right? Those from AI perspective, those are learned, they're emergent. It's not coded into the software. And I'm going to quote something that I'd seen. I think it's spot on. I'm sorry, I'm blanking on where I read it. ⁓ from an attribution standpoint, but it was something like oversight and security got to have to move from inside, have to move inside the reasoning process itself to track how a model interprets instructions, forms, plans, and acts in context, right? So instead of just ⁓ checking AI's outputs after the facts and relying on that for monitoring safeguards, but we need to move that. that control plane into the AI's actual decision making process, which Martin Hinton (35:51) I mean, what you're describing there is not a new idea, right? mean, in the context of the AI creates the problem, a liability that you then have to pay for in some capacity, whether it's through insurance or out of pocket. You treat AI in that environment almost like an adversary and moving into their sort of space and their mindset is about understanding the way the AI sees things so that you can anticipate problems. And I mean, again, it's a bit like a... you know, empathy or understanding your enemy that these things date through, you know, centuries of military history, sort of disciplines and doctrines and that sort of thing. And you need to know who you're dealing with in order to understand the potential that it can have for problem. Is that a fair way to think about it? Erin Kenneally (36:40) Yeah, no, you're spot on. It's, you know, it definitely is a double edged sword. And what you're describing is often ⁓ defined as AI safety, right? So AI security is, you know, related to securing the AI models and AI systems. And safety is more of securing us from the AI, the unknowns, the stochastic nature of the AI itself. So yeah, definitely. So, you know, Getting back to the point, guess, the differences with distinction, would say the stochastic nature, ⁓ the potentially correlated and systemic impacts from AI because it's more interconnected and as I mentioned before, tightly coupled and this kind of like we enter the somewhat of a new world with this notion of AI agents and AI agent swarms and whatnot. The speed and scope ⁓ that harm can occur, I think, is another challenge and differentiator. If you look at insurance ⁓ policy development and just the issuance cycles, it's completely misaligned with how quickly this technology can change and how it can cross physical, legal, financial ⁓ boundaries and have implications for systemic risk. And then the last thing is just the opacity, right? black box, we've all heard this black box nature of the models themselves, as well as the systems that are derived from the models. determining, you know, what's the root cause and precise cause of failures makes it difficult from the standpoint of, you know, legal and contractual assignment of a liability and drafting contracts and resolving claims and what. Martin Hinton (38:39) Yeah. mean, in the early patterns in the claims involving this sort of thing that suggests that there's a, like there was with cyber and ransomware, a sort of hardening in the market. Erin Kenneally (38:51) Yeah, mean, I would look for ⁓ spikes in ambiguous claims and denials, ⁓ you getting back to what I had mentioned before, just disputes over what, you know, AI incidents are covered in the wake of the silent AI situation that is kind of running rampant right now. ⁓ I would also look for claims that would simultaneously hit multiple lines of business. So cyber and E&O and EPL and ⁓ D&O, right? Because that again is kind of signaling that ambiguity. ⁓ Looking for more denials based on exclusions. ⁓ Scrutiny on claims certainly is another signal. And then we'll start to see sublimates and co-insurance when ⁓ in the wake of claims coming in and insurers realizing that they've got unpriced risk on their books. Martin Hinton (39:55) Is there in this in this realm, is there any is there one market or one market signal or one metric that you're watching that you think maybe people are or you think is maybe more an indicator of what's to come than others? Erin Kenneally (40:09) ⁓ I don't know that I would have a ⁓ metric as much as the tendency to kind of miss the big picture forest view. And what I mean by that is, so a combination of ⁓ the supply and the demand, right? So insurers and policyholders operating with this significant uninsured AI exposure and at the same time, ⁓ innovation being stifled by policyholders' unwillingness to adopt the AI tech because they don't have a financial backstop. One thing that I would say is an interesting manifestation of this on the buy side is this difference between what we hear companies say they want and what they're actually doing, right? So the difference between their stated preferences and their actions. So for example, Geneva came out with a study recently and they surveyed companies ⁓ and they said something like nine out of 10 companies said they wanted AI insurance and they would pay 15 to 20 % more for it. But I don't know that that's happening. And I don't know that that's happening because of the supply. or just, hey, let's wait and see, and we're gonna kind of roll with our cyber and tech, know, and then if something bad happens, we'll deal with it. ⁓ That's unclear. Martin Hinton (41:42) Yeah. So to try and create sort of a concrete example, you touched on this a second ago, this sort of across the physical, financial, legal domains. Can you now hypothesize a single sort of AI loss scenario that touches all of those parts? know, like, I guess we're going to write a Netflix series now. What's a scenario that begins the Netflix series on the AI? insurance failures. Erin Kenneally (42:12) Yeah. So again, let me, the caveat here is I don't spend a lot of time thinking about this. And frankly, there are myriad people who would be much better at concocting the what-ifs, but having said that kind of keying on the plausible ⁓ qualifier here, ⁓ I'll anchor off of, you know, kind of AI technology that's either currently existing in the marketplace or just about to kind of go to market. You could imagine a commercial kind of last mile ⁓ delivery operator, whether, again, you see this with Amazon and I think even Walmart is doing this, ⁓ is deploying a coordinated autonomous drone swarm. And what could happen is a combination of malicious GPS spoofing, ⁓ of the logistics, along with some sort of emergent coordination failure that derives from the models and the infra of the ⁓ the agents worm themselves. And then maybe add to that this kind of a weak geofencing ⁓ capability to kind of wrangle them in could cause. a lot of the drones to collide with, let's say, an electrical substation, destroying transformers, maybe starting fires, maybe causing a blackout, perhaps people die, critical services could get halted, lots of financial losses and certainly lawsuits. But again, there's like a gazillion scenarios one could think of. Getting to kind of first principles, I do think that an AI cat scenario is going to involve a combination of at least two of the three kind of underlying ⁓ elements of the risk, which is to say a malicious attack, a mistake or misalignment by the model or the AI system itself, and then some sort of misuse or by the human users. think two of those three gives us an AI cat scenario. Martin Hinton (44:30) It's interesting to think about. ⁓ I guess if you turn the keys over, you've to trust who you're turning them over to is the simplest way to put it, like, and think of. I mean, do you think in this space with regard to AI, there's any regulatory initiatives that are likely to shape wordings within the next year? ⁓ What do you think about that part? And one of the things that the audience probably already knows is that there is regulations, but they're spread around. And in America, they're state by state. and maybe it's a little different in the EU where you've got the EU governing it but that leaves the UK outside of it. What about that sort of thing? The clarity that might come from up on high, good or bad, but it's there. Erin Kenneally (45:16) Yeah, there's certainly a patchwork. ⁓ I think the most obvious would be the EU AI Act would be the most immediate ⁓ driver of those ⁓ coverage shapings. And it's because one is, mean, it's already parts of it are already in effect. It's got non-trivial penalties, right, which is going to motivate folks to want insurance to cover that. The liability is pretty clearly defined ⁓ and there's actual enforcement. So those are all elements, know, kind of arguing for that one ⁓ to be the most likely. you know, the carriers, insurers, you know, need this concrete wording with regards to ⁓ the liability frameworks. to price and cover. they're certainly attracted to that type of, or those elements as well. Martin Hinton (46:17) All well, so everyone loves the happy ending. So we're going to sort of transition now to the last part of our conversation is the proactive playbook. Let's call it, you know, do this, don't do that. If you're in the sort of a carrier broker or buyer scenario, what are your, you know, checklists? What do you need to be looking for? I know we've touched on some of this, but drill down on the things that we can do, because there are a lot of things that we can do and we've touched on language and certain issues and the complexity of it. That doesn't mean there's an impossibility that we're confronted with, So take me through some of those ideas about, you know, what's the phrase, coverage architecture that you might advise for. Erin Kenneally (47:00) Yeah, I think the big one is ⁓ this notion of scenario based coverages. I know there's some of the big ⁓ reinsurers and carriers have ⁓ definitely hinted at this as well, if not been very explicit. So what does it mean? Essentially, ⁓ coverage based, a scenario based coverage means the policy is structured to respond to specific predefined real world incidents rather than as currently exist in like cyber, like abstract or general categories like data breach or system error. So a scenario, again, think of it as this composite of defining, and this gets to the specificity, right? Who are the actors? What's the technology involved? What's the trigger event? And what's the defined outcome? So the coverage triggers that are tied to just, you know, not just the what, but the how. And, you know, I think that's going to be key. And what this does, if you kind of think about it from a, you know, an architectural standpoint is it's the insurance industry's attempt to achieve certainty amidst all this uncertainty and these, you know, unknown unknowns by bounding this potentially boundless set of outcomes within the definition of the coverage, right? So you're effectively kind of trying to enumerate the known knowns within this scenario-based coverage ⁓ itself. let's be more concrete. Let's take model error. right. And a model error is some sort of a failure in the AI system, the algorithm or the performance of the model itself, right? It could be ⁓ the model inaccurately ⁓ predicts outcomes because it's been overfit or there's some sort of model drift or whatever it may be. so you would want, so an example of a coverage would be, scenario-based coverage would be, okay, coverage applies if an autonomous AI agent due to prompt injection or model drift sends erroneous instructions that result in again, you can change some of these around, that results in physical property damage, regulatory investigation, and business interruption. So you can see the components there. You've got the actor's name, the technology name, the trigger event, and the defined outcome. And so I think taking that approach... ⁓ you know, again, you can do this for the, the model errors. can imagine wording for training data for agentic AI risk. ⁓ and the like, ⁓ is really a good way to move forward and, and provide the needed coverage for companies to, know, to innovate with AI and feel like they have a financial backstop while at the same time, you know, realizing that, you know, Carriers need to make money. They're not in this for the feel-good nature of it. So they can't lose their shorts in this. So they've got to mitigate the risk through how they're wording those coverages. Martin Hinton (50:22) One of the things that AI was unleashed, certainly in the LLM sort of category, and everyone could use it. from small companies to large enterprises, particularly at the large enterprises, ⁓ being on the cutting edge and appearing like you're adopting new technology effectively to improve bottom line is people rush in and they want to quarter to quarter prove their value in that respect for other reasons. When they look at sort of the smaller and medium sized enterprises, companies that are maybe, in America, the small and medium sized enterprises are a huge, enormous portion of the economy. We've seen this with cyber. They don't have the same resources for cybersecurity and maybe the insurance policy, they're dealing with a local broker who is not a specialist in this. policy wording is not where it should be given the risks that a particular company have, whether it's a dry cleaner or a dance studio, they have lots of sensitive data and information. And I wonder whether or not you might touch on that for a small business owner who might be thinking, ⁓ A, they often make the mistake of thinking that it won't happen to them. They adopt sort of a teenager mentality when in fact they are as vulnerable as anyone. And I wonder whether in this scope of AI, cybersecurity, cyber insurance, where you see their concerns and what message you might offer them about how to talk to their brokers and insurers about where their coverage stands. I'll just say this. One of the things you touched on a second ago is that there's a real static nature to some of the underwriting questionnaires for cyber. And that does. It portrays the reality of how dynamic and complex the risk is. It's not Florida where you get hurricanes seven or eight months a year or wildfires in California certain months of the year. There is this ⁓ really, really complex, challenging threat. And you throw in the nation state element of sort of ransomware. It's back for huge benefit to nations across the globe. There's no geographic boundaries. All of that means that anyone can be a victim. ⁓ Jaguar and Marks and Spencer is in the huge hacks there. But the small and medium sized enterprises, know, business with $100 million or less in revenue, which sounds like a lot of money, but there's a lot of them. ⁓ What do think they need to be considering in this space? Erin Kenneally (52:59) Yeah, you know, one of the more active ⁓ areas with regards to, you know, kind of proactively embracing the AI risk that I've seen in the market is ⁓ this notion of AI performance warranties. And I do think that it ⁓ can be fit for purpose for an SMB policyholder. ⁓ it's dependent on, I think what What's going to drive that is having that performance warranty bundled or embedded within the AI product subscription itself. ⁓ If that occurs, think you're going to be able to afford the premiums become more affordable. You're going to. address the presumably less sophisticated IT capabilities of ⁓ small to medium sized businesses. It simplifies, you know, procurement and reduces claims friction ⁓ at the back end. from the standpoint, you know, a lot of times like, let's say with cyber, it's no touch ⁓ policy issuance, right? They just take a couple pieces of information and because the limits are so low, just issue those to try and increase penetration there. But I think from an AI perspective, I would, I'm in favor of this approach, this embedded insurance approach in that regard. I do think that this notion of having kind of bundled services with the, as part of the, product itself ⁓ will be helpful that provides things like asset inventory. Martin Hinton (54:47) Could you just pull the string on that a little bit? What does that mean? Embedded services, know, the bundle part. Explain how that manifests for people who maybe don't know what those phrases and terms mean. Erin Kenneally (55:00) Yeah, so mean, basically, would you buy a risk AI risk control capability, some sort of guardrail or, ⁓ you know, it wouldn't be a CI CD, that would be more for enterprises. But let's just use the term guardrail. ⁓ That's going to help, you know, reduce ⁓ the risk of prompt injections and ⁓ output from the model itself. Let's just keep it simple. But what the performance warranty, it just accompanies the subscription to the AI product. So it's not a separate transaction, right? It's just part and parcel to that. And I do think I've often talked about this convergence between cybersecurity and insurance and what happened with cyber is they kind of ran on parallel tracks, right? You had cyber insurance and then cybersecurity. And within the past, you know, whatever five years, you see this huge market of cybersecurity vendors trying to hack their wares and get endorsements and whatnot from cyber insurance. I think we can do better with AI ⁓ and start ⁓ implementing kind of, or converging again, I'm getting to this embedded converging concept of the control with the insurance itself. Now the performance warranty. Is insurance backed, but it's not a true indemnity product, but I don't, you know, again, for a lot of small businesses, I'm not sure that that's necessary, especially because, you know, given the unknown nature of AI risk, premiums are likely likely to be appreciably higher and you don't want to price the small businesses out of the market. like, you know, a product is going to be whatever two to 4 % of your coverage limit. And you know, you could imagine policy limits for an SMB being anywhere from I don't know, 2,500 to 50K ⁓ based on premiums of 500K to 2K or something like that. mean, there's, the numbers can vary. And then again, I think just having some qualifying controls is huge. Just writing without, know, agnostic to having some sort of control plane doesn't need to be as... Martin Hinton (57:05) Yeah. Erin Kenneally (57:19) ⁓ in depth and sophisticated as an enterprise, but basic documentation about what's your AI asset inventory, including what SaaS products are you using that have AI embedded in them? Are you monitoring outputs? you reviewing? There's just some basic blocking and tackling that can be required there. Martin Hinton (57:40) you touched on as you were just there is something I was going to ask about is the idea that, you know, if you have a company with, I don't know, 15 people or 25 people, how you're using AI is only one part of it. How your vendors might be or anyone in your supply chain might be is something that you need to be, you know, considering as well, right? Yeah. Yeah. So we've been talking about an hour. So as we knew we wouldn't, we haven't gotten through everything But I want to just sort of transition to Erin Kenneally (57:59) Yep, for sure. Martin Hinton (58:09) sort of if you will, takeaway. ⁓ So if you're a buyer, what are three questions you should be asking your broker about your AI coverage and that sort of thing? It doesn't have to be three, but whatever it is, what are some things that you need to really make sure you ask about? Erin Kenneally (58:28) Yeah, I would definitely ask your broker whether your key AI systems are covered, ⁓ either explicitly or implicitly, and what exclusions might apply to those scenarios or those technologies. ⁓ What are specific triggers, I guess, or scenarios that would be likely to lead to a ⁓ claim denial or a dispute? And then just, you know, ask, look, do we need to add ⁓ some sort of endorsement or standalone AI coverage right now, given, you know, where we are and what our ⁓ exposure ⁓ is looking to be. Martin Hinton (59:12) Okay, all right, next question. I guess this is our version of Quickfire. ⁓ Architectural choices, is there something that's happening now that you fear with AI that's gonna be uninsurable later? So just remove it from the architecture now. Erin Kenneally (59:25) Well, I mean, this might sound obvious, but like just this whole notion of implementing a black box undocumented AI where you don't, can't explain, you know, where your assets are. There's no oversight. There's no audit trail. There's no documentation. There's no clear accountability. I mean, that's that that's a recipe for not only disaster from an operational perspective, but certainly from an insurance perspective. I think there was something. Gosh, I mean, the stats are all over the place, but it's kind of amazing how ⁓ just lack of kind of governance still exists in the AI. I mean, it's surprising, but it's not. Martin Hinton (1:00:06) Well, you you make a great point. I did an interview recently and a person made a reference to the phrase shadow AI and people who might drop a company report into a chat at GPT and get a quick summary before they go to a meeting. And not even considering where that's gone and is there proprietary IP or nevermind yours, but if you're at a consulting firm or a financial institution, is there a ⁓ The problems you create by not having, if you will, back to that phrase, guardrails and an understanding of how it's being utilized. Don't go into things willy-nilly. Be purposeful is a broad piece of advice that has existed for a long, time and it applies here, it would seem. Erin Kenneally (1:00:54) For sure, and I also think, you know, taking the, and again, this might sound obvious, but taking the Dr. No approach, which is to say banning the use of AI as a recipe for disaster, because you know folks are gonna be using it. So you might as well embrace it and create opportunities for people to use it, the technology safely. Martin Hinton (1:01:14) Yeah, well, we saw that very early on. got this blanket ban on a lot of big enterprises. And then they realized, wait, no, that's not going to work. It's like saying no to electricity. We have to figure out a way for this to be part of what we do without jeopardizing us being part of everything. And so yeah, it's good. So when it comes to lowering premiums, is there ever any evidence of something that actually lowers premiums or improves terms when it comes to renewals? Erin Kenneally (1:01:41) Look, I think this is a very nascent moving target. It's going to require more data on impacts of various failure modes and control classes and whatnot. ⁓ So I don't want to like put a draw a line in the sand. But having said that, I look that the bar is low, right? So just being able to, you know, again, demonstrate AI, your AI governance capabilities. You have visibility into your assets, you're monitoring them, you have some sort of incident response plan and are kind of guard railing your third party. ⁓ interactions and documentation is huge. So yeah, I would definitely start there. this whole notion of lowering premiums, ⁓ controls that will lower premiums is, I laugh because it's a big sort of game going on in the cyber realm right now. And it's just, I think it's the wrong. It's certainly the right question, but carriers are not in the position unless they've got years of proof of efficacy of controls and actuaries. I would pay more attention to, hey, can we get better coverage as opposed to lowering a premium? Because ultimately, if you price that out, it's probably worth a hell of a lot more than a reduced premium. Martin Hinton (1:02:59) Yeah. Yeah, yeah, yeah. protect yourself and then you can decide whether to lower the fence later when you realize maybe you built it too high to begin with, ⁓ So in wrapping up, as I promised, ⁓ is there anything that we didn't get to that you thought we would that you think is important that people know or is there anything we discussed that you'd like to touch on again or clarify or anything like that? Erin Kenneally (1:03:17) Right. You know, I guess the last thing I would say, and I know we covered a lot, ⁓ is it's, think people, myself included, perhaps underestimate how difficult it is to determine if and to what extent AI is and will be the cause of some of these losses because it's so embedded. ⁓ And so from a provenance perspective, root cause analysis, from a coverage and claim perspective, I think it's easier said than done in certain circumstances. ⁓ So it kind of gets to that silence slash shadow ⁓ component. you know, again, where we benefit from having such low friction in terms of the AI performing, you know, tasks and making decisions for us unbeknownst to us. The flip side of that in terms of knowing, okay, is this harm or loss or damage a result of a failed AI or some sort of malicious attack on the AI or some sort of mistaken use of the AI? ⁓ is, you I think can be non-trivial. Martin Hinton (1:04:50) Yeah, that's a very good point. Well, we'll end it there. Erin thank you so much. First of all, thank you for reaching out and taking advantage of our invitation to write the opinion piece. For those of you watching, in all the show notes, wherever you might be listening to or watching this, you can find Erin's information, whether it's LinkedIn link and the link to the article that she wrote for us. So again, thank you so very much for taking the time to write that. It really is. ⁓ It is complicated stuff, but it's worth understanding. as all of us move into the future with AI as a present part of it, it would seem every business operation, understanding the downside of this amazing new possibility, it's a good thing. And it's not pessimistic. It's just smart business. ⁓ So, Erin Kenneally again, thank you so very much for joining us. ⁓ Erin Kenneally (1:05:38) Correct. Absolutely. Martin Hinton (1:05:45) Everyone else, if you've got a question or comment, please, you can leave it down. I will try to answer and when I can't, I will pass it on to Erin and we'll see what we can do for you to get an answer. again, please. Yeah, thank you so much, ⁓ everyone. Thank you so much for watching. I'm Martin Hinton and this is the Cyber Insurance News and Information Podcast. Thank you very much for taking the time to listen or watch today and wherever you are, enjoy the rest of your day. Thank you. Erin Kenneally (1:05:55) Thank you, Martin. Keep up the great work.