Martin Hinton (00:05) All right, welcome to the Cyber Insurance News and Information Podcast. I'm the executive editor of Cyber Insurance News and your host today, Martin Hinton. With us today is Marshall Sorenson, a solutions architect at Myriad 360; he's based in Atlanta. And what we're going to talk about today is non-human identities. Some people watching may be instantly aware of what that means, and some may be wondering what that could possibly mean. Well, whether you know or not, it matters to you. If you use a computer, if you're on the internet, if you're concerned about cybersecurity from a personal, professional, or company-wide point of view, this is something you need to understand, both for the security issues and for the liability issues that come along down the road after a breach or something like that. So Marshall, I won't waste much more time talking about something that you know way more about. Let's dive right in. Give me a quick thumbnail sketch, and we'll touch on this again. Non-human identity. What are we talking about? Marshall Sorenson (01:04) Yeah, thanks, Martin. Not to be too cliche, but I'm going to give you the Webster's dictionary definition of a non-human identity, and then we'll break it down. So don't worry if it doesn't make sense at first. Effectively, a non-human identity is a digital entity or credential that represents machines, applications, automated processes, or other services used in IT infrastructure. That's a very broad definition, I know, but it comes from the authorities on the subject, who talked this out and came up with the broadest and most concise definition of what a non-human identity is. I think the key word there is that it is digital. That's what we're here to talk about. It is not a human. It is something that exists within cyberspace.
So you can think of it as analogous to a person: a person can perform processes, they can perform services, but this is just within the digital realm versus the human one, as it were. Martin Hinton (02:07) So to parse out the Webster definition, which I'm a huge fan of, it gives us a lot of room to go. In my daily operations, am I mixing up human and non-human identities? In my daily existence, in my computer operations within my day, where are my encounters with these things? Are there a lot of them? I have my notes here, so as I joked earlier, I'm asking questions I know the answer to, but that's the job of a journalist. Marshall Sorenson (02:35) Yeah. Martin Hinton (02:36) How many of these are there? Do we encounter them a lot? What role do they play in our digital lives, whether professionally or personally? What do they do? Marshall Sorenson (02:44) Yeah. Yeah, the short answer is they're everywhere. If you look back through history, and I'm talking recent history, a report came out in 2024 by a company called Imperva, a well-known cybersecurity company, that said over half of the internet traffic worldwide is automated. It's all robots. It is not people generating this traffic and this information. It is non-human identities, or processes, services, things like that. So whether you're aware of it or not, these things are everywhere in your everyday lives. For example, do you own something like a Ring camera? Most people might own something like that. That right there is a perfect example of a non-human identity. There is not a person behind that camera who is looking out for your package delivery person and saying, all right, I see that person, I'm going to send Martin a notification. That is a robot, an automated process, that is doing that. Or if you have ever used a service that offers to organize your Google Calendar.
And you say, yes, I want to give you permission to come in, read my events, see what's going on, and organize them, summarize them, send me an email about what's on my calendar for the week. There again, that's a non-human identity. So they're really everywhere. And I know I'm just touching on the consumer side of the world, but in business they're everywhere as well. You know, cloud services, automations, APIs, even your industrial and manufacturing systems, they're all over the place. So again, whether people realize it or not, there are far more robots and non-human identities than there are actual human ones. Depending on who you ask, some of the latest reports say up to 45 robots or non-humans for every person that exists. And that gap is only getting wider as time goes on. Martin Hinton (04:46) So am I right to say that what we're going to wind up talking about is the fact that there's an enormous number of these, and there are issues around whether we need all that exist now, whether we keep creating them, and how, without turning them off, they create liability issues and security issues within networks and systems that are making companies and individuals vulnerable. Are those the sort of broader ideas that these things touch? It sounds like every part of what we do online, when you click, Marshall Sorenson (04:54) Yes. Martin Hinton (05:15) you know, give access to your Google Calendar, to this, whatever it might be, and that we don't quite appreciate that these non-human identities and their non-human fingers are sort of everywhere in our digital existence. Marshall Sorenson (05:18) Mm-hmm. Yeah, it's very true. I think, breaking down that Webster's definition a little bit more, it's not that they are very complex concepts to get your head around. They're not very different from a person. But they're easy to create. They're easy to spin up.
So what you find is, although they are similar to people, there's a lot more of them. It is a massive scale, unlike anything we've ever dealt with before. And I think that's where the problem comes into play: people just aren't aware of how quickly this can grow, and when it gets to that size, how much more difficult it can be for organizations to manage. Think about the average operational costs for a company that has a hundred people. It can be very difficult. You have to manage things like payroll, accounting, human resources, even just basic job functions. Now, if we take that 45 multiplier, all of a sudden you're dealing with 4,500 non-human identities. Do you have a good grasp on that? Most people don't. And I think that's where we really encounter the problems today: how do you deal with that scale? That's where people find a lot of difficulty. Martin Hinton (06:44) It sounds like the human condition to add and not subtract whenever we look to solve a problem is at play here. We don't take away any non-human identities. Let's add a few more and that'll solve the problem. Is that part of what we're encountering? Marshall Sorenson (06:57) If you see a breach in your sandcastle, you don't scrape out sand, you add on a little bit more when you're building one of those. So it's exactly the same. You tend to add more to solve a problem versus trying to take a look at how you got into the situation in the first place. Martin Hinton (07:18) So let's step back for a second. Myriad 360, what you do, the role you play, the business you're in is dealing with this issue. So how did you get here? Give me a short biography. How did you get to this spot? You know, non-human identities, the phrase conjures up all these sci-fi sorts of images in my head. I wonder whether you could tell me Marshall Sorenson (07:45) It does.
Martin Hinton (07:45) a little bit about the environment and the space, and a little bit about the services that Myriad 360 provides. Marshall Sorenson (07:52) Yeah, of course. So Myriad 360 is a global systems integrator. We're actually headquartered out of New York; I am based in Atlanta. We provide IT infrastructure integration worldwide, in 165 countries, I think we're up to. So we have a global presence that includes anything from modern data center, cloud, and networking services, all the way over to the cybersecurity group, which I find myself a part of. We really try to cover the entire spectrum of IT needs for companies, be they physical or virtual. Within that cybersecurity group, we have experts such as myself. I've really specialized in identity over the course of my career, and that spans back to my time as a software engineer. I think this is an important transition. Early in my career, when I was a software engineer and learning the ropes, one thing that was typically prioritized was speed, time to market. And I think you'll find that's still true today. I'm not the oldest in my career, but you'll find that people tend to stress, we need to get this out now. We need to build value immediately. That was true then, and it's true now. How that relates to non-human identities is that as a young software developer, I never really concerned myself with what I was creating in terms of automations or non-human identities or things like that. I was concerned with, can we get it out now? Can we build this value for the clients that I worked with at the time? Can we build this automation, this process, this service, what have you, and deliver it? There was never a thought, at least in my young software developer mind, about how do we make this secure, scalable?
How do we make sure that if we need to offboard these things, we decommission them properly? What you'll find is that a lot of people work in that same sort of manner. And I guess if I had to go back to my past self, I would say, no, you need to think about it a little bit more holistically. There tends to be that sort of economics conversation with respect to non-human identities: people tend to push toward time to market, time to value, a very common metric, and they don't really think about the broader implications of setting up these services, these non-human identities. So now what I find in my current role, and what I think Myriad 360 does very well, is exactly identifying those broader implications. How is this going to affect the organization at large? Some developer who builds out a feature, or even a department that helps push out some new service, might not have the entire picture in front of them when they go to build these things. That's where we come in, and we can say, yes, you're doing these things the right way. Or maybe you need to adjust here, maybe you need to make some changes, introduce some new tools to help manage it, and paint a broader and clearer picture of how these things have been done. So that's where we really excel. And I think that also extends to the wider landscape as well. I talked about physical versus virtual; we really try to cover the entire gamut of these implications. It's no secret that the IT world is very complex, but we try to make those complex things very simple for the people that we work with. Martin Hinton (11:41) So I know we've sort of touched on it, and you used the Ring camera analogy, but you just teased the question I want to ask. How would I encounter a non-human identity? Can you give me a specific example of a way you might interact with one, with particular platforms and devices?
How does it work? Marshall Sorenson (12:00) Yeah, sure. We'll use the Ring example, because I think it's something that most people would be accustomed to. When you set up a Ring camera, typically you have to allow the camera permissions to access your account, right? Unless you're buying a very specialized piece of hardware, a Ring camera always requires you to set up an account with Ring, or Amazon, whoever owns it these days. And when you actually go to set up the physical camera itself, it'll ask you, hey, can I have permission to use your account for certain functions, for object detection, for sending notifications? And naturally you say yes, because I want all these nice features. I want the benefits of having a Ring camera, allowing it to alert my phone when somebody is nearby. And in this case, where you encounter that non-human identity is the camera itself, right? A camera is not a person. That's not a groundbreaking statement, but it is effectively leveraging your permissions to perform a service on your behalf, let's say sending a push notification to your phone. Again, as I said earlier in the call, it is not a person sitting behind that Ring camera saying, there's the package delivery, we're going to send a notification to Martin and tell him this package is here. They built the code, the logic, into the camera, and then it uses your permissions to perform a service on your behalf. This is a non-human identity that is doing this. So it comes up very, very frequently in people's day-to-day lives. Martin Hinton (13:41) I mean, it seems to me that if you're asking the question of yourself now, how often do I deal with these, there's what you said a minute ago about it asking you a question. If you get asked a question while setting up a new piece of equipment or new software or whatever it might be, that question is invariably coming from a non-human identity.
Is that the idea? Marshall Sorenson (13:52) Mm-hmm. That's the idea. And I think that's the best distinction between what people might think of as a traditional automation and a non-human identity. It really lies in the permissions. Once you give something permission to perform a function that accesses a privileged system, such as an account or something that would otherwise be access-controlled, gated, once you delegate that permission, that, I think, is the threshold you cross. It goes from being a simple automation, a simple service, into a non-human identity when you say, yes, I will give access to my data, to my calendar, my Ring account, what have you. That, I think, is the specific point where it transitions. And that question can be phrased in a lot of different ways. It could be: do you allow us access to this account? If you use AI chatbots, and you're an employee in a company, it might say, hey, I need to get access to the company data folder. And you say, yeah, I'll allow that. At that point you have created a non-human identity, because you are giving it permissions to access a privileged system. But notice how quickly that happened. You just said yeah. You want the productivity gains. I talked about time to market earlier; people are saying, I want this quick, I want this now, I'm not going to think about what permissions it might be asking for, and I'm not going to really slow down to think. In the AI chatbot example: yes, I wanted to give it access to company data because I wanted to analyze some reports. You're not thinking, how am I going to decommission this? How am I going to plan for its eventual death, so to speak? You're just thinking, I need to analyze these reports right now. So it's very easy to create them, as I said.
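The threshold Marshall describes, where a plain automation becomes a non-human identity the moment you delegate permissions to it, can be sketched as follows. The scope names and the "minimum needed" sets are invented for illustration; real systems (OAuth consent screens, cloud IAM) express this with their own vocabularies.

```python
# Sketch: an automation crosses into being a non-human identity at the moment
# of the "Allow" click, i.e. when permissions are delegated to it.
granted: dict[str, set[str]] = {}  # identity -> scopes delegated to it

def grant(identity: str, scopes: set[str]) -> None:
    """The 'Allow' click: delegate some of your permissions to a service."""
    granted.setdefault(identity, set()).update(scopes)

def is_non_human_identity(identity: str) -> bool:
    # No delegated permissions: still just an automation. Any grant crosses the line.
    return bool(granted.get(identity))

def excessive_scopes(requested: set[str], minimum_needed: set[str]) -> set[str]:
    # The extra fine-print bullet points worth scrutinizing before clicking Allow.
    return requested - minimum_needed

grant("calendar-summarizer", {"calendar:read", "email:read", "email:send"})
```

Here a factory assembly step with no grants stays a plain automation, while the calendar summarizer, the instant it receives scopes, is a non-human identity, and `excessive_scopes` flags anything beyond the minimum it plausibly needs.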
But then there's also a problem where you typically don't scrutinize what permissions it asks for. And that goes back to the problem statement: it's the scale, and, we'll call it the difficulty, the obfuscation of their purpose, that makes this such a challenge for modern organizations. Martin Hinton (16:29) I mean, the ease with which they're created, it sounds like we're all the mothers and fathers of many non-human identities, with the click of a mouse and the not reading of terms of service. Marshall Sorenson (16:39) Yeah, it's true. It's something that anybody can really, I don't want to say, make the mistake of. When I was regaling you with my tale of being a software engineer, it's something I never really thought of. I said, of course I'll create this service, and of course I'll give it permission, because we're trying to ship out this feature. We're trying to unlock the productivity that a non-human identity or a service or some process might allow us. That's more the conversation; most people don't stop and ask, how could this negatively impact our security? How could this negatively impact our operations? So that's where I try to shift the mindset a little bit: you have to think about these things. Otherwise, you're going to be on the wrong end of the equation. Martin Hinton (17:34) Is there a distinction between automation and identity? Just to kind of drill down on that part of it. Marshall Sorenson (17:41) Yeah, and I think I touched on this briefly. It really has to do with permissions. Once it touches something that you need to explicitly allow access for, that's where it becomes a digital identity, a non-human identity. I'll sort of transition: in the manufacturing world, you have plants and
operations, processes that are automated. A factory that produces, well, let's say Ring cameras: there are going to be automations on the manufacturing line to assemble the motherboard, to put in the camera hardware, to assemble the final housing for that camera. These are all going to be automated steps, maybe with some human intervention, but there's not really permission required to do this. You want it to run automatically. It doesn't have to stop and ask, hey, I need access to some data, can you give that to me? It is running effectively without oversight, because it's not having to access anything privileged. It's just simple: put these parts together and create a final product. That's an automation as most people might think of it. And these things have existed without permissions for a long time. But as our world grows more and more interconnected, as data systems converge and spread apart, you need to have access that is given, and given freely. Not to say not monitored, but you need access that is given once, so you don't have to keep continuing to give that access. Because again, you want this to run automatically, you want it to run as a service. And so you don't stop to think, do we need to get permission every time? 99.9% of the time, yes, it is going to use these same permissions, so we'll just go ahead and grant this access on a continuous basis. So again, I think it boils down to the permissions. Does it have permissions to access something that it otherwise wouldn't be able to? That, I think, is the very core of the distinction between an automation and a non-human identity. Now, they obviously play with one another.
There are automations that use non-human identities, and non-human identities that use automations, but they are two distinct entities. And again, it boils down to permissions. Martin Hinton (20:27) I mean, it just occurs to me that we are conditioned to click yes, give access, or whatever it might be, whether it's Zoom getting into our calendar or, I mean, this is when you log on with Google, right? Is that one of the things we're talking about here? You use Google to log on? Am I right about that? Marshall Sorenson (20:39) Mm-hmm. 100%. And when was the last time you were performing one of those functions where you said, I'm going to log in with Google and allow such and such a service, the calendar summary application, say, to access my Google account? When you say, yes, I want to give permissions, inevitably, invariably, there will be a list of bullet points below that allow button saying it can access your username, it can access your calendar, obviously. But sometimes, not always, but sometimes, there are a lot more bullet points than just those two, than just what you might think is the basic permission set needed to perform a function. And that's dangerous. That's where people really get into trouble, I think, and where I try to caution them: don't just blindly click allow or yes or confirm. Read the permissions. I know it's probably going to slow you down, and you don't want that. You want to start being productive; I talked about time to market earlier. But what you might find, if people haven't done their due diligence, is that maybe that service is asking, hey, can I read all of your emails? Can I get access to this folder full of sensitive data? Can I actually make changes in your environment? Can I perform otherwise administrative functions? That might be buried in the fine print. Martin Hinton (22:14) So.
So you see access to my calendar, and well down the list you're describing is also remap your hard drive. Marshall Sorenson (22:25) Exactly. And it could be very, very insidious. I'm not accusing anybody, but that's where the problems start to occur: when you start to give too many permissions, more than you think is the minimum necessary to perform a simple function. And that's what people kind of skip over. Martin Hinton (22:44) The fifth-grader mind I possess has me thinking that I'm a super, and someone needs a key to a specific door. And instead of just giving them that key, I give them the whole spool of keys, the keys to everything, the keys to the kingdom. And now they can get into any door in the building, as opposed to just the one I want them to go into to check the electrical panel or whatever it might be. Marshall Sorenson (23:08) Right. Right. Yeah, how did they end up in the teachers' lounge? You know, well, you gave them the key. Martin Hinton (23:15) You gave them the key, and they used it because you gave them the key. And that permission, the giving is the permission, the accepting is the permission, that gives them the non-human identity, the ability to do things perhaps you wouldn't want. I want to move on, but one of the things we talked about in the setup call for this, when we were discussing it, was the train ticket analogy, or the MTA's turnstile analogy, to use the New York City example. Marshall Sorenson (23:25) Yes. Mm-hmm. Martin Hinton (23:41) I wonder if you could take me through that example, because it really helped me kind of comprehend some of what we're talking about. So I know we're sort of sidetracking here, but could you just take me through that example again? Marshall Sorenson (23:54) Of course. Now, Martin, I'm a big analogy guy.
So any analogies that we can use to help clarify these things, I think, are going to be helpful. If you've been through New York, or really any subway system, what you find is you buy a ticket, you tap it or swipe it on the reader, and the turnstile lets you through. Simple process, right? Well, there's a lot that goes on underneath the surface. Going back through history, I'll say a hundred years ago, if you were to hop on a train, there would be someone akin to a conductor, somebody who is going to physically check your ticket, either after you've gotten on the train or before, and say, okay, I see that Martin Hinton has provided me with his ticket. It allows him to go from point A to point B. It's valid for today's date, and other bits of information. That person is responsible for allowing you onto the train, into the system. They are performing that permissions check. But as populations grow, I mean, what was the population of New York City in 1926 versus today? It's a little bit different. It's grown a little bit. That human doesn't really scale all that well. You need to have an automated way of doing that. You can't have a person checking every single ticket every single day at every single station. You need a way to automate that process, and that's exactly where the turnstile comes in. At some point in history, what the MTA did is they said, well, obviously we can't have a person at every subway entrance checking every single ticket. We need to have an automated way of doing this. Enter the automated turnstile. That, if you've been paying attention, is where we cross into the realm of a non-human identity. The MTA has said, I want to give permission to this turnstile, this ticket station, to validate the permissions of this person, to validate that this subway fare is accurate, that it's valid, that it allows them to come into the system, onto the train.
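The conductor's check that the MTA delegates to the turnstile can be sketched as a single validation function. The ticket fields and rules below are invented for illustration; this is not how any real fare system works.

```python
from datetime import date

# Sketch of the turnstile analogy: the conductor's ticket check, delegated to
# a machine. The non-human identity's whole job is this validation, performed
# with no human in the loop.
def turnstile_admits(ticket: dict, today: date) -> bool:
    """Validate the fare: paid, and within its validity window."""
    return bool(ticket["paid"]) and ticket["valid_from"] <= today <= ticket["valid_to"]

ticket = {"paid": True,
          "valid_from": date(2026, 1, 1),
          "valid_to": date(2026, 1, 31)}
```

The point of the analogy survives in the code: the rules a human conductor once applied are now frozen into logic that runs on delegated authority, unattended, at every gate.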
They are not there checking every single ticket. They have to trust that the systems located within that turnstile, which maybe communicate with some central servers, are there to validate these permission sets. Again, it's not a human doing this. It's something they've delegated their permission to, this turnstile, so that it can check the ticket. That, I think, is a perfect example of a non-human identity. At some point, the MTA said, hey, we can't do this all on our own. We have to have an automated turnstile that can help check all this stuff as we grow and as we scale. So we're just going to trust that these turnstiles can do it for us, and maybe we check in on it every now and again for, you know, gross exceptions. But ultimately, we're going to delegate our trust and our permission to this ticket station to do it, because it's just easier. Martin Hinton (27:07) And sort of drilling down a little bit on the creation of liability, and the security issues that can create liability if you're not watching them: the idea is that you've got all these turnstiles now, and you've removed the human identity from the process of checking the fare and confirming a ticket's been purchased, or a MetroCard once upon a time, and now OMNY, or once upon a time before that, a token. The idea now is that you've given permission for all these machines to check, you've removed humans from the process, and whether or not they're able to properly regulate making sure people are paying their fare to ride the train or the bus is where the security flaw comes in, right? Because one of the things recently in New York City is people not paying for the bus, people not paying for the subway, and they've implemented all these things, from increased police patrols to new styles of turnstiles with Marshall Sorenson (27:34) Mm-hmm. Mm-hmm. Martin Hinton (28:03) kind of a brutalist architecture to them.
This idea is that that's where you've granted the permission, and those are all the sorts of add-ons we've been discussing to help address the downside of all these non-human identities, and that you didn't look for secure first, you looked for a frictionless environment to get people on the subway quickly. Marshall Sorenson (28:07) Right. Exactly, that's exactly right. I mean, you're trying to grow ridership. You're trying to increase your revenue from fares, and to do that, again without accusing anybody, you're taking shortcuts. You're trying to get this done quickly. You're trying to build something that could be available now, versus taking some time to think about the security implications. And then, as you said, you're having to tack on things later. You're having to add police patrols and introduce new types of turnstiles, when if you had addressed the core problem, you might not be in this situation in the first place. Martin Hinton (29:03) And this is one of the dilemmas. And again, you've done a good job of saying, you know, with the situation we find ourselves in with these things, it's very easy to point fingers, but there's a very good reason we've embraced these: for the purposes of efficiency and speed and that sort of thing. And invariably, sometimes things come along that you realize are unintended consequences or downsides of the improvement you made. And I guess that gets me to sort of the next question, the governance of all these things. And I love the... Marshall Sorenson (29:26) Mm-hmm. Martin Hinton (29:32) the idea, I think we talked about on the phone, that you've got to treat these non-human identities as though you can fire them, right? The idea that you can get rid of these things, you can delete them, I suppose, is maybe a simplified way to think about it.
So take me through sort of the framework for governance of these things, and the way you would think about liability, and the concern that these things might create problems for you that result in cost. So take me through the governance idea that... Marshall Sorenson (29:32) Yeah. Yeah. Mm-hmm. Martin Hinton (30:02) we chatted about. Marshall Sorenson (30:04) Yeah, of course. I think this is a great way to, again, demystify non-human identities, and hopefully people can come away with the idea that they're not fundamentally very different from people. And I think the techniques that you can use to govern them are, again, not that different from the techniques you use to govern people. So as we go through the best practices for governance, we'll switch to a different analogy, if you will, and this is your typical office building. In a typical office building, and maybe this is outdated, you're going to have somebody akin to a secretary, somebody that can check people into the building, go access certain files, deliver mail to various offices. That secretary has been given, you guessed it, permission to go to certain areas of the building. Based on his or her job function, the role, they can access different things, they can perform different services. Now, as time goes on, they've done a good job, and they start to get more and more permissions. Maybe since they've done such a good job last year, they've been given a promotion and can access the IT closet, and they can access servers, because they need to do something related to that. Or maybe they need to let an IT technician into the server room, or they need to allow a contractor on site to repair the AC, whatever the case is. Now, what happens if it comes time to fire that secretary?
Well, that individual represents, I don't want to say danger, but they have a lot of access. They've been given a lot of keys to the kingdom, as it were. And I think you can, again, think of how you would handle that secretary in a similar way to how you would handle these non-human identities. So, going through the life cycle of that individual: you would provision them, right? You would bring them on board, the secretary or non-human identity, and I'm going to use the terms interchangeably. You would bring them on board, you provision them, create them within your environment. You say, yes, allow access to my calendar, give access to my Ring camera. All right, great, you created them. Now, who is the owner of that? Who is the secretary's boss? Who is the owner of that non-human identity? It's really important to identify that, because if it comes time to make changes or even decommission those things, it's important to know who actually owns this. The authentication piece is also important. You want to make sure that if a secretary needs to get into the server room, you have strong locks, you have a deadbolt if needed, you've got security cameras watching it, making sure that these things are secure. You also don't want to give, at least at first anyway, unless it's really dictated by the job function, permissions to the entire building, so to speak. Maybe they don't need to get into the CEO's office right away. Maybe they don't need to get into the server room right away. It's important to understand what's the minimum necessary. When we talked about all those bullet points that a person might not read before accepting, those are permissions that might not even be needed. And so constraining those permissions is a good way to get ahead of this problem. Martin, go ahead.
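The lifecycle Marshall walks through, provision with a named owner and a minimal scope set, then decommission ("fire") cleanly, can be sketched as a tiny registry. All names here are illustrative; real organizations would do this with an IAM or secrets-management platform.

```python
# Sketch of the governance lifecycle: every non-human identity is provisioned
# with an accountable owner and minimum scopes, and fully revoked when fired.
class NHIRegistry:
    def __init__(self) -> None:
        self._identities: dict[str, dict] = {}

    def provision(self, name: str, owner: str, scopes: set[str]) -> None:
        # Onboarding: record who owns it and grant only the minimum necessary.
        self._identities[name] = {"owner": owner, "scopes": set(scopes)}

    def decommission(self, name: str) -> None:
        # Firing the identity: revoke it entirely, don't just stop using it.
        self._identities.pop(name, None)

    def active(self) -> list[str]:
        return sorted(self._identities)

registry = NHIRegistry()
registry.provision("secretary-bot", owner="office-manager",
                   scopes={"door:lobby", "mail:deliver"})
```

The design point is that decommissioning is as first-class as provisioning: an identity you can't cleanly fire is the one that lingers with its keys to the kingdom.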
Martin Hinton (33:43) I mean, it sounds to me, no, I was going to say you're describing a situation where you've got a secretary, or a non-human identity, that's guiding a thing, whether it's people or some sort of digital packet of information. Like in a physical office, my ID might open the main door, my office, my floor, but I can't just go to any floor. Maybe there's an executive wing or, like you said, a technical space that very few people need to be in with any regularity, and if I wanted to do something in there, it would require, you know, a call to security to say, hey, today I need to go into the file room on the B2 basement level, but my ID doesn't open it. You need to create that specialized moment. And theoretically, not unlike how you might give someone 24 hours of access, or a special code to get into a digital lock or something like that, these parameters should exist, Marshall Sorenson (34:25) Mm-hmm. Yeah. Martin Hinton (34:41) as opposed to sort of free-range access that would create the sprawl that we've heard so much about with regard to governance of data and that sort of thing. Marshall Sorenson (34:51) Yeah, so let's flip that on its head. Say that service you want to grant access to is a contractor who comes on site, and he says, I need to access the ventilation system. That's what you would think they would need access to. But when he comes in the building and speaks with the front desk, he says, I need access to the entire building. And you think, well, he's an AC repairman, the vents go everywhere in the building, let's just go ahead and give him access. But what you don't know is that this supposed contractor had actually been hired by a rival corporation to come in and steal files, you know, take pictures of diagrams and things like that. And you've just given them access to the entire building.
You know, did you take the time to run a background check on this individual? No. Did you take the time to verify that he actually needed access to the entire building? No. You said, I just want the AC repaired. And all of a sudden you catch him in the server room, you know, with a USB stick, and now it's a different sort of conversation. And again, drawing the parallels between human and non-human, it's the same idea: you can implement the same controls for a non-human as you would for a real person, to restrict, control, and constrain that behavior. Martin Hinton (36:13) It's so interesting. You touched on something. In one of the more recent podcasts I've done, a cybersecurity expert out of the UK and I were discussing the social engineering, human part of this. And he used an analogy about, you know, if you wear a high-vis vest, like a workman's reflective vest, and you walk up to a desk, it grants you instant authority. And in what you just said then, Marshall Sorenson (36:34) Yeah. Yes. Martin Hinton (36:41) it struck me that we think of these non-human identities in a similar way. They are workers. They are like worker bees, and they need to go where they need to go. And we have conditioned ourselves to think, yeah, they're going to do what they're supposed to. They would never do anything bad. They would never go somewhere they're not meant to. And that's a mistake. So we have this attitude about them like, oh, you know, here's the guy with the toolbox, or the plumber's kit, or the electrical box. Yeah, he knows what he's doing. He's in the room. He's in a vest. Marshall Sorenson (36:58) Correct. Martin Hinton (37:10) He's supposed to be here. Today I was told he only needs to go to rooms six and seven, but now he's like, I've got to do something in room 10 that I didn't think I'd need to. That's it, right?
I mean, how many movies have we seen where the heist starts with people pretending to be, you know, technicians or workmen who are coming to work on a building? Yeah. Marshall Sorenson (37:15) Mm-hmm. Oh yeah. Yeah. And Martin, what you're describing, within the cybersecurity industry, is what's known as a supply chain attack. This is where a third-party vendor, somebody you think you can trust, turns out to have maybe a less than above-board purpose, you know, be they intentional from the get-go. It could also be that that very same contractor has had the squeeze put on him and, you know, is being influenced by an outside organization. What you're seeing with large state-level actors is that they're leveraging these non-human identities, and people in some respects, to wreak havoc, to steal data. They're influencing these things. If you can compromise a service, say a group out of North Korea is able to compromise and control what is otherwise a relatively trusted service, now all of a sudden you're saying, oh, I trust this calendar organization app, without knowing that it's controlled by bad actors, and you've just let it into the building. That's a very concerning security event. But if you knew better, you wouldn't do that with a person. So why would you do it with a service or non-human identity? Martin Hinton (38:39) The concept of third-party risk or supply chain risk plays heavily into cyber insurance and the reality of underwriting that sort of thing. How does all this factor into that? Let's dive down into how this reality impacts the cyber insurance reality, if you will. You know, the sloppy third-party access controls and that sort of thing. What are we talking about? Marshall Sorenson (39:02) Very much so. Right.
Right, so I think it's a matter of degrees with respect to cyber insurance. One of the things we encounter at Myriad 360 all the time is we work with a lot of companies to boost their security awareness training. And I would pose the question to a theoretical cyber insurer: would you insure a company that does not do any sort of security awareness training? Probably not. Or you might insure them, but you would boost their premiums by a large factor. Now, what if they did do that security awareness training, but they just did it very poorly? You know, it's very old school, and it says, don't click phishing links, which everybody kind of knows, don't click random links in an email, but it doesn't cover things like deepfake attacks or, you know, the latest social engineering techniques. Well, would you still insure that company, or would you still give them the same premium? Probably not. And I think that same idea applies to non-human identities, and you're seeing a larger trend of people starting to pay more attention to it. So even if you think, oh, this doesn't affect me, you know, my organization, we have a good security posture, we have documentation and policy in place, the insurer might see it differently, because they're starting to pay attention to what damage these non-human identities can do. So again, it's a matter of degrees. If you have no controls in place around non-human identities, if you've just given broad permissions and you don't really control the life cycle, you're not interested in monitoring or offboarding them, you might still get insurance, but your premium's going to be through the roof. Now what if you have a little bit of control around the onboarding and offboarding, but you don't have good monitoring? You can't really see what they do after they've been given permission.
You don't see where that contractor's going within the building. Yes, you checked his badge, but you don't have a security camera. That's a different conversation. And what I think people should be paying attention to, I'll address from both sides of the equation. If you are an organization that is looking to get insured, you need to start paying attention to this, because, as we said earlier in the conversation, the scale is growing, and it is growing rapidly, especially with the onset of AI tools. I couldn't give you a number on the factor increase of these non-human identities, but it's large, and it's only going to continue to get larger. So not paying attention to it puts you behind the curve. And if you're on the cyber insurance side of things, there are direct historical events that have proven this stuff is worth caring about. There were two breaches last year, Salesloft and Gainsight, two of the most popular ones you'll find in the media, that indicate directly to these insurers that this is a real problem, and that if you are going to, you know, underwrite a company, you need to start paying attention to this stuff. It's no longer just, hey, do you have a complicated password? Do you have multi-factor authentication and, you know, push notifications and whatnot? It's: do you have a complete and clear picture of the non-human identities in your environment? If an auditor walked in the room, could you reasonably say with confidence what permissions every single identity has, what is risky, what is not, have we put in various controls, et cetera? So whether you think you care about these service accounts or not, people are starting to pay attention. The people that are going to be underwriting your policies, the people that are going to be writing compliance documents that you need to adhere to, they are going to be paying attention.
Martin Hinton (43:25) I mean, I don't think we talked about it, but some of what you just said made me think of all the stuff I've read really in the last week about the explosion of agentic AI kinds of concepts, where you have, you know, what is it people are buying? Is it the Mac mini everyone seems to be buying? And then they give it a task, and it goes into, you know, the root folders of the thing, and it's suddenly producing websites and doing things by itself. That idea that that's the Marshall Sorenson (43:47) Mm-hmm. Martin Hinton (43:55) you know, if you've got an agentic AI that can, you know, monitor your website or generate local news based on press releases from the government, or that sort of thing, and turn it into what looks like a newspaper article, that is all operating in a way that creates permissions, and it exists, and it's doing something on a very small scale. These things exist. They've existed in a similar way. I mean, am I conflating these two things, associating them incorrectly? Marshall Sorenson (44:24) No, you're 100% on the mark. And that's where I try to educate a lot of the clients I work with: AI is new. It feels new, and it is sort of a new paradigm for productivity, for content generation, for anything, really, depending on what AI model you use. But really, when you talk about permissions and the data it has access to, especially in the business world, a lot of the mechanisms that AI uses to operate are very old school. It could be as simple as the AI saying, hey, I just need a username and password in order to log in. Right there, you're giving permissions to that AI agent. And that username and password might not ever expire. You could go a more modern route with a standard called OAuth 2.0, which basically gives you a token in exchange for verifying, and then that token can be used.
So you're not sending out a username and password, but you're still giving it permission to access this environment. But the OAuth standard is well known. So my point is that, yes, AI feels new, but the way it accesses these things is very old school. It's not terribly different from what we see in the pre-AI age. There are new standards being developed, obviously, but for the most part, it's very old school the way these larger critical business systems need to be protected. So there again, though, how easy is it to say, hey, AI, I need you to, you know, access this data in this folder? And it says, great, do you consent? And you say, yeah, of course, I want to try to get this thing done. And right there, you're creating a non-human identity that you then need to manage and control. And are you sure that all of the permissions are valid? Have you done monitoring? Have you done your due diligence is really the core question. Do you trust that these things have been built correctly? Martin Hinton (46:31) We do, don't we? We do trust. We do. Marshall Sorenson (46:34) We trust a lot, that's my point, yeah. There's almost this blind willingness to say, yeah, I trust it. But, there again. Martin Hinton (46:40) Well, I mean, we've been conditioned to say yes to these things, that they are going to help. I mean, you touched on it, right? And this is something that has come up again and again in the podcasts I do and the reporting we do: one of the things cyber criminals take advantage of is that we're generally inclined to help. In a business environment, we want to get things done, and we want to do things efficiently and quickly. And sometimes we want to get them done before we have to leave on a Friday. Marshall Sorenson (46:46) Of course. Yes.
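The OAuth 2.0 exchange Marshall describes, credentials once in return for a scoped token, can be sketched concretely. This is a minimal illustration of the client credentials grant from RFC 6749, section 4.4; the client name and scope strings are made up, and a real request would be POSTed to the provider's token endpoint over HTTPS.

```python
from urllib.parse import urlencode, parse_qs

def build_token_request(client_id: str, client_secret: str, scopes: list) -> str:
    """Form-encoded body a non-human client sends once to the token endpoint,
    instead of embedding a user's password in every integration."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),     # what the identity is asking to do
    })

# The authorization server would answer with a short-lived access token,
# typically shaped like: {"access_token": "...", "token_type": "Bearer",
# "expires_in": 3600} -- the "wristband" the client presents from then on.
body = build_token_request("calendar-bot", "s3cret", ["calendar.read"])
```

The key design point is the `scope` field: it is where "broad permissions" get granted or constrained, long before any breach.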
Martin Hinton (47:10) These are all the realities of how these things help make any process move a little faster, and intertwined in that is an enormous vulnerability that has developed. Again, I know we've touched on this, but I always want to be careful that I'm not overstating it, because in this world there's a lot of the-sky-is-falling talk: it's trillions of dollars lost to cybercrime, it can happen to anyone, it's not a matter of if but when. There's a lot of that sort of talk. It's not hyperbolic. Marshall Sorenson (47:18) Yes. Mm-hmm. Martin Hinton (47:40) It is very truly something that small and large companies alike should be paying attention to. And even individuals: when you click yes to something to give it access, think about it is what we're talking about. We need to put a little friction into that process. Marshall Sorenson (47:55) I wouldn't say friction, but I think people just need to be aware of how much there is, how much is bubbling under the surface that you might not be aware of, that it is truly everywhere. You almost can't have a modern IT infrastructure without it. In fact, I would wager it's impossible. I would love to eat my words on that, but I really think it is core to how a modern business works and how they leverage information technology to get their job done. It is going to be everywhere. And so even just recognizing that fact is progress. Even just saying, yeah, okay, we have a lot of it, that's sort of step one, acceptance, if you will. Acknowledging that you have a problem is one of the first steps. And so I think that would be very important. But also understanding that Martin Hinton (48:46) Yeah. Marshall Sorenson (48:52) yeah, slowing down, introducing friction... maybe that's not the right way to put it. I don't like the word friction.
I wouldn't say it's friction to slow down and assess these things. I think it is wise, in fact. Our world moves very fast, you know, and I'm not saying we should be unproductive. I'm not saying we should be forsaking the productivity gains that these automations, these services can give us. But I think we should take more time to understand, to do that due diligence, to protect ourselves. Because ultimately, and no one person can do this, I'm not saying that, but ultimately, if you don't take care to understand the scope of the problem and the depth with which these things have come into our lives, then you're going to be behind the curve. And the good news is, I don't want to be all doom and gloom, but the good news is that there are processes, there are tools, there are people and groups out there specializing in the management of these things, in controlling the problem. To go back to the turnstile analogy, you have new police patrols, you have more modern turnstiles, yes. Should we be in this situation in the first place? Ideally not, but that's the world we live in. So it's not all doom and gloom. So I guess, look for organizations that have recognized this problem and are starting to take steps to address it, that are looking at the most modern tools. And if I could just put this thought out there: ideally, you would have identical controls for your non-human identities as for your human ones. In some senses, it should almost be an HR type of problem versus an IT one. IT shouldn't really have to go manage life cycles or say, Bob Smith got promoted, now we need to give him the key to the IT closet. Same thing with non-human identities. It shouldn't be an IT problem.
It should almost be treated like an HR function in some ways. Martin Hinton (51:19) You touch on a common theme: IT and cybersecurity may overlap, but they're not the same thing. There's a distinction there, and increasingly, the explosion of these kinds of vulnerabilities has created specialized skills that fall outside traditional IT skill sets and into this new world of cybersecurity. There are two points you made that I just want to reiterate for purposes of agreement. And I'll say this: we have benefited enormously Marshall Sorenson (51:28) Yeah. Right. Martin Hinton (51:49) from technology, and it's been remarkable. And to your point earlier about the word friction, I tend to agree, and I use the word friction because of Steve Jobs and the iTunes store and that sort of thing. But this idea that we want to move and do things with efficiency and effectiveness, right? That's what gets left out. There are two E's: efficiency and effectiveness. And the other point you made about people solving this problem, I've started to come around to an idea that I'm going to call the Volvo solution. Marshall Sorenson (52:01) Right. Martin Hinton (52:17) You think about the car: when the car was created, it didn't have windshield wipers or airbags or seat belts. Those safety or resilience devices took decades to come along. I think it was the 60s and 70s when we started to see three-point seat belts and airbags in collisions. And obviously now we have a great deal more technology that's been introduced to vehicles to make them safer, whether it's a physical thing or a technical Marshall Sorenson (52:35) thing. Martin Hinton (52:44) And we're in a similar moment now with regard to technology, with regard to non-human identities, as well as other things.
That there is now this sort of, you know, middle-aged reality: well, we need to start eating a little better, and maybe, you know, not eat as much red meat and go to the gym a little more. And it's an analogy I use quite often about how to keep the whole system healthy, as in the whole body healthy. You need to eat right, you need to exercise, you need to sleep well. There's more than one thing to do to solve a problem. Marshall Sorenson (53:00) Mm-hmm. Martin Hinton (53:12) And it sounds to me like we're in a situation with regard to non-human identities that mimics that pattern of the evolution of anything, really. Marshall Sorenson (53:20) Yeah, I would agree with that. It is almost a culture thing. You know, it's not some new statement to say that organizations have adopted a security culture because they have to. You have to be aware. You have to be knowledgeable about the threats that are out there, or I wouldn't even say threats, good hygiene, almost, within your organization. You know, everybody's accustomed to getting the notification every now and again that says, hey, it's been 90 days, you need to rotate your password, just in case it has been guessed by a bad actor. Go ahead and rotate it. That's good hygiene. That's good practice. And everybody's come to expect that. Now, most of the time, though, that's not a person coming down and saying, hey, you need to rotate your password. That's a tool. That is a system itself that is trying to enforce that culture. And so the successful organizations, the ones that are, what I would say, ahead of the curve in this respect, that are, you know, going to the gym, eating healthy, so to speak, are the ones that have been using the tools that are out there to enforce a culture change. You know, as I said, if you could do one thing, it's just be aware of the problem.
Having tools out there that say, you know, be aware of when you click on that allow, or, you know, have visibility into this permission that you give, and make sure that you go back and review it every now and again to say, do we still need to give this permission? That will help build that culture of understanding what a non-human identity is, what it can do, the potential risk it represents. You know, I don't like to always throw tools out there. In some senses, I like to think of my role as something akin to a psychiatrist. You know, I didn't develop the drugs, but I know how they work together. Sometimes I can offer verbal affirmations or breathing exercises as an alternative, but I can understand how the complete picture fits together. And when I talk to my patients, my clients, the ones that are doing very well are the ones that have used these tools, again, to enforce that culture shift, the ones that have said, okay, we're going to have visibility. We're going to have, if nothing else, ownership of these things. Because, with rare exceptions, they don't just appear out of nowhere. Almost always it is going to be a person, as we said, that clicks allow, that says yes, that says confirm, that is going to be creating this thing. And so it should be that person's responsibility to own it, to manage it, to monitor it, and if necessary, to decommission it. So again, the tools can help enforce this, but ultimately it's going to become... Martin Hinton (56:14) ... Marshall Sorenson (56:19) Again, the more successful ones are the ones that have built it into their cultural DNA. Martin Hinton (56:24) You touched on a couple of things that I routinely raise on the podcast, and that's the idea that so much of this feels new, the technology. I mean, I'm old enough, and I'll say this for people who watched this far into the podcast, I mean, I'm 55, I paid my rent with a stamp on an envelope and a mailed check, right?
So I think that, I can't tell you how much better it is to be able to Venmo or Bilt or Marshall Sorenson (56:44) Right. Martin Hinton (56:51) whatever, Revolut, whatever the hell you might be using to pay bills, and do it from your phone while you're sitting on the bus, as opposed to having to sit down at a desk and write checks. We've been through all this before. And again, this idea that there are solutions, and, you know, invention and progress always result in, nothing's a straight line, all those types of things. And again, I have to reiterate that the problem is there to be solved. But when it comes to the problem, explain to me how Marshall Sorenson (57:00) Mm-hmm. Martin Hinton (57:18) a bad actor, a threat actor, a cyber criminal takes advantage of these. Like, what does an attack look like? If you have vulnerable non-human identities, or too many, how does someone take them and, you know, profit from them and create the data breach or whatever it might be? Marshall Sorenson (57:28) Yeah. Yeah, and you know, I know our target audience, so this is where I really want the people that are underwriting these policies to pay attention, because this is important. In most of these breaches, what people are taking advantage of, if I can sum it up in two things, is overly permissive identities and a lack of monitoring. I'll say it again: overly permissive identities and a lack of monitoring. So let's break that first one down. Overly permissive. It's pretty straightforward. When we talked earlier about all those bullet points that say, allow me access to all of these different systems, that's bad. That could be bad, depending on the system. And that's exactly what happened. I mentioned two of the breaches in recent history. These were both in 2025: Salesloft and Gainsight. What happened?
I'll use Salesloft in this example. They had an AI agent that had been given broad access to things like Salesforce, which is a customer relationship management tool, and to Google information, so data, email, and to Microsoft as well, so again, data, email, documents. This AI agent had been given broad permissions to access all of these different systems for the purposes of its productivity suite. Again, going back to trying to unlock productivity and efficiency gains. But it had been given very broad permissions to access all of these things. And whether or not it needed all of that information, well, that's a matter for the history books, I think. But what happened is attackers ended up exploiting that permission, because they were able to siphon off data, documents, emails, using the credentials that were created to access all of these different systems. So what happened was, again, this is prior to the breach, somebody within the organization, and it's not just one organization, I should say, with Salesloft I believe it has now affected upwards of 700 organizations that have been exposed to this breach. At each point, though, somebody in that organization had to say, yes, we will allow this. Supposedly we've done our due diligence, we've done a review, and we say, yes, we will allow it to access our Salesforce, our Google data, our Microsoft data, and we accept that. We accept the broad permissions it is asking for, because we've done that trade-off, you know, that risk-reward calculation. We trusted them. We're giving it these broad permissions. And when that happens, when somebody says yes, in this case it was using OAuth tokens, which, again, basically means instead of a username and password, it's going to give you a token that you just go out and present. It's basically like when you go to a bar and show your ID, they'll give you a wristband.
And you don't have to show your ID every time at the bar. You just show the wristband and you say, hey, I've already been checked once. Same idea with OAuth tokens. So these tokens had been issued, and they'd be reissued. They say, all right, well, you have a pink wristband, it's 10 o'clock, we need to switch you to a blue wristband, but we see you already have one, here you go. That's OAuth in a nutshell. What happened was these attackers got a hold of a pink wristband, so to speak. And what they did was they said, hey, I have a pink wristband, I can go to the VIP section, right? And the bouncer says, yeah, of course, you have a pink wristband. You're fine to go wherever you want, because that's the VIP wristband. You can get anywhere in the bar or the club that you want to. And that's exactly what happened with the Salesloft breach. These OAuth tokens had access to Salesforce, to Google, to Microsoft, and attackers were able to get access to those systems without having to go through traditional checks. They weren't asked to re-authenticate. Nobody asked to see their ID again when they started accessing Google and Microsoft and Salesforce. They just started siphoning off data, and the systems said, yeah, OK, because you have a pink wristband, you must be this AI agent, you must be this person we've already authenticated. We're not going to re-verify. Because that's how most of these automated systems work. If you remember, I harkened back to the manufacturing example. You want an automated manufacturing line to just proceed. You don't want to stop and check every single time you need to access something or, you know, change the tooling or anything like that. You just want it to work. So they were able to access these systems. That's problem number one. Problem number two: lack of monitoring.
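The wristband problem above is that a bearer token is honored by whoever holds it. A common mitigation is narrow scopes and short lifetimes, checked on every request rather than only at issuance. Here is a minimal sketch of that idea; the token store, scope names, and lifetimes are all illustrative, not from any real OAuth server.

```python
import time
import secrets

TOKENS = {}   # token value -> {"scopes": set of strings, "exp": epoch seconds}

def issue_token(scopes, ttl_seconds=900):
    """Issue a short-lived bearer token ("wristband") with explicit scopes."""
    tok = secrets.token_urlsafe(16)
    TOKENS[tok] = {"scopes": set(scopes), "exp": time.time() + ttl_seconds}
    return tok

def authorize(token, required_scope):
    """Check the wristband on EVERY request: existence, expiry, and scope."""
    info = TOKENS.get(token)
    if info is None or time.time() > info["exp"]:
        return False                  # expired wristbands stop working
    # A wristband issued for "crm:read" should NOT open "email:read".
    return required_scope in info["scopes"]
```

Had the stolen tokens in the scenario Marshall describes carried only the scope the agent actually needed, possession alone would not have opened Salesforce, Google, and Microsoft all at once.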
The problem with how they accessed this data is not just that they accessed it, but that they did so in a way that bypassed traditional monitoring systems. They were able to spoof their IP address. They were able to spoof other behavioral indicators that would otherwise say, hey, this is abnormal. In the Salesloft breach in particular, most organizations lacked the monitoring capability to say, hey, this is abnormal. This Salesloft agent has never accessed this data folder before. It has never accessed this set of emails. That's abnormal. Why is it now extracting this email, or, you know, forwarding this document to an unknown email address? They didn't know, because they didn't have sufficient controls in place. So it was given broad access to all of these systems, and it was not monitored against a behavioral baseline, shall we say. And because of the combination of those two factors, that's why these breaches were so successful. And that, I mean, if you ask me, and if you ask around, is exactly what attackers are looking for. We've had decades of improvements and work done to lock down the human identity, because everybody assumes, oh, it's a person, obviously they have broad access, they need to do their job. And so most of the security work has been done to lock down human access, but not nearly as much has been done around the non-human side of things. And that's exactly what attackers are exploiting. They say, nobody's monitoring it, they're very often given broad permissions, and oftentimes very long-lived permissions, so we're going to go out and exploit that. Especially because of how these human systems have been built: one of the most common ways of protecting a human is with multi-factor authentication. So you get a six-digit PIN code sent to your phone, and you have to enter that on the website, right?
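The behavioral baseline Marshall describes can be sketched very simply: record what each identity normally touches, and flag first-time access to a resource for human review. This is a toy illustration of the idea, not a real detection product; the identity and resource names are made up.

```python
from collections import defaultdict

class BaselineMonitor:
    """Flags an identity's first-ever access to a resource as anomalous."""

    def __init__(self):
        # identity -> set of resources it has touched before
        self.seen = defaultdict(set)

    def observe(self, identity, resource):
        """Record an access; return True if it deviates from the baseline
        (i.e. this identity has never touched this resource before)."""
        anomalous = resource not in self.seen[identity]
        self.seen[identity].add(resource)
        return anomalous
```

A real system would baseline volumes, times of day, and source addresses too, but even this crude "never seen before" check would have surfaced an agent suddenly reading email archives it had no history with.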
That's a very common thing that I think most people have experienced. And there are new iterations of that, but you can't exactly do that with a non-human identity. If you have an automation running that says, generate me a report, but it also needs to access a third-party database, when it accesses that third-party database, you cannot, in most cases, ask it for multi-factor authentication. Do you want to be the one responsible for approving it? If it accesses it three times an hour, are you really going to sit there and say, OK, approve, OK, approve, OK, approve, every time that has to happen? No. So you're giving it permission to access that third-party database, but you're also saying, I'm just going to trust it. I'm not going to re-verify, because this is a trusted process. And also, how bad could it be? That's a question people tend to ask. They say, how bad could it be? Martin Hinton (1:05:57) I mean, you touch on the human element, right? We hear all about, you know, 80 to 90% of cyber breaches being down to human error or social engineering or a phishing link. And you listen to people like the Marks and Spencer chairman, Archie Norman, talk about having 50,000 employees and any one of them could be a way in. And there's a lot of attention on this. I feel like Marshall Sorenson (1:06:17) Mm-hmm. Martin Hinton (1:06:23) we need to start thinking about every non-human identity as a similar vulnerability point. Is that crazy? Marshall Sorenson (1:06:28) Yes, 100,000%. Now, I don't want to say it's every single one. If you're on an air-gapped network and it's never going to see the rest of the internet, maybe it's a different conversation. Maybe you treat that risk a little bit differently. But the truth of the matter is, yes, everything is potentially a threat vector. It could potentially be, you know, the bomb that goes off that you never saw, that you never saw the clock ticking on.
And I think that, if you've been paying attention so far, that should scare the daylights out of you, because I've talked about the scale, I've talked about the lack of monitoring, and now I've talked about what happens when you don't pay attention and things go wrong. You're talking about critical data, and lots of it, being extracted through channels you weren't even paying attention to. You had so much focus on the front door, and all the security cameras pointed there at the people coming into the bar, that you didn't notice the contractor who had pulled employee records and is now going out the fire escape. It's the channels that people don't pay attention to that I think are getting them in trouble. Martin Hinton (1:07:47) So we've been talking a little over an hour, so I just want to move to the close. One of the things that we discussed in advance is some practical takeaways. If you've heard some of this, or you've heard a little bit of it, and you're like, I don't even know the answer to what vulnerability we have. So for the small business owner or a CISO or a broker or underwriter, we've got a few questions. I'm just going to read them and we'll go through them, and we can come back. Passes are allowed. So if you want to take a pass and come back to it, go ahead. Marshall Sorenson (1:07:51) Mm-hmm. Let's go for it. Martin Hinton (1:08:16) Can you inventory every identity with access to critical data, human and non-human, in your systems? Marshall Sorenson (1:08:22) Yeah, if I'm a small business owner and I was presented with that question, most likely the answer would be no. And I think that's important. As you scale up, the answer becomes closer to yes. But I think most people are going to say no to that. They're going to say, if an auditor walked in the room, I don't know. That's the truth of the matter.
To help answer that question, though, I think, again, it goes back to that tooling. If you can at least understand the scope of the problem, if you introduce tools or processes that can say, hey, I want to get better, if I can at least see everything in my environment, that is already a great step forward. Regardless of how you control them, at least seeing them is important. Knowing where every contractor in your building is located, knowing all the rooms the secretary has access to, that's a very big step forward. Martin Hinton (1:09:21) Yeah. So in the cyber insurance and incident response world, what about after the fact? Can we prove who accessed what, where, when, and why, after something happened? Marshall Sorenson (1:09:31) Yeah, that again goes back to the monitoring that I highlighted with the Salesloft incident: enforcing that sort of behavioral control, enforcing just basic audit logging. As we've said before, things like AI are new. They feel new, they are new, but the ways that they access these systems are very old school, and most of them are going to be producing their own audit logs. So if I'm a director of security, or if I'm a small business owner, regardless of who you are, I want to ask the question: do I have sufficient logging in place? Do I have that converging into a single source of truth where I can investigate all of these logs, all of the events going across my systems? Again, regardless of whether you have mature processes to act upon all of this data, as long as you have it in a single place and you have sufficient telemetry, that is a good step forward. That is a good starting point. Martin Hinton (1:10:28) I'm imagining, in the ideal environment, you've got a sort of mosaic of your infrastructure, your IT environment, all your connected devices and systems and that sort of thing. And the non-human identities appear like dots, and there are some that can go only in the green space, and they're green dots.
And if you see a green dot in the red space, it's in the wrong place. That's the monitoring idea you want. Not unlike if you were looking at, I don't know, a military base or an office building where you can sort of see where people are at all times. And when a person who does not have permission to be somewhere finds themselves there, you know it. Is that what we're talking about with the monitoring? Like, you've got to know what's going on in your environment, digitally and physically. Marshall Sorenson (1:11:03) Mm-hmm. You have to. You have to put in the digital equivalent of security cameras. If you go to one of those shiny office buildings in New York, they're going to ask you to sign in, check in at the front desk, or sign the guest book. Say, it's me, I was here at this time, this date, et cetera, et cetera. They'll give you a temporary pass. But you can guarantee those security cameras are going to be on the entire time. Organizations need to adopt the digital equivalent of all of these controls. Again, I'm going to draw the parallel to controlling human access: you would obviously monitor a person in your office environment. You need to do the same thing with your non-humans, your non-people. Martin Hinton (1:11:55) So I want to move to wrapping up. One of the things that I mentioned is we obviously didn't get to everything. Is there anything we didn't discuss that you'd like to mention, or anything we did discuss that you'd like to say a little more about? Marshall Sorenson (1:12:03) Mm-hmm. That's a good question. I mean, if I could stress one point, it's that this is not a very complex topic. It's the scale that we're really contending with.
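(Editor's note: the "single source of truth" for audit events that Marshall recommends can be sketched like this — a toy Python illustration with made-up identity names, standing in for a real SIEM or log-aggregation pipeline.)

```python
import datetime

class AuditLog:
    """Toy single source of truth: every system appends structured events here."""

    def __init__(self):
        self.events = []

    def record(self, identity, action, resource):
        # Structured, timestamped events — human and non-human alike.
        self.events.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "identity": identity,
            "action": action,
            "resource": resource,
        })

    def who_accessed(self, resource):
        """The incident-response question: who touched this, and when?"""
        return [e for e in self.events if e["resource"] == resource]

log = AuditLog()
log.record("report-bot", "read", "db/customers")   # non-human identity
log.record("alice", "read", "db/customers")        # human identity
log.record("report-bot", "write", "s3/reports")

# After an incident, prove who accessed what:
hits = log.who_accessed("db/customers")
```

The point isn't the data structure — it's that every identity's actions land in one queryable place, so "can we prove who accessed what, where, and when" has an answer.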
I think what people need to pay attention to is that a lot of the habits you've designed around human identities need to be developed in a similar fashion for non-human ones. I can't stress that enough. And if you're an insurance organization and you're looking to underwrite an organization, or if you're an organization that's looking to be underwritten, you need to adopt these hygiene habits and controls and really make them a part of the culture. That, I think, is the important part. And that can come from the top down. I know there are some people who subscribe to the philosophy that, hey, this needs to start from the bottom up, everybody needs to be responsible. But in some cases, it's difficult to lead a horse to water and then make it drink. In some cases, it's easier to say, hey, look how tasty this water is, and then the horses will come themselves. So I think that's also true when it comes to non-human identities: it can be enforced from the top down. And, yeah, go ahead. Martin Hinton (1:13:21) No, because I've heard that sentiment about cybersecurity very broadly, that we spend a lot of time making employees take their quarterly phishing training or whatever it might be that gives us the ability to hopefully make sure we're a little safer, but also check that box on our cyber insurance policy and that sort of thing. And the idea that I've heard expressed is that the importance of this needs to be more of a CEO, board-level reality, so that it... Marshall Sorenson (1:13:42) Yeah. Martin Hinton (1:13:51) Like you said, sort of cascades down through an organization, and then also there's the training that can sort of augment that mindset. Yeah. Marshall Sorenson (1:13:54) Yeah. But I would say, to the CEOs and CISOs listening, it is not entirely your responsibility. Don't feel like you have to entirely shoulder the burden, because it is a shared responsibility.
And that's true of most areas of cybersecurity. You see that organizations, the successful ones anyway, are adopting a shared responsibility or shared risk model. And that applies to everything in cybersecurity, including what we've talked about with non-human identities. You need to understand that no one person should feel that they own the entire picture. And I don't think anybody really should own the entire picture. It should be a shared model, a shared community of responsibility for this stuff. And reinforcing the culture through tooling is a great way to start to build that shared responsibility model. And then you can take that in any direction you want, as long as it's sort of ingrained with everybody that it's a shared thing. Nobody's going to get it all right on their own. Nobody's going to be a hundred percent. That's where I think people can start to make some great steps. And that, I think, is where perhaps Myriad can provide a lot of help. We can do our own part to help create that shared responsibility model. Nobody's going to know everything about cybersecurity. I sure don't. But together, I think if we build that community and build that partnership, that's where we can start to be really ahead of the curve. Everybody has their own area of responsibility, but they share it with everybody else. There's cross-collaboration, there's cross-pollination. That's where I think we play very well. Martin Hinton (1:15:44) You sort of take us into the last point I wanted to touch on, and it's the idea that as much as this stuff is everywhere and there's been a sprawl of it, that's only going to increase with AI, right?
If I could put you on the spot to look into the future and give business owners or individuals one piece of advice or one warning for this new age of agentic AI, AI agents, autonomous activity with no human, never mind in the loop, but even on the loop. What do you see coming in this space, and what caution or cautionary tale, without scaring people, because again, I think there's a lot of upside to what's going on. But what do we need to be thinking about for the future, as much as any of us can predict the future? Marshall Sorenson (1:16:24) Yeah. I mean, depending on your opinion of bubble or not, I think AI is going to stick around. And I think what's important now is for organizations to understand that, whether you realize it or not, and again, not to scare people, we are in an arms race, Martin. Right now we are in an arms race between people using AI for good and people using AI for bad. And if you are trying to fight the bad guys who are using AI with traditional tools, you're going to lose that battle nine out of ten times. Maybe 99 out of 100 times, whatever stat you want to use. The reason for that is it unlocks a lot of speed. So to the ones listening, I would say don't be afraid of fighting fire with fire, as it were. Don't be afraid of using these tools that people are building and designing to combat this problem, because if you don't, you might end up on the other side of that equation. So when I predict the future, I see a very, very large increase in the adoption of AI-powered tools to help fight this problem, both from the management aspect of it as well as from the security aspect of it. As I said, it's not going away. I only see that adoption increasing.
And so, to end on an optimistic note, I would say: don't be afraid of using that tooling, don't be afraid of using AI to help empower your organization, because there again, it's a shared responsibility. You can share some of that risk with AI tools in your environment. You can say: you, AI, are going to be responsible for maybe cutting through some of the noise and finding that signal, that needle in the haystack, where it would take me days to find that needle. AI can do it in seconds now. So don't be afraid of using these tools. And also don't be afraid of the amount of tools out there. Obviously there's a lot out there, and people might get overwhelmed, but there will always be new players. There are always going to be new angles of attack. So just keep working with it. Don't try to be kind of a Luddite, in some senses. Really use the tools that are available to you to create that shared responsibility model and to fight this in a modern way. Martin Hinton (1:19:00) So I spent six years writing and directing military history documentaries. And, well, escalation maybe isn't the right word, but you see this in history. And my point is that, again, this sort of issue and the way we address it is not new, right? The enemy starts to build bigger battleships, you build bigger battleships. They adopt air power, you build an air force, right? There are very practical things. And that doesn't mean you wind up going to war. Marshall Sorenson (1:19:20) Right. Martin Hinton (1:19:27) It just means that you look like a less vulnerable target, and maybe they go somewhere else, or they leave you alone, or you negotiate it in a less dynamic way. And it is, again, one of the things that I think can often kind of feel like the weight for small business owners, even large business owners and individuals: that all this is new. It's very abstract because it's digital and we don't see it.
But the fact of the matter is we've dealt with big problems like those we're discussing today and Marshall Sorenson (1:19:37) That's correct. Martin Hinton (1:19:56) ongoing issues, and there's a perseverance required. I mean, it's a process. There's no real end game here; we're going to continue to adjust and adapt, just like you might continue to exercise or eat well in order to stay healthy in your own body. Anything else, Marshall, before we wrap up? Marshall Sorenson (1:20:01) Yes. Exactly. No, I just want to say it's been a pleasure to be on. I really do feel passionately about this stuff. I think that we have perhaps a little bit of a scary future ahead of us, but also a very exciting one. I like to be on the side of the good guys. I think we stand at kind of a... I don't want to say a precipice, because that sounds dramatic, but I think we stand kind of at the edge of a new frontier when considering AI in this equation. But just as people tackled a new frontier with covered wagons and horses and conquered it anyway, I think that analogy applies here as well. Martin Hinton (1:20:39) I agree with you. I mean, I have a 25-year-old and a 19-year-old child, and talking to them about the world, I think you could think that the sun is setting and that the sky is falling, or you can look at the world and see it as a dawning, and that there are new problems with every day. And in the classic business sense, you build businesses that solve problems, right? That make money solving problems. And I think there is the opportunity to be in that situation now, whether it's with regard to non-human identities or something even further afield from the technical world. It's important to remember that. So I think that's a really, really good note to end on. So Marshall, thank you so much for the time. I really, really enjoyed the conversation.
Contact information for Marshall and Myriad 360 is in the show notes. If you've got a question or anything like that, please drop it in there and I'll answer it. And if I can't, I'll get it to Marshall; he'll see it. But again, Marshall, thank you so much. I'm Martin Hinton. Marshall Sorenson is with Myriad 360. He's out of Atlanta, but Myriad 360 is based in New York City, and he's a cybersecurity solutions architect who's been talking to us today about non-human identities and all that that means. I really, really enjoyed it. So, thank you very much. Marshall Sorenson (1:22:06) Yeah, thank you, Martin. Have a great one. Martin Hinton (1:22:08) My pleasure. Everyone else, thank you so much for watching. If you could, like and subscribe. Again, like I've said, leave a comment. We're always interested in your thoughts and your feedback. And if not, enjoy the rest of your day. Thanks again. I'm Martin Hinton, Executive Editor of Cyber Insurance News. Thanks for watching.