Estimated reading time: 6 minutes
We’ve seen this before: a new technology emerges, companies rush to get ahead, and governance tries to keep up. According to Delinea’s latest research, AI risk has reached this point. About 90% of organizations are pushing cybersecurity teams to relax identity controls so AI projects can progress. This tradeoff increases the risk of unauthorized access, data breaches, and loss of control over machine identities, shadow AI, and privileged access.
The report, Uncovering the Hidden Risks of the AI Race, which surveyed more than 2,000 IT decision-makers worldwide, found that leaders want to deploy AI faster. This puts more pressure on security teams to ease identity controls, which creates oversight challenges.
Art Gilliland, CEO of Delinea, states: “The pressure to move fast on AI is real, but identity governance has not kept pace.” This gap creates material exposure across enterprise environments.
AI Security Confidence Collides With Reality
Organizations say they feel ready for AI, but the report points out an “AI security confidence paradox.”
Eighty-seven percent believe their identity security is good enough for AI adoption, but 46% admit their governance falls short, showing weak validation practices.
The report says 82% of organizations are confident they can find non-human identities, yet fewer than a third actually validate those identities in real time.
This gap creates hidden AI risks by allowing unauthorized or unnoticed activity. Companies that trust their systems without full monitoring or audits may not catch data leaks, fraud, or system manipulation.
Identity Visibility Gaps Expand With AI Adoption
The research found that blind spots are common in identity environments. Nearly 90% of organizations report having at least one visibility gap.
The biggest gaps are with machine and non-human identities, such as AI agents, service accounts, and automated processes. A chart on page 9 shows these ongoing gaps in cloud, SaaS, and AI environments.
AI environments have the highest risk of long-term visibility problems. These gaps make it more likely that suspicious or harmful activities in AI systems will go unnoticed for a long time.
When security teams lack visibility, they can’t spot unusual behavior or respond quickly to threats from AI operations. This makes security breaches, insider threats, and misuse of sensitive data more likely.
Non-Human Identities Drive New Threat Vectors
As AI grows, the number of non-human identities increases quickly. These now far outnumber human accounts. The report mentions industry estimates with ratios as high as 82 to 1.
About 42% of organizations say that AI growth has made non-human identity risk much higher. Only a few say they are not concerned.
AI agents work on their own and make decisions based on context. When they ask for new permissions or higher privileges, there’s a risk of unauthorized access or mistakes. This makes traditional security models harder to manage.
The report states: “AI capability equals AI risk.” Greater access enables greater functionality. That same access increases exposure.
The Cyber Insurance News Podcast has covered AI risk extensively, including an episode featuring Chris Kelly, President of Delinea.
Other Podcast Content on AI Risk
- AI Risk: The Insurance Industry Faces a Faster, Bigger Ransomware Repeat
- AI Risk: The “Black Hole” Problem With Contracts, Data, and Liability
- AI Risk Today: The Bank-Wire Future Of Every Interaction
- Cyber Liability Insurance Underwriting: Non-Human Identity Controls Matter
- Non-Human Identity: The 45:1 Cyber Insurance Risk
Standing Privilege And Limited Traceability Raise Alarms
Many organizations rely on persistent access for AI systems. This approach creates long-lived credentials and standing privileges.
About 59% of organizations say they have no good alternative to standing access. At the same time, 80% can’t always explain why an AI identity took a privileged action.
When actions can’t be traced, it’s harder to hold people accountable or respond to incidents. Attackers can use these blind spots to move through systems, raising the risk of bigger security problems.
Security experts warn that machine identities often lack ownership. This gap leads to unmanaged access and overlooked risks.
Shadow AI Adds Another Layer Of Exposure
The report highlights the rise of shadow AI. Employees deploy unsanctioned tools and agents without oversight.
About 53% of organizations encounter unauthorized AI tools accessing systems. Detection often takes hours or days. Only 28% report real-time detection capability.
Shadow AI behaves like compromised credentials. It operates with trusted access and remains difficult to detect, increasing the chances of data leaks, unauthorized data manipulation, and the spread of malicious software across enterprise networks.
Experts warn that decentralized AI deployment creates centralized risk. Employees prioritize productivity. Security teams lose visibility.
Attackers Shift Focus To Identity Infrastructure
Threat actors increasingly target identity systems. The report highlights ransomware, credential theft, and identity-based attacks.
Attackers use AI to map privileges and identify high-value systems. They exploit identity providers and single sign-on platforms.
The report states: “Legitimate access does not mean safe access.” Attackers often use valid credentials rather than traditional exploits.
About 92% of organizations expect AI to amplify identity-related threats such as credential stuffing and privileged account compromise, opening the door to unauthorized access and significant security breaches.
Security Teams Trade Control For Speed
Organizations prioritize innovation speed over governance. The report shows consistent pressure to reduce security friction.
On page 15, data shows 90% of organizations push teams to loosen access controls. Many grant exceptions or disable safeguards.
While this tradeoff improves operational efficiency, it also introduces unmanaged access risk, potentially leading to unauthorized system access, data loss, and a higher likelihood of cyberattacks. Identity often becomes the weakest link.
Experts compare this pattern to previous technology shifts. Cloud adoption and BYOD followed similar paths. Security often lagged behind innovation.
FAQ: AI Risk and Identity Security
What is AI risk?
AI risk refers to the security, operational, and governance threats tied to AI systems and agents.
Why does AI increase identity risk?
AI creates more machine identities, more access paths, and more chances for weak oversight.
What did the Delinea report find?
It found that 90% of organizations pressure security teams to loosen identity controls for AI.
What is the AI security confidence paradox?
It describes the gap between high confidence in AI readiness and weak real-world governance.
What are non-human identities?
They are machine accounts such as bots, service accounts, and AI agents.
Why are non-human identities risky?
They often have broad privileges and weak monitoring, which raises security exposure.
What is shadow AI?
Shadow AI is the use of unapproved AI tools or agents without formal IT oversight.
Why does standing privilege raise alarms?
Always-on privileges give attackers more opportunity to exploit those identities.
How can security teams manage AI identity risk?
They can improve visibility, validate activity in real time, and enforce least-privilege access.
Why does identity visibility matter?
Organizations cannot manage AI risk until they know every identity and every privilege in play.
Related Cyber Liability Insurance Posts
- AI In The SOC: Alert Overload, Human Judgment, And Hidden Risk Shape Security’s Next Chapter
- Cyber Warfare Escalates Worldwide As AI Turns Digital Conflict Into Constant Pressure
- Why Cyber Insurance Underwriting Is Moving Beyond Questionnaires – NEW PODCAST
- As Cyber Insurance Growth Stalls: Report Shows Europe Key to Rebooting Market
- The Hidden Costs of Cyberattacks on Small Businesses