AI Risk Grows As Firms Sacrifice Identity Security For Speed


We’ve seen this before: a new technology emerges, companies rush to get ahead, and governance tries to keep up. According to Delinea’s latest research, AI risk has reached this point. About 90% of organizations are pushing cybersecurity teams to relax identity controls so AI projects can progress. This tradeoff raises the risk of unauthorized access and data breaches, and erodes control over machine identities, shadow AI, and privileged access.

The report, Uncovering the Hidden Risks of the AI Race, which surveyed more than 2,000 IT decision-makers worldwide, found that leaders want to deploy AI faster. This puts more pressure on security teams to ease identity controls, which creates oversight challenges.

Art Gilliland, CEO of Delinea, states: “The pressure to move fast on AI is real, but identity governance has not kept pace.” This gap creates material exposure across enterprise environments.

Uncovering the Hidden Risks of the AI Race – Delinea

AI Security Confidence Collides With Reality

Organizations say they feel ready for AI, but the report points out an “AI security confidence paradox.”

Eighty-seven percent believe their identity security is good enough for AI adoption, yet 46% admit their governance falls short, a sign that readiness claims are rarely validated.

The report says 82% of organizations are confident they can find non-human identities. However, fewer than a third actually validate these identities in real time.

This gap creates hidden AI risks by allowing unauthorized or unnoticed activity. Companies that trust their systems without full monitoring or audits may not catch data leaks, fraud, or system manipulation.
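To make the validation gap concrete: the sketch below shows what a real-time, deny-by-default check on a non-human identity could look like. The record shape, field names, and scopes are illustrative assumptions, not anything specified in the Delinea report.

```python
from datetime import datetime, timedelta, timezone

class MachineIdentity:
    """Hypothetical record for a non-human identity (service account, AI agent)."""
    def __init__(self, name, owner, scopes, expires_at):
        self.name = name
        self.owner = owner          # accountable human or team; None = unowned
        self.scopes = set(scopes)   # permissions the identity may exercise
        self.expires_at = expires_at

def validate_in_real_time(identity, requested_scope, now=None):
    """Deny-by-default check run at the moment of each access request."""
    now = now or datetime.now(timezone.utc)
    if identity.owner is None:
        return False, "no accountable owner"
    if now >= identity.expires_at:
        return False, "credential expired"
    if requested_scope not in identity.scopes:
        return False, f"scope '{requested_scope}' not granted"
    return True, "ok"

agent = MachineIdentity(
    name="report-summarizer",
    owner="data-platform-team",
    scopes={"read:reports"},
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(validate_in_real_time(agent, "read:reports"))   # (True, 'ok')
print(validate_in_real_time(agent, "write:payroll"))  # denied: scope not granted
```

The point of running the check per request, rather than trusting a one-time inventory, is that an expired credential or out-of-scope request is caught at the moment it happens rather than in a later audit.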

Identity Visibility Gaps Expand With AI Adoption

The research found that blind spots are common in identity environments. Nearly 90% of organizations report having at least one visibility gap.

The biggest gaps are with machine and non-human identities, such as AI agents, service accounts, and automated processes. A chart on page 9 shows these ongoing gaps in cloud, SaaS, and AI environments.

[Bar chart: identity visibility gaps across AI environments, cloud infrastructure, SaaS integrations, legacy systems, and DevOps pipelines]

AI environments have the highest risk of long-term visibility problems. These gaps make it more likely that suspicious or harmful activities in AI systems will go unnoticed for a long time.


When security teams lack visibility, they can’t spot unusual behavior or respond quickly to threats from AI operations. This makes security breaches, insider threats, and misuse of sensitive data more likely.

Non-Human Identities Drive New Threat Vectors

As AI grows, the number of non-human identities increases quickly. These now far outnumber human accounts. The report mentions industry estimates with ratios as high as 82 to 1.

About 42% of organizations say that AI growth has made non-human identity risk much higher. Only a few say they are not concerned.

AI agents work on their own and make decisions based on context. When they ask for new permissions or higher privileges, there’s a risk of unauthorized access or mistakes. This makes traditional security models harder to manage.

The report states: “AI capability equals AI risk.” Greater access enables greater functionality. That same access increases exposure.

The Cyber Insurance News Podcast has covered AI risk extensively. This episode features Chris Kelly, President of Delinea.


Standing Privilege And Limited Traceability Raise Alarms

Many organizations rely on persistent access for AI systems. This approach creates long-lived credentials and standing privileges.

About 59% of organizations say they have no good alternative to standing access. At the same time, 80% can’t always explain why an AI identity took a privileged action.
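The alternative the 59% say they lack is usually just-in-time (JIT) access: issuing a short-lived credential tied to a stated reason, instead of a standing privilege. The sketch below is a minimal illustration of that idea; the function names, TTL, and log format are assumptions for the example, not a description of any vendor's implementation.

```python
import secrets
from datetime import datetime, timedelta, timezone

audit_log = []  # answers "why did this identity take a privileged action?"

def grant_jit(identity, scope, reason, ttl_minutes=15):
    """Issue a short-lived credential instead of a standing privilege."""
    token = secrets.token_hex(16)
    expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    audit_log.append({
        "identity": identity,
        "scope": scope,
        "reason": reason,          # captured at grant time, so actions are traceable
        "expires_at": expires_at.isoformat(),
    })
    return token, expires_at

def is_valid(expires_at, now=None):
    """A credential is only honored before its expiry."""
    return (now or datetime.now(timezone.utc)) < expires_at

token, expires_at = grant_jit("invoice-agent", "read:ledger",
                              reason="monthly reconciliation run")
print(is_valid(expires_at))  # True while within the 15-minute TTL
```

Because the reason is recorded when access is granted rather than reconstructed afterward, this pattern directly addresses the 80% who cannot explain why an AI identity took a privileged action.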

When actions can’t be traced, it’s harder to hold people accountable or respond to incidents. Attackers can use these blind spots to move through systems, raising the risk of bigger security problems.

Security experts warn that machine identities often lack ownership. This gap leads to unmanaged access and overlooked risks.


Shadow AI Adds Another Layer Of Exposure

The report highlights the rise of shadow AI. Employees deploy unsanctioned tools and agents without oversight.

About 53% of organizations encounter unauthorized AI tools accessing systems. Detection often takes hours or days. Only 28% report real-time detection capability.

Shadow AI behaves like compromised credentials. It operates with trusted access and remains difficult to detect, increasing the chances of data leaks, unauthorized data manipulation, and the spread of malicious software across enterprise networks.

Experts warn that decentralized AI deployment creates centralized risk. Employees prioritize productivity. Security teams lose visibility.

Attackers Shift Focus To Identity Infrastructure

Threat actors increasingly target identity systems. The report highlights ransomware, credential theft, and identity-based attacks.

Attackers use AI to map privileges and identify high-value systems. They exploit identity providers and single sign-on platforms.

The report states: “Legitimate access does not mean safe access.” Attackers often use valid credentials rather than traditional exploits.

About 92% of organizations expect AI to amplify identity-related threats, potentially increasing risks such as credential stuffing and privileged account compromise, leading to unauthorized access and significant security breaches.


Security Teams Trade Control For Speed

Organizations prioritize innovation speed over governance. The report shows consistent pressure to reduce security friction.

On page 15, data shows 90% of organizations push teams to loosen access controls. Many grant exceptions or disable safeguards.


While this tradeoff improves operational efficiency, it also introduces unmanaged access risk: unauthorized system access, data loss, and a greater likelihood of cyberattack. Identity often becomes the weakest link.

Experts compare this pattern to previous technology shifts. Cloud adoption and BYOD followed similar paths. Security often lagged behind innovation.

FAQ: AI Risk And Identity Security

1. Why does AI increase identity risk?

AI creates more machine identities, more access paths, and more chances for weak oversight.

2. What did the Delinea report find?

It found that 90% of organizations pressure security teams to loosen identity controls for AI.

3. What is the AI security confidence paradox?

It describes the gap between high confidence in AI readiness and weak real-world governance.

4. What are non-human identities?

They are machine accounts such as bots, service accounts, and AI agents.

5. Why are non-human identities a problem?

They often have broad privileges and weak monitoring, which raises security exposure.

6. What is shadow AI?

Shadow AI is the use of unapproved AI tools or agents without formal IT oversight.

7. Why is standing access risky for AI agents?

Always-on privileges give attackers more opportunity to exploit those identities.

8. How can organizations reduce AI risk?

They can improve visibility, validate activity in real time, and enforce least-privilege access.

9. What is the main takeaway for cybersecurity leaders?

Organizations cannot manage AI risk until they know every identity and every privilege in play.
