Cyber Risk Management Lags Behind AI Adoption, Report Finds


Enterprises are accelerating their deployments of artificial intelligence, yet cyber risk management practices continue to lag, according to a new global study from OpenText and the Ponemon Institute. The report highlights a widening gap between innovation and security readiness, raising concerns about AI governance across the cyber insurance and cybersecurity sectors.

The study, Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI, surveyed 1,878 IT and security professionals around the world to see how organizations handle AI-related risks. The results show that while AI use is growing quickly, most companies still lack strong governance, security controls, and operational maturity.

Rapid AI Adoption Outpaces Cyber Risk Management Controls


The pace of adoption shows in the numbers: 52% of organizations have fully or partially deployed generative AI solutions, according to the report.

But this growth is not matched by improvements in cyber risk management. Only 43% of organizations use a risk-based approach to govern their AI systems, and nearly 79% have not reached full AI maturity in cybersecurity, leaving them more exposed to serious incidents.

Muhi Majzoub, OpenText's EVP of Product & Engineering, emphasized the importance of foundational controls: “Security and governance are foundational to getting real value from AI.”

Organizations are rapidly deploying AI without aligning it to structured cyber risk management strategies, significantly heightening uncertainty for insurers assessing AI-driven risk exposure.

Low AI Maturity Weakens Enterprise Security Posture

The report identifies low AI maturity as a central risk factor. Only 21% of organizations report achieving full AI maturity, where systems are fully deployed and risks are assessed.

On page 6 of the report, a maturity model shows that 42% of organizations remain in early adoption stages. Another 37% report partial deployment with limited oversight.

This lack of maturity undermines cyber risk management outcomes. Organizations struggle to measure AI effectiveness and to integrate risk metrics into decision-making.

The report also notes that mature organizations use KPIs and regularly brief executives on AI-driven risk reduction. Without these practices, organizations lack visibility into cyber risks and into gaps in their response planning.



AI Governance And Resource Constraints Limit Progress

Organizations face persistent governance challenges that hinder effective cyber risk management.

According to the chart on page 32, 50% of respondents say it takes “too much staff to implement and maintain AI-based technologies,” and 44% say “the staff does not have enough time to integrate AI-based technologies.” Staffing demands stood out as the leading concern.

Another 46% cite insufficient budget as a key barrier.

Time constraints compound the problem, with respondents reporting too little time to integrate AI into security workflows. These operational gaps limit organizations' ability to implement consistent and effective cyber risk management controls.

Only 41% of organizations have AI-specific data privacy policies in place. Because of this shortfall, regulatory compliance becomes more complicated, and exposure to privacy-related claims increases.

AI Introduces New Threat Vectors And Amplifies Existing Risks

The report highlights a growing threat landscape driven by AI capabilities. On page 31, phishing and social engineering attacks lead at 40%, followed by ransomware at 34% and denial-of-service attacks at 33%.

AI-generated attacks now account for 27% of incidents, reflecting a rising trend in automated threat activity. Agentic AI introduces additional complexity. About 55% of respondents believe that AI agents increase the risk of data theft.

Furthermore, 66% say AI agents make intrusion detection more difficult due to increased stealth and automation capabilities, compounding existing security challenges.

Taken together, these findings indicate that cyber risk management must evolve to address the new and complex AI-driven attack vectors emerging in today’s landscape.

Trust, Explainability, And Reliability Remain Key Barriers

Organizations report limited confidence in AI-driven security tools. Only 51% say AI effectively reduces the time to detect anomalies and threats.

Fewer than half, at 48%, believe AI improves threat detection and analysis.


Bias and model risk are also significant concerns: about 62% of respondents say it is difficult to minimize bias in AI systems. Operational issues affect performance as well, with around 45% citing errors in AI decision rules and 40% reporting poor data quality.

These limitations force organizations to maintain human oversight. More than half of the respondents confirm that human involvement remains necessary.

This ongoing reliance on human oversight, however, reduces the benefits of automation and further complicates cyber risk management strategies.


Cyber Risk Management Gaps Impact Compliance And Insurance Readiness

The report shows that AI adoption complicates compliance efforts: 59% of respondents say AI makes it harder to meet privacy and security regulations.

At the same time, governance frameworks remain inconsistent across organizations, further compounding compliance issues.

A notable gap exists between executives and operational teams. On page 20, 81% of C-level leaders report formal AI policies, compared to only 41% of technicians.

This disconnect not only creates uneven cyber risk management practices across enterprises but also adds complexity to security readiness. For cyber insurers, these inconsistencies increase underwriting complexity and pose challenges for risk assessment.

Industry-Wide Implications For Cyber Insurance And Security Leaders

Collectively, the findings signal a shift in how organizations must approach cyber risk management in AI-driven environments.

Organizations should focus on matching AI adoption with clear governance, identity management, and explainability frameworks to handle new cyber risks.

The report stresses that managing non-human identities is important, since machine identities are increasing quickly and bringing new vulnerabilities. Majzoub reinforced this approach, stating that organizations must build “transparency and control into AI from the start.”

Cyber insurers will likely adjust underwriting models to reflect AI-related exposures. Security leaders, in turn, should prioritize continuous monitoring, policy enforcement, and risk-based governance to close the maturity gap.

FAQ: Cyber Risk Management And AI Security

1. Why is cyber risk management important for AI adoption?

It ensures security, compliance, and trust while organizations deploy AI technologies at scale.

2. What percentage of organizations have mature AI security programs?

Only about 21% of organizations report full AI maturity with risk assessment and governance in place.

3. What are the biggest AI-related cyber risks?

Key risks include data theft, bias, misinformation, and AI-driven cyberattacks like phishing and deepfakes.

4. How does AI impact threat detection?

AI improves speed but still struggles with accuracy, bias, and reliability in threat detection.

5. Why do organizations struggle with AI governance?

They face limited budgets, lack of skilled staff, and insufficient time to integrate AI securely.

6. What role does explainability play in AI security?

Explainability helps teams understand AI decisions, improving trust and regulatory compliance.

7. How does AI affect regulatory compliance?

AI increases complexity, making it harder for many organizations to meet privacy and security regulations.

8. Are AI agents increasing cyber risk?

Yes, many experts say AI agents raise risks like data theft and make intrusion detection harder.

9. What should companies prioritize for better cyber risk management?

They should focus on governance frameworks, continuous monitoring, identity management, and human oversight.
