Fools Rush In – “Bypassing Security”
“Organizations are bypassing security and governance for AI in favor of do-it-now AI adoption.” That line in the press release announcing IBM’s latest Cost of a Data Breach Report 2025 rings with urgency. It evokes a line penned by English poet Alexander Pope in 1711: “Fools rush in where angels fear to tread.” Written centuries before AI, Pope’s caution is one organizations may soon wish they had heeded.
“The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it,” said Suja Viswesan, Vice President, Security and Runtime Products, IBM.
According to the report, 13% of organizations experienced breaches involving AI models or applications. Of those, 97% lacked basic access controls. The result? Data compromises for 60% and operational disruptions for 31%. In Pope’s phrasing, fools have indeed rushed in – and the angels are staying back.
The Human Cost of Ignoring AI Oversight
AI isn’t just a system upgrade. It’s a wholesale transformation of operations, decision-making, and risk. Yet, according to IBM’s study, many organizations treat it as a plug-and-play feature. The report shows that most AI deployments lack clear oversight. Of the companies surveyed, nearly two-thirds had no policy for managing AI. Among those that did, only 45% enforced strict approval processes.
This lack of discipline isn’t theoretical. It leads to tangible damage. Organizations with no governance policies were more likely to suffer operational disruption, reputational harm, and financial loss. Those that didn’t audit AI use also saw higher average breach costs: more than $4.6 million, compared with $4.1 million among those with oversight mechanisms.
Shadow AI: The Risk You Don’t See
Shadow AI isn’t a theoretical risk. It’s already embedded in many workplaces. Employees use unauthorized tools to enhance productivity. Sometimes it’s a chatbot to write reports. Sometimes it’s a code assistant integrated into development environments. What they have in common is that they operate outside IT’s visibility.
According to the report, shadow AI incidents resulted in broader and deeper breaches. They exposed more personal data, especially customer PII, which was compromised in 65% of such breaches. In contrast, authorized AI systems had lower PII exposure rates.
The danger isn’t just the breach; it’s the domino effect. Shadow AI often stores data across multiple environments, which makes detection and containment harder. These breaches took, on average, an additional week to resolve.
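Because these tools operate outside sanctioned channels, many security teams start with the one thing they can see: outbound traffic. Below is a minimal illustrative sketch of flagging potential shadow-AI usage in egress proxy logs. The log format, the domain watchlist, and the sanctioned-tool list are all assumptions made for this example, not anything prescribed by IBM’s report.

```python
# Minimal sketch: flag potential shadow-AI traffic in egress proxy logs.
# The domain lists and CSV columns below are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical examples only; a real deployment would maintain far
# larger, regularly updated lists.
SANCTIONED = {"copilot.internal.example.com"}
AI_WATCHLIST = {"api.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, domain) to AI services IT hasn't sanctioned."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: user, domain
            domain = row["domain"].lower()
            if domain in AI_WATCHLIST and domain not in SANCTIONED:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user} -> {domain}: {count} requests")
```

In practice this job usually falls to secure web gateways or CASB tooling rather than ad-hoc scripts, but the principle is the same: you can’t govern traffic you never look at.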
Attackers Use AI Too – And They’re Getting Better
AI isn’t just a risk because it can be breached. It also fuels smarter attacks. IBM’s report found that 16% of all breaches involved attackers using AI tools. These attacks are no longer crude spam emails. Now, phishing messages are hyper-personalized. Deepfake impersonations sound and look real.
These tools let attackers scale their efforts quickly. The report found AI reduced the time to craft a convincing phishing email from 16 hours to just 5 minutes. It’s not just volume; it’s precision. And the results speak for themselves.
Operational Fallout and Long-Term Damage
A breach doesn’t end when the system is restored. It continues in the form of lost trust, damaged reputation, and rising costs. According to IBM, only 35% of breached organizations fully recovered. For most, the road to recovery stretched beyond 100 days. Some took over 150.
Even more concerning, nearly half of all organizations raised their prices after a breach to offset losses. About one-third hiked prices by 15% or more. That burden doesn’t fall on shareholders; it lands on customers.
Operational disruption also spiked: 86% of organizations reported disruptions, including halted production lines and failed service delivery. In sectors like healthcare and finance, these aren’t just inconveniences; they’re life-or-death scenarios.
The Tools Are There. Why Aren’t They Used?
Security AI and automation offer proven benefits. Organizations using these tools extensively had the shortest breach lifecycles (204 days) and the lowest breach costs ($3.62 million). Yet only 32% of companies use them widely.
This adoption gap is puzzling. The tools exist. The benefits are proven. But companies hesitate. Maybe it’s cost. Maybe it’s inertia. Either way, the result is the same—higher risk and greater damage.
Meanwhile, attackers evolve. The report calls this the “AI arms race.” Those who fail to adapt will fall behind and get breached.
To Err Is Human. But Will AI Forgive?
IBM’s 2025 report isn’t just a snapshot. It’s a warning. It shows how quickly organizations are adopting AI without understanding the stakes. We’ve seen this story before: in cloud adoption, in mobile devices, in every rushed tech deployment that preceded AI.
“The report revealed a lack of basic access controls for AI systems, leaving highly sensitive data exposed, and models vulnerable to manipulation,” said Viswesan, adding, “The cost of inaction isn’t just financial, it’s the loss of trust, transparency and control.”
Alexander Pope’s wisdom echoes once more: “A little learning is a dangerous thing.” We’re still learning what AI can do. But it’s already doing damage when left ungoverned. And Pope’s other reminder? “To err is human; to forgive, divine.” That’s hopeful when humans are involved. But will AI, in the hands of error-prone humans, be so forgiving?
Plain-Language Analogy
Imagine AI is like a new teenage driver. Companies hand it the keys to a car without seat belts or brake checks. They’re thrilled with the speed but ignore the black ice ahead. Shadow AI? That’s your teen secretly borrowing the car. Now add deepfake attacks; that’s like flashing a fake license at a traffic stop. And when something crashes? Companies act surprised they didn’t have insurance.
Alexander Pope warned of fools rushing in. But he also warned of shallow knowledge and human error. With AI, we’re rushing, learning little, and hoping nothing goes wrong. But hope, as another saying goes, is not a strategy.
Key Findings
- $4.44M – Global average cost of a data breach dropped by 9%, driven by faster detection aided by AI and automation.
- 97% – Share of organizations breached via AI that lacked proper AI access controls, often exposing sensitive data and operations.
- $4.92M – Malicious insider attacks had the highest average breach cost among initial threat vectors for the second year.
- $670K – Additional breach cost for organizations with high levels of shadow AI, which also exposed more sensitive data and intellectual property.
- $1.9M – Average savings from extensive use of AI and automation in security operations, cutting breach lifecycle by 80 days.
- 63% – Organizations that refused to pay ransoms, a growing trend despite the high cost of disclosed ransomware attacks.
- 49% – Breached organizations that plan to invest in new security measures—down from 63% in the previous year.
- 63% – Breached organizations that lack AI governance policies or are still developing them, leaving AI largely unmonitored.
- 1 in 6 – Breaches involved attackers using generative AI, especially for phishing (37%) and deepfake impersonation (35%).
Methodology
The findings in IBM’s Cost of a Data Breach Report 2025 are based on in-depth research conducted by the Ponemon Institute. Researchers analyzed breach data from 600 organizations across 17 industries and 16 countries between March 2024 and February 2025. Each organization had experienced a data breach involving 2,960 to 113,620 compromised records. The study involved 3,470 interviews with executives, IT leaders, and security professionals directly involved in breach response. Using activity-based costing, IBM and Ponemon quantified breach impacts across four key categories: detection and escalation, notification, post-breach response, and lost business. This year’s report also introduced new metrics focused on AI governance, shadow AI incidents, and the use of AI by attackers in security events.
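To make the activity-based costing concrete, here is a toy illustration that sums per-category costs into a total, using the four categories the report names. The dollar figures are invented for the example (chosen so they happen to total the report’s $4.44 million global average); they are not taken from the IBM/Ponemon data.

```python
# Toy illustration of activity-based costing across the report's four
# breach-cost categories. Dollar figures are invented for the example.
breach_costs = {
    "detection and escalation": 1_600_000,
    "notification": 400_000,
    "post-breach response": 1_200_000,
    "lost business": 1_240_000,
}

total = sum(breach_costs.values())
for category, cost in breach_costs.items():
    print(f"{category}: ${cost:,} ({cost / total:.0%} of total)")
print(f"total breach cost: ${total:,}")
```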