Resisting The New-New-Shiny Thing Is Hard!
UpGuard’s new “State of Shadow AI” report lands with a jolt. The study shows widespread use of unapproved Artificial Intelligence at work. “8 out of 10 employees” use unauthorized tools, according to the companion press release. Security leaders participate, too. “68% of security leaders… admit to incorporating unauthorized AI,” the release states.
The underlying report echoes the same pattern at global scale. It finds 81% of workers use unapproved AI tools. It also finds 88% of security leaders do the same. AI governance be damned.
The AI Risk Paradox
Training is not slowing Shadow AI. UpGuard highlights a paradox: workers with more AI safety training report higher use of unapproved tools. The report finds a positive correlation between AI literacy and Shadow AI behavior, and it frames these trained, frequent users as “AI power users.”
Greg Pollock, UpGuard’s head of Research and Insights, captures the tension. “Our data shows that increased security training and literacy does not curtail increased shadow AI usage; in fact, it increases it.” This challenges standard compliance playbooks.
Trust Is Shifting Away From People
Trust is moving toward machines. One in four workers says AI is their most trusted information source. That share rises in some regions and industries. UpGuard links trust in AI with greater Shadow AI usage. The more workers trust AI, the more they bypass policy.
Data Exposure and Regulatory Pressure
Data leakage worries dominate. Security leaders rank “employees sharing confidential data with AI tools” as the top concern. The report notes widespread awareness of sensitive data flowing into AI systems. Workers and security teams both observe leaks involving financial data, strategy, PII, passwords, and source code.
This creates clear AI Risk for regulated sectors. HIPAA, GLBA, SOX, and GDPR exposure grows with every pasted snippet. Model training by third-party providers can compound risk if prompts feed future outputs. Contract gaps and unclear data retention increase liability.
Blocking Fails – Workarounds Flourish
Hard blocks do not solve the problem. UpGuard reports that 45% of employees who hit blocked apps find workarounds. It also notes that 90% of workers never notice security blocking an AI tool at all. That gap suggests blocking often targets imagined behavior rather than real usage.
The press release makes the same point, citing a slightly lower figure: “The problem cannot be solved by blocking applications, as 41% of employees find a way around it.” Employees choose unapproved tools because they are easier to use, and UpGuard’s data supports that convenience drives behavior.
Seniority and Geography Matter
Shadow AI usage rises with seniority. Executives report the highest rates of regular use. The report also shows regional differences among security leaders. Usage is highest in the U.S. and India. It is lower in Canada and in parts of APAC where regulatory standards are advancing. Culture and law appear to shape risk appetite.
Corporate Governance and Insurance Implications
Shadow AI undermines governance by weakening audit trails and incident response. It complicates eDiscovery and privilege, and it can breach vendor contracts and NDAs. It raises questions of confidentiality obligations and trade secret protection.
For cyber insurers, the signal is blunt. AI Risk correlates with poor control effectiveness. Carriers will look for measurable AI governance and ask for an inventory of AI tools and data flows. They will want approved providers, enterprise contracts, and model isolation. They will weigh guardrails, human-in-the-loop controls, and DLP coverage across prompts and outputs.
Claims risk expands across several scenarios:
- Privacy and PII breach: Employees paste sensitive data into public AI tools. Logs or training corpora later expose it.
- IP leakage: Source code and trade secrets move to third-party models. Ownership and confidentiality weaken.
- Defamation and misinformation: Model outputs create reputational harm or regulatory scrutiny.
- Regulatory penalties: Unlawful cross-border transfers and profiling trigger enforcement.
Coverage and exclusions will be tested by prompt leakage, model misuse, and contractual indemnities. Expect tighter underwriting questionnaires and AI-specific conditions.
What Works: Visibility and Guided Enablement
UpGuard argues for a pivot from fear to enablement. The report urges visibility into AI usage, intelligent guardrails, and vetted tools. Make the secure path faster and easier. Provide enterprise agreements, logging, and role-based controls, and pair awareness training with usable alternatives.
In short, measure before you mandate. Inventory tools, map data classes, and define allowed use cases. Attach legal addenda to AI vendor contracts and require that enterprise data be excluded from model training. Log prompts and outputs for high-risk roles. Standardize red-teaming and approval workflows.
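For teams acting on the logging and DLP points above, here is a minimal sketch of what a prompt-level screening check might look like before text reaches an external AI tool. The patterns, the screen_prompt function, and the role list are illustrative assumptions, not anything prescribed by the UpGuard report.

```python
# Minimal sketch of a prompt-level DLP check, assuming a hypothetical gateway
# that screens text before it is sent to any external AI tool.
# Patterns and roles below are illustrative examples, not from the report.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

# Roles whose prompts are always logged, per the "high-risk roles" guidance above.
HIGH_RISK_ROLES = {"finance", "legal", "engineering"}


def screen_prompt(prompt: str, role: str) -> dict:
    """Flag sensitive data classes in a prompt and decide whether to log it."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return {
        "allowed": not findings,                            # block if anything sensitive matched
        "findings": findings,                               # which data classes triggered
        "log": role in HIGH_RISK_ROLES or bool(findings),   # log high-risk roles and all blocked prompts
    }


if __name__ == "__main__":
    result = screen_prompt("Customer SSN is 123-45-6789, please summarize the account.", "finance")
    print(result)  # {'allowed': False, 'findings': ['ssn'], 'log': True}
```

In practice the same check would sit in a proxy or browser extension in front of approved tools, so the secure path stays fast while risky prompts are flagged rather than silently passed through.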
Methodology Snapshot
The report draws on two surveys: 1,000 workers in the U.S. and U.K., and 500 security leaders across the U.S., Canada, APAC, and India.