Netskope Threat Labs reviewed 2025 to anticipate the next set of enterprise threats, a look back meant to help security teams prepare for 2026. The 2026 Cloud and Threat Report covers global trends in cloud use, phishing, and malware, and tracks the rapid growth of genAI and the data leaks that come with it. The report describes itself as “a critical preview of the challenges and risks” ahead, and it attributes most of that risk to unregulated genAI use in daily work: employees move quickly, while controls lag behind. AI risk now runs through everyday workflows, policy gaps, and identity-based attacks. This mix of shadow AI, overtrusted integrations, and poisoned packages turns the software supply chain into a high-speed conveyor belt for stolen data and malware.
“GenAI adoption has shifted the goal posts,” said Ray Canzanese, Director of Netskope Threat Labs. “It represents a risk profile that has taken many teams by surprise in its scope and complexity, so much so that it feels like they are struggling to keep pace and losing sight of some security basics,” added Canzanese.
Report Scope And Measurement Window
Netskope based the report on anonymized data from its Netskope One platform, covering October 1, 2024, to October 31, 2025. The report frames 2026 as a year of compounding risk, warning that teams must manage longstanding threats and new genAI risks at the same time.
In This Report
Netskope highlights five trend lines with direct relevance for cyber insurers and security leaders. SaaS genAI use tripled, and prompts rose sixfold in one year. GenAI policy violations doubled, and the average firm saw 223 incidents per month. Personal cloud apps drove 60% of insider incidents. Phishing clicks fell, yet 87 of every 10,000 users clicked each month, and Microsoft led brand impersonation. Malware still rode the coattails of trusted services like GitHub, OneDrive, and Google Drive.
SaaS GenAI Use Surges Across Enterprises
Companies still struggle with employees adopting genAI on their own, often through personal accounts before official tools are available. The report calls this “shadow AI”: AI use the company cannot see or control. Governance improved over the year, but gaps remain. The share of genAI users relying on personal AI apps dropped from 78% to 47%, while use of company-managed accounts rose from 25% to 62%. Some users now use both types of accounts, with the overlap rising from 4% to 9%. The pattern shows that demand for convenience and features still outruns governance.
“Not only do security teams still have to manage existing risks, but they now also have to manage the risks created by genAI.”
2026 Netskope Cloud and Threat Report
Most organizations rely on a few main platforms: ChatGPT was used by 77% of organizations, Google Gemini by 69%, and Microsoft 365 Copilot by 52%. Perplexity grew from 23% to 35%, and Grok reached 28% after April. Volume itself drives much of the risk: organizations sent a median of 18,000 prompts per month, up from 3,000 a year earlier. The top 25% sent over 70,000 prompts per month, and the top 1% more than 1.4 million. Every prompt can carry sensitive information, raising the risk of accidental leaks or contract violations.
The report also predicts changes in platform popularity. It says Gemini is “poised to surpass ChatGPT in the first half of 2026.” This matters for governance because teams will need to manage several platforms at once. It also matters for insurers, since relying on a few vendors can lead to bigger losses if something goes wrong.
AI Risk Shows Up In Policy Violations And IP Leakage
The report links rising genAI use to greater third-party exposure: many tasks require uploads or connectors to other services, which create direct paths for data leaks. Policy violations doubled for the average company, with about 3% of genAI users responsible. Companies averaged 223 genAI data policy violations each month; the top 25% saw 2,100 incidents per month, spread across 13% of their genAI users.
Netskope flags a governance gap that complicates loss measurement. It states that “fully 50% of organizations lack enforceable data protection policies for genAI apps.” Detected violations reflect only the enforced slice. Unenforced environments can leak data without alarms. That reality matters for underwriting questionnaires and control testing. It also shapes claims disputes over reasonable security and policy conditions.
The most common types of violations carry serious risks. Source code made up 42% of genAI violations, regulated data 32%, and intellectual property 16%. Passwords and keys often show up in code and configuration files. These leaks can cause many problems. Leaked source code can lead to IP disputes, trade secret claims, and competition issues. Leaked regulated data can require breach notifications and bring regulatory action. Finally, leaked credentials can lead to account takeovers and fraud.
The report also warns about new connectors that widen the attack surface. AI browsers and Model Context Protocol (MCP) integrations can reach local or cloud resources and act on them. A compromised agent can move data faster than any person, and a misconfigured one can leak data simply by having too much access. Prompt injection can steer agents toward harmful instructions. AI risk grows whenever autonomous tools hold more permissions than they need.
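To make the least-privilege point concrete, here is a minimal sketch of a deny-by-default gate in front of agent tool calls. The agent names, scope strings, and ToolCall structure are illustrative assumptions, not from the report or any specific MCP implementation.

```python
# Minimal sketch of a least-privilege gate for agent tool calls.
# AGENT_SCOPES and ToolCall are hypothetical names for illustration.

from dataclasses import dataclass

# Each agent gets an explicit allowlist of tools and data scopes.
AGENT_SCOPES = {
    "support-bot": {"read:tickets"},
    "code-review-bot": {"read:repo", "comment:pr"},
}

@dataclass
class ToolCall:
    agent: str
    scope: str   # e.g. "read:repo", "write:crm"
    target: str  # resource the agent wants to touch

def authorize(call: ToolCall) -> bool:
    """Deny by default: an agent may only use scopes it was explicitly granted."""
    allowed = AGENT_SCOPES.get(call.agent, set())
    if call.scope not in allowed:
        # Log and block instead of silently allowing broad access.
        print(f"BLOCKED: {call.agent} requested {call.scope} on {call.target}")
        return False
    return True

# A compromised or prompt-injected agent asking for an unexpected scope
# is denied rather than trusted.
authorize(ToolCall("support-bot", "read:repo", "internal/payments"))
```

The design choice here is the default: an unknown agent or scope fails closed, so a prompt-injected agent cannot escalate simply by asking.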
Blocking Risky Apps Cuts Exposure Fast
Many companies lower their risk by blocking unapproved genAI apps. Netskope reports that 90% of organizations use active blocking, with the average company blocking 10 apps. The block list shows which apps are viewed as riskiest: ZeroGPT was blocked by 45% of organizations, mainly because detection tools often require full-text or source code uploads, and DeepSeek by 43%, over concerns about transparency, shifting platform behavior, and data control.
Netskope also saw a big increase in the number of genAI apps. The number tracked grew from 317 to over 1,600 in a year. The average company used 33% more apps, going from 6 to 8. The top 1% of companies increased from 47 to 89 apps. These outliers have more ways for data to leave and less control. The report expects more companies to use allow lists in 2026 and predicts stricter controls as AI browsers and new ways to use AI increase risk.
Agentic AI Expands Insider Impact And Attack Surface
Agentic AI can act on its own across both internal and external systems. That independence speeds up work but also raises the odds of mistakes and misuse. The report shows more companies adopting services that support agentic workflows: 33% used OpenAI services through Azure, 27% used Amazon Bedrock, and 10% used Google Vertex AI. Bedrock users and traffic tripled in a year, while Vertex AI users grew sixfold and its traffic tenfold.
APIs are now a key control point. Seventy percent of companies connect to api.openai.com, 54% to AssemblyAI, and 30% to Anthropic APIs. These connections often run behind internal tools and automation, carrying sensitive information in prompts and retrieved data. Managed platforms reduce some risks with privacy and control features, but they still demand careful design, monitoring, and safe integration: over-permissioned tools magnify any failure, weak APIs ease data leakage, and information can spill between projects or users. AI risk increases when agents hold too much authority.
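One way to operationalize API-level control is to alert on genAI API egress from unapproved clients. In this sketch, the hostnames come from the report's list of common endpoints; the client identifiers and approval list are hypothetical assumptions.

```python
# Hedged sketch: flag outbound calls to genAI API hosts that do not come
# from a sanctioned integration. APPROVED_CLIENTS is an assumed allowlist.

GENAI_API_HOSTS = {"api.openai.com", "api.assemblyai.com", "api.anthropic.com"}
APPROVED_CLIENTS = {"billing-summarizer", "ticket-triage"}  # vetted integrations

def review_egress(client_id: str, host: str) -> str:
    if host not in GENAI_API_HOSTS:
        return "ignore"   # not a genAI endpoint
    if client_id in APPROVED_CLIENTS:
        return "allow"    # sanctioned integration, still logged
    return "alert"        # unknown tool sending prompts out

print(review_egress("shadow-script", "api.openai.com"))  # -> "alert"
```

In practice this logic would live in a forward proxy or secure web gateway rather than application code, but the decision table is the same.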
Personal Cloud Apps Keep Fueling Insider Loss
Personal cloud apps remain a major source of insider exposure, and the report puts hard numbers on the problem.
- 60% of insider incidents involve personal cloud app instances.
- 70%-77% of organizations apply real-time controls to personal apps.
- 63% of organizations use DLP for personal apps, and 50% use DLP to manage genAI risk.
- Google Drive is the most commonly controlled personal app (43%), followed by Gmail (31%), OneDrive (28%), and personal ChatGPT (28%).
- The share of users uploading data to personal cloud apps rose 21% over the year; 31% of users now upload to personal apps each month, and 15% interact with AI apps each month.
- 54% of personal-app policy violations involve regulated data, 22% intellectual property, 15% source code, and 8% passwords or API keys.
These figures raise cyber insurance concerns about silent exfiltration and weak audit trails.
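For teams tuning DLP against the categories above, here is a minimal pattern sketch assuming simple regular-expression detectors. The pattern names and rules are illustrative starting points, not production-grade detectors and not taken from the report.

```python
# Minimal DLP-style pattern sketch for categories the report names
# (credentials, source code). Patterns are deliberately simplistic examples.

import re

PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code":    re.compile(r"^\s*(?:def |class |import |#include)", re.MULTILINE),
}

def scan_upload(text: str) -> list[str]:
    """Return the sensitive-data categories matched in an outbound upload."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(scan_upload("import os\nAKIAABCDEFGHIJKLMNOP"))
# -> ['aws_access_key', 'source_code']
```

Real DLP engines add exact-match dictionaries, fingerprinting, and ML classifiers on top of patterns like these; the point is that even crude rules catch the credential leaks the report says ride along in code and configuration files.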
Phishing And Malware Stay Reliable For Attackers
Phishing clicks declined, but attackers still succeed at scale. Clicks dropped from 119 to 87 per 10,000 users each month, a 27% decrease. Microsoft was the most impersonated brand, drawing 52% of phishing clicks, followed by Hotmail at 11% and DocuSign at 10%. Netskope notes a rise in OAuth consent phishing, in which attackers trick users into granting access to malicious cloud apps, a method that bypasses passwords and MFA. Session hijacking kits can also steal tokens and cookies in real time. The report recommends “continuous session monitoring, token protection, and abnormal access detection” to counter these threats, and expects attackers to use AI to craft better phishing lures in 2026.
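One way to act on the abnormal-access advice against consent phishing is to review OAuth grants as they occur. The sketch below assumes a hypothetical allowlist of vetted app IDs and a short list of high-risk scopes; the scope names follow Microsoft Graph conventions, and everything else is an assumption for illustration.

```python
# Hedged sketch: flag OAuth consent grants that pair an unvetted app with
# high-risk scopes. TRUSTED_APPS and the scope list are assumed examples.

RISKY_SCOPES = {"Mail.ReadWrite", "Mail.Send", "Files.ReadWrite.All", "offline_access"}
TRUSTED_APPS = {"corp-backup-app"}  # hypothetical allowlist of vetted app IDs

def review_consent(app_id: str, scopes: set[str]) -> str:
    """Alert when an unknown app requests scopes that enable mailbox or file takeover."""
    risky = scopes & RISKY_SCOPES
    if app_id not in TRUSTED_APPS and risky:
        return f"alert: unvetted app requested {sorted(risky)}"
    return "ok"

print(review_consent("free-pdf-tools", {"Mail.ReadWrite", "offline_access"}))
```

The key signal is the combination: broad scopes alone are common in legitimate apps, but broad scopes granted to an app nobody vetted are exactly what consent phishing produces.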
Malware continues to spread through trusted channels. Each month, 12% of organizations encountered malware delivered via GitHub, 10% via OneDrive, and 5.8% via Google Drive. Web-based malware often relies on tricks like malicious iframes, fake uploaders, and fake CAPTCHA pages. The report also cites “LLM-assisted malware,” which makes malware faster to create and easier to obfuscate, and highlights supply chain risks in modern cloud ecosystems: the Shai-Hulud worm targeting npm, Salesforce token revocations after suspicious API calls through Gainsight, and the Salesloft breach (UNC6395) linked to compromised SaaS connectors.
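Because worms like Shai-Hulud abuse npm lifecycle install scripts to spread, one low-effort review step is to list installed packages that declare such hooks. This is a hedged sketch assuming a standard node_modules layout; it is a review aid for humans, not a malware detector.

```python
# Hedged sketch: enumerate installed npm packages that declare lifecycle
# install scripts, the hook abused by npm supply chain worms.

import json
from pathlib import Path

def packages_with_install_scripts(root: str = "node_modules") -> list[str]:
    hits = []
    for manifest in Path(root).glob("**/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        if {"preinstall", "install", "postinstall"} & scripts.keys():
            hits.append(str(manifest.parent))
    return hits

for pkg in packages_with_install_scripts():
    print("review:", pkg)
```

Many legitimate packages use these hooks, so the output is a shortlist to audit, and pairing it with pinned lockfile versions limits how fast a poisoned release can propagate.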
Recommendations For 2026 Controls
Netskope Threat Labs encourages organizations to “take a fresh look” at their security posture. It suggests inspecting all HTTP and HTTPS downloads across web and cloud traffic to block malware, and blocking apps that serve no business purpose or carry outsized risk. DLP policies should flag sensitive data, including source code, regulated data, passwords and keys, intellectual property, and encrypted content. Remote Browser Isolation is advised for high-risk destinations such as newly registered domains. These steps help meet cyber insurance requirements for baseline controls, provide evidence for underwriting, and reduce avoidable losses.
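As a worked example of the newly-registered-domain recommendation, the sketch below routes young domains to isolation based on WHOIS registration age. It assumes the third-party python-whois package, and the 30-day threshold is an arbitrary example, not a figure from the report.

```python
# Hedged sketch: send newly registered domains to Remote Browser Isolation.
# Assumes the third-party python-whois package (pip install python-whois);
# the 30-day cutoff is an illustrative choice.

from datetime import datetime, timedelta
import whois

def should_isolate(domain: str, max_age_days: int = 30) -> bool:
    created = whois.whois(domain).creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = created[0]
    if created is None:            # unknown age: fail closed and isolate
        return True
    return datetime.now() - created < timedelta(days=max_age_days)

print(should_isolate("example.com"))  # long-registered domain -> False
```

Production gateways typically use a domain-age feed rather than live WHOIS lookups, but the decision, isolating anything too new to have a reputation, is the same one the report recommends.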