An AI would probably try to get you to use “shocking truth” in the headline. It should no longer be shocking; it is simply the truth. Like a car or a hammer, artificial intelligence (AI) is a tool. It can help build or destroy. It can help the getaway or chase the criminal. The user can drive the nail home or smash a thumb. AI, like any tool’s impact, depends on who’s behind the wheel or, in this case, the keyboard. The latest Check Point Research AI Security Report reveals a blunt truth many other reports have concluded: AI in cybercrime is no longer a make-believe menace. It’s here. It’s evolving. And it is reshaping the nature of cyberattacks and the entire cyber threat landscape.
AI Arms Race: Offense and Defense Accelerate
AI’s reach now spans both cyber offense and defense. Enterprises are using it to protect systems. But hackers have learned the same tricks.
“AI threats are no longer theoretical—they’re here and evolving rapidly,” warns Lotem Finkelstein, Director of Check Point Research.
Cybercriminals now develop malicious AI models like WormGPT and FraudGPT, explicitly designed for phishing, malware, and deception. These models don’t just generate harmful content; they do it convincingly and at scale.
Fake AI Platforms and Impersonation: The Face of Fraud Has Changed
Criminals are also building fake AI platforms to distribute malware or steal data. A Chrome extension mimicking ChatGPT was used to hijack Facebook sessions and gain full remote access to user accounts.

AI-powered impersonation is dangerously real. Attackers create deepfake audio and video to simulate executives, celebrities, or loved ones. A chilling example: scammers impersonated Italy’s defense minister to extort money from his high-profile contacts.
Social Engineering Powered by AI: Real-Time Deception at Scale
“Even poorly phrased scams can be profitable when sent to millions,” the report notes.
AI has supercharged this concept. Through interactive chatbots, real-time voice manipulation, and video impersonation, AI drives convincing frauds in multiple languages. Scammers no longer need to speak English well or at all.
These systems are fully autonomous. Tools like X137 can manage dozens of conversations simultaneously, mimicking human interaction with uncensored replies tailored to deceive.
Visual Deepfakes and ID Theft: AI’s Role in Fraudulent Verification
AI-generated visuals are bypassing KYC (Know Your Customer) verification systems. Cybercriminals sell fake identities for as low as $70, and full-service fraud kits target banks and fintech firms.
One high-profile attack involved a live video deepfake impersonating executives at a British firm. The fake meeting convinced an employee to wire £20 million to criminals.
Enterprise Risks: Shadow AI and Data Leaks
Artificial Intelligence’s infiltration into the enterprise is widespread. At least 51% of networks now use AI services, often without security oversight. Over 1 in 13 AI prompts contains sensitive data, making unmonitored use a serious breach risk.
“The rapid adoption of new AI services makes it challenging for security admins to manage them effectively,” warns the report.
Credential Theft and AI Service Abuse: Generative AI as a Commodity
Credentials for tools like ChatGPT are now sold in dark web forums, allowing criminals to bypass restrictions and generate malicious content anonymously. In one example, 400 ChatGPT account credentials were leaked for public use.
Tools like Silver Bullet automate credential stuffing, and AI-generated scripts help attackers craft phishing kits and brute-force password generators.
Jailbreaking AI: Turning Ethical Models into Cyber Weapons
Attackers are increasingly jailbreaking LLMs (Large Language Models) to bypass security. Techniques include role-playing, encoding harmful requests, and direct system call manipulation. A popular dark web post explains how to convert ChatGPT into “WormGPT,” a malware-producing tool.
These methods expose a deeper vulnerability: some AI models can be manipulated into jailbreaking themselves.
AI in Nation-State Cyber Threat Operations
“Iranian, Russian, and Chinese groups have used AI tools like Gemini for content creation, localization, and persona development,” the report confirms.
State-affiliated Advanced Persistent Threats (APTs) are embedding AI into every attack phase—from reconnaissance to malware deployment. LLM poisoning is also a growing concern, with adversaries targeting AI training data to inject backdoors or disinformation.
What’sWhat’s Next? Fighting Fire with Fire
Check Point’s response includes Artificial Intelligence-for-defense solutions like GenAI Protect and Infinity AI Copilot, which monitor prompt risks and automate threat response. But the message is clear: automation works both ways. It can increase or decrease cyber threats
“AI is your most valuable tool for analyzing, detecting, preventing, and predicting threats,” the report emphasizes.
Other News: Will AI Help the Hackers or Cyber Carriers More?(Opens in a new browser tab)