If You See It, Hear It, Believe It… You May Be Wrong
We’ve become the opposite of the three wise monkeys: thanks to deepfakes, we now see, hear, and speak evil. You can no longer trust what you see or hear. According to Pindrop’s 2025 Voice Intelligence & Security Report, deepfakes are now a daily risk: fraud attempts using synthetic voices surged more than 1,300% in a single year. As Vijay Balasubramaniyan, Pindrop CEO, puts it, “Voice fraud is no longer a future threat—it’s here, and it’s scaling at a rate that no one could have predicted.”
Synthetic Voices Now Mimic Reality
In 2023, AI-generated voice fraud was sporadic. In 2024, it became a flood: attacks rose from roughly one per month to more than seven per day, the surge in deepfake use that Pindrop puts at over 1,300%. The technology behind them is growing more convincing, too. Tools that once lagged by several seconds now operate in real time.
Deepfake attacks are now emotionally expressive, with voices simulating joy, sadness, and urgency. This evolution makes synthetic speech indistinguishable from human voices in many cases. Pindrop has responded by enhancing detection with data from more than 500 text-to-speech (TTS) engines.
Contact Centers Face Rising Threats
Fraud in contact centers has reached a six-year high. In 2024, Pindrop identified fraud attempts in 1 in every 599 calls—up 26% from the year before. That’s one fraudulent call every 46 seconds.
Voice-based impersonation isn’t limited to fraudsters. Pindrop warns of deepfake job candidates using synthetic voices during interviews. These impersonators exploit video and audio technologies to deceive recruiters and potential employers.
Retail, Banking, and Insurance Under Siege
The report outlines deepfake-related fraud spikes across key industries:
- Insurance: Up 475%
- Banking: Up 149%
- Retail: Doubled, now facing one fraud in every 127 calls
These sectors are prime targets due to their heavy reliance on voice-based interactions and self-service systems.
Pindrop’s Response: AI vs. AI
To combat deepfakes, Pindrop launched Pulse for Meetings, a tool that detects audio manipulation in real time. The system identifies not only the presence of synthetic speech but also the AI model used to generate it.
Pindrop is also tracking voice conversion tech used to mimic pitch, tone, and accent. Fraudsters now have access to spoofing-as-a-service tools, breached data, and phishing tutorials on the dark web.
Balasubramaniyan’s Warning: The Fraud Landscape Has Changed
Balasubramaniyan emphasizes that synthetic voice attacks are no longer outliers. “Deepfakes, synthetic voice tech, and AI-driven scams are reshaping the fraud landscape. The numbers are staggering,” he said.
This reality demands new security frameworks. Static methods, such as knowledge-based questions or one-time passcodes (OTPs), are easily bypassed, yet more than 88% of contact centers still rely on these outdated tools.
2025 Forecast: The Year of Deepfake Fraud
Pindrop projects a +162% rise in deepfake-related fraud in 2025. By year-end:
- Deepfaked calls may rise +155%
- Retail could face fraud in 1 in every 56 calls
- Contact center fraud exposure could reach $44.5 billion
These projections signal a future where AI outpaces traditional defenses.
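To put the retail projection in perspective, moving from one fraudulent call in every 127 to one in every 56 more than doubles the per-call fraud rate. A rough arithmetic illustration using the report’s figures (the calculation itself is ours, not Pindrop’s):

```python
# Rough arithmetic on the report's retail figures: fraud rate per call.
current_rate = 1 / 127    # 2024: one fraudulent call in every 127
projected_rate = 1 / 56   # 2025 projection: one in every 56

print(f"current:   {current_rate:.2%}")                    # ~0.79% of calls
print(f"projected: {projected_rate:.2%}")                  # ~1.79% of calls
print(f"growth:    {projected_rate / current_rate:.2f}x")  # ~2.27x
```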
Data Breaches Fuel the Fire
Massive data leaks provide the fuel for deepfake attacks. In 2024, there were 3,158 breaches, nearly matching the record of 3,205 set in 2023. The number of people impacted rose 312%, with more than 1.7 billion breach notices issued.
Stolen personally identifiable information (PII), including full bank account numbers, has flooded the dark web. Fraudsters use this data to train deepfakes for precise impersonation.
What Organizations Must Do Now
Enterprises must move beyond reactive security. Instead, they should adopt:
- Real-time deepfake detection
- Advanced voice biometrics
- Multi-factor risk-based authentication
- Passive liveness checks
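The layered approach above can be sketched as a risk score that combines several signals rather than relying on any single static check. This is a minimal illustration; the signal names, weights, and thresholds below are hypothetical and do not represent Pindrop’s products or any real API:

```python
# Illustrative sketch of risk-based, multi-signal call authentication.
# All field names, weights, and thresholds are hypothetical examples.
from dataclasses import dataclass

@dataclass
class CallSignals:
    voice_match: float     # voice-biometric similarity, 0.0-1.0 (high = likely genuine)
    liveness: float        # passive liveness score, 0.0-1.0 (low = likely synthetic)
    deepfake_score: float  # real-time deepfake-detector output, 0.0-1.0 (high = synthetic)
    device_risk: float     # device/carrier risk signal, 0.0-1.0 (high = risky)

def authenticate(s: CallSignals) -> str:
    """Combine signals into a single risk decision instead of one static check."""
    risk = (
        0.35 * (1 - s.voice_match)
        + 0.25 * (1 - s.liveness)
        + 0.25 * s.deepfake_score
        + 0.15 * s.device_risk
    )
    if risk < 0.2:
        return "allow"
    if risk < 0.5:
        return "step-up"  # require an additional authentication factor
    return "block"

# A call with a strong voice match and clean signals sails through:
print(authenticate(CallSignals(0.95, 0.9, 0.05, 0.1)))  # -> allow
```

The point of the design is that a forged voice must defeat every layer at once: a convincing clone with a poor liveness score, or a high deepfake-detector score, still gets stepped up or blocked.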
As the line between human and machine blurs, Pindrop’s report reminds us that the mission is to verify the “right human,” not just a human.