Detecting Deepfakes in 2025

The rapid adoption of AI has significantly transformed business in the past few years, and the cybersecurity landscape along with it. AI-powered security solutions offer promise, but malicious actors are leveraging the same technology to improve their own capabilities. Darktrace's global survey of nearly 1,800 security leaders and practitioners reveals the extent of AI-driven threats and the challenges organizations face in defending against them. Here is an overview:

  • 74% of organizations report experiencing significant impacts from AI-powered cyber threats

  • 89% expect these AI-powered threats to remain a serious challenge well into the future

  • 84% of security leaders in Asia-Pacific report feeling the effects of AI-driven cyber threats, compared to 71% in Latin America

  • Only a slight majority (56%) believe AI-powered threats are fundamentally different from traditional cyber threats

GenAI has been used to enable phishing attacks, and AI-generated exploit code and malware are letting attackers move faster on publicly available information. Generative AI has attracted the most attention overall, but the risk is not limited to LLM text generation. Deepfake threats are evolving too, and they represent an apex threat given how effectively realistic voice and image spoofing can spread misinformation or enable powerful social engineering attacks. Let's dive into the details.

Where Can Deepfakes Cause the Most Damage?

Deepfakes are no longer just a novelty—they have become a powerful tool for cybercriminals, enabling financial fraud, corporate deception, and large-scale misinformation. With deepfake attacks increasingly targeting executives, financial institutions, and even the general public, the potential for economic and reputational damage is growing at an alarming rate.

  • 75% of deepfake attacks target CEOs and executives, making the stakes of a successful attack high

  • Financial fraud involving deepfake voice cloning and video manipulation is rapidly increasing: Deloitte expects deepfake attacks specifically targeting financial institutions to cause $40 billion in annual damage.

  • AI-driven deepfakes are being used for impersonation scams, corporate fraud, and misinformation campaigns; according to the FBI, impersonation scams cost victims in the US $12.5 billion in 2023

  • The emergence of real-time deepfake streaming tools increases the difficulty of identifying online deception

  • Scammers are leveraging deepfaked celebrities to boost fraud with enticing endorsements, phishing scams, and fraudulent sales

Deepfakes Are an Apex Threat: Real-Life Examples

Considering the risk, deepfakes represent an apex cybersecurity threat. This is especially true in processes that involve a high volume of interactions where personal relationships are not a precursor to financial transactions, such as many phone or online services provided by financial institutions.

Here are some real-life examples where deepfakes have proved successful: 

  • Executives Targeted for Fraud: Scammers used a deepfake video of a Hong Kong company's Chief Financial Officer (CFO) to trick employees into transferring over $25 million to fraudsters.

  • Voice Cloning Scam in Kozhikode, Kerala (₹40,000 Fraud): A 73-year-old retired government employee was tricked into transferring ₹40,000 after scammers used AI-generated deepfake video and voice manipulation to fabricate an emergency situation.

  • Voice Cloning Kidnapping Scam in Delhi (₹50,000 Fraud): Cybercriminals used AI voice cloning to mimic a kidnapped child's cries, convincing an elderly woman, Lakshmi Chand Chawla, to send ₹50,000 via Paytm before realizing the child was never in danger.

  • Celebrity Deepfake Scams in India: Fraudsters exploited the identities of celebrities like Alia Bhatt, Ranveer Singh, Aamir Khan, Virat Kohli, and Shahrukh Khan using AI-generated endorsements and videos, including political endorsements and fake promotions for betting apps.

AI-powered platforms such as FaceCam.ai and Deep-Live-Cam already allow low-skilled attackers to perform real-time face-swapping in video calls, impersonating celebrities, politicians, or family members for fraud and identity theft. These publicly available online services will continue to improve, while well-funded, sophisticated adversaries continue to develop their own custom tools.

Storm-2139 Is Upping Its Deepfake Game

Storm-2139 is a global cybercrime network identified by Microsoft, consisting of individuals who develop, distribute, and use malicious tools to bypass security restrictions on generative AI platforms. The group is part of a larger structure that includes anonymous participants across multiple countries, including the United States, Austria, China, Russia, India, the Netherlands, Argentina, and Switzerland. The latest revelations about Storm-2139 come from Microsoft’s Digital Crimes Unit (DCU), as detailed in a report by Steven Masada, Assistant General Counsel at Microsoft.

The group exploited stolen credentials to infiltrate Microsoft's AI services and alter their behavior, effectively weaponizing generative AI. This enabled them to create and distribute deepfake content while staying under the radar. Microsoft's investigation surfaced the group's internal communications, which showed signs of internal distrust and panic after Microsoft's legal actions.

The Cat and Mouse Game Between Defenders and Deepfake Technologies

Many technical research papers describe the frontline battle between defenders and deepfake technologies. As soon as defenders develop new techniques for identifying AI-generated media such as video, audio, and even text, attackers add new layers to their generation pipelines to close those detectable gaps.
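
To make the detection side of this arms race concrete, here is a minimal sketch of one long-standing image-forensics idea: checking an image's frequency spectrum for the upsampling artifacts that many generative pipelines leave behind. The function name, the 0.25 cutoff, and the use of a raw energy ratio are illustrative assumptions rather than a production detector; real systems combine many such signals inside trained models.

```python
import numpy as np
import cv2  # pip install opencv-python

def high_freq_energy_ratio(image_path: str, cutoff: float = 0.25) -> float:
    """Share of spectral energy outside a low-frequency disc.

    Hypothetical helper: many generative upsampling pipelines shift
    energy toward high frequencies, so an unusual ratio can flag an
    image for closer review. The cutoff is illustrative, not calibrated.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # Centered 2D magnitude spectrum of the grayscale image
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(np.float32))))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low_band = radius <= cutoff * min(h, w) / 2
    total = spectrum.sum()
    return float(spectrum[~low_band].sum() / total) if total else 0.0
```

In practice a score like this would be one feature among many, compared against baselines for genuine camera output rather than against a fixed threshold.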

Gartner believes that liveness detection is critical for defending identity verification against deepfake attacks, highlighting two primary approaches:

  • Active liveness detection, requiring user actions like head turns or facial expressions to verify authenticity.

  • Passive liveness detection, analyzing micromovements, 3D depth, and light-reflection changes to detect fake identities (a minimal sketch of this approach follows the list).
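
As a rough illustration of the passive approach, here is a minimal sketch that scores micromovement by averaging inter-frame differences over a short webcam clip; a replayed photograph scores near zero, while a live face does not. The function names and frame count are assumptions for illustration, and commercial passive liveness products analyze far richer signals such as 3D depth and light reflection.

```python
import cv2  # pip install opencv-python
import numpy as np

def micromovement_score(frames: list) -> float:
    """Mean absolute inter-frame difference across a short clip.

    Live faces show constant micromovements (blinks, small head
    shifts); a static replay scores near zero. Any pass/fail
    threshold would need calibration on real data.
    """
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
             for f in frames]
    diffs = [float(np.mean(np.abs(a - b))) for a, b in zip(grays, grays[1:])]
    return float(np.mean(diffs)) if diffs else 0.0

def capture_and_score(num_frames: int = 30) -> float:
    """Grab a short burst from the default webcam and score it."""
    cap = cv2.VideoCapture(0)
    frames = []
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return micromovement_score(frames)
```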

Other approaches to deepfake detection combine liveness detection with advanced client-side defenses that mitigate injection attacks, in which a forged video stream is fed to the verification software through a virtual camera or tampered driver instead of a real sensor.
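
One simple client-side signal against such injection is checking whether the device presenting itself as a camera is a known virtual-camera driver. Below is a hypothetical Linux-only sketch that reads V4L2 device names; the driver-name list is an assumption and far from exhaustive, and real defenses go further by verifying driver integrity and actively challenging the physical sensor.

```python
from pathlib import Path

# Illustrative, non-exhaustive names of common virtual-camera drivers
SUSPECT_NAMES = ("obs virtual camera", "v4l2loopback", "droidcam", "manycam")

def find_suspect_video_devices() -> list:
    """Flag V4L2 devices whose advertised name looks like a virtual camera."""
    hits = []
    for dev in Path("/sys/class/video4linux").glob("video*"):
        name_file = dev / "name"
        if not name_file.exists():
            continue
        name = name_file.read_text(errors="ignore").strip().lower()
        if any(s in name for s in SUSPECT_NAMES):
            hits.append(f"/dev/{dev.name}: {name}")
    return hits

if __name__ == "__main__":
    for hit in find_suspect_video_devices():
        print("possible virtual camera:", hit)
```

A hit here is only a heuristic: legitimate users run virtual cameras too, so verification flows typically treat it as a risk signal rather than a hard block.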

Conclusion

Deepfakes are rapidly evolving into a major cybersecurity threat, enabling financial fraud, misinformation, and sophisticated social engineering attacks. As cybercriminals refine their tactics, defenders face an ongoing battle to detect and mitigate these threats. High-profile scams, AI-driven fraud, and adversarial groups like Storm-2139 highlight the urgency of improved detection methods. With real-time deepfake tools becoming more accessible, organizations must stay vigilant, leveraging advanced AI detection and cybersecurity strategies to counter this escalating risk.
