Deepfakes, Fraud & Digital Deception: The New Cybercrime Frontier in 2025

By Connor Duthie



In the last few years, AI-generated deepfakes have gone from a niche internet curiosity to one of the most pressing cyber threats facing businesses worldwide. And in 2025, the technology has matured to a point where the lines between real and fake are blurring, fast.

This isn’t just about fake celebrity videos or satirical memes. We’re talking about corporate fraud, multimillion-pound losses, and a growing wave of AI-assisted scams that are hitting real businesses in real time.

So, how did we get here? And more importantly, how can organisations fight back?

The Rise of Deepfake-Driven Crime

Deepfakes (AI-generated videos, audio, or images that mimic real people) are now so convincing that even trained professionals can struggle to tell the difference.

The numbers speak for themselves. The 2025 Identity Fraud Report by Entrust revealed that deepfake attacks were occurring every five minutes globally in 2024. Hiya’s Q4 2024 Global Call Threat Report found that more than one-third of consumers across the U.S., U.K., Canada, Germany, France, and Spain encountered deepfake voice fraud attempts, with average losses in the U.K. reported at roughly £13,342.

The UK’s National Crime Agency reports that the threat from cybercriminals, particularly those operating in the UK and other English-speaking countries such as the US, has increased significantly since 2023. This rise is being driven by a loose collective of online actors known as “The Com”, who use tactics such as phishing, vishing, SIM swapping, and even ransomware, while continually diversifying their methods (National Strategic Assessment 2025 of Serious and Organised Crime).

This isn’t just criminals “having a go” with new tech. They’re using it strategically: to bypass biometric authentication, to trick finance teams into wiring funds, and to launch disinformation campaigns that can erode a company’s reputation overnight.

Real-World Losses in the Millions

It’s one thing to read statistics. It’s another to see the human (and financial) impact.

In January 2025, a multinational firm lost HK$200 million (roughly £20 million) after scammers used deepfakes to imitate several high-level members of staff on a video call. Believing he was in a legitimate meeting with his colleagues, a finance employee authorised the transfer (CNN).

In another high-profile case, a bank branch manager in the UAE was deceived into transferring $35 million after receiving a call from what he believed was the company’s director. The voice on the other end was a highly realistic AI-generated clone, supported by forged emails and legal documents. The scam, which involved at least 17 individuals, remains one of the largest confirmed examples of deepfake audio fraud (Dark Reading).

Closer to home, cybercriminals used a voice clone and YouTube footage of Mark Read, CEO of WPP, the world’s largest advertising group, to stage a fake Microsoft Teams meeting. They attempted to solicit money and personal details from an agency leader. The scam was unsuccessful, but it underscored the growing risk of deepfakes in corporate communications (Financial Times).

How Criminals Weaponise Deepfakes

Deepfake fraud isn’t limited to one attack type. Cybercriminals are:

  • Impersonating executives in video calls to approve fake wire transfers
  • Using synthetic voices to bypass phone-based authentication
  • Creating fake IDs and images to outsmart biometric systems
  • Producing phishing campaigns that are indistinguishable from genuine corporate communications


According to Microsoft’s Cyber Signals (Issue 9, 2025), AI-powered deception is now a global threat, with $4 billion in fraud attempts thwarted in the last year alone. And the trend is accelerating: Gartner predicts that by 2026, 30% of enterprises will consider standalone identity verification unreliable due to deepfake threats.

The Defence: Building Resilience Against AI Impersonation

The scary part? Most businesses still aren’t ready. The UK Cyber Security Breaches Survey 2025 found that only 19% of UK businesses provide any form of cybersecurity training, despite AI impersonation now being a known risk.

If your organisation wants to stay ahead, here’s where to start:

  1. Train Your Team - Awareness is your first line of defence. Staff should be able to spot suspicious requests, even if they “look” or “sound” legitimate.
  2. Verify All High-Risk Requests - Use a second channel (phone call, secure messaging) to confirm any sensitive or financial actions.
  3. Upgrade Authentication - Move beyond voice and image checks. Multi-factor authentication is a must.
  4. Monitor Your Digital Footprint - Use tools to detect fake content featuring your brand or executives.
  5. Report and Share Incidents - The faster information is shared across industries, the harder it becomes for scammers to succeed.
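To make step 3 concrete: moving beyond voice and image checks usually means adding a possession factor such as a time-based one-time password (TOTP). Below is a minimal sketch of RFC 6238 TOTP generation and verification using only the Python standard library. This is illustrative only (the function names and the 30-second/6-digit defaults are our assumptions, not a production library); real deployments should use a maintained authentication service or library.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at_time=None, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password (sketch)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of `step`-second intervals since the Unix epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32: str, submitted: str, window: int = 1, step: int = 30) -> bool:
    """Accept codes from the current interval +/- `window` intervals (clock drift)."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * step, step), submitted)
        for i in range(-window, window + 1)
    )
```

Even a second factor like this only hardens authentication; it does not replace the out-of-band verification in step 2, since a convincingly impersonated executive can still talk an employee into approving a transfer themselves.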


The Bottom Line

Deepfakes are no longer a futuristic “what if”; they’re a present-day threat. And like any evolving cyber risk, they won’t be solved with technology alone. It’s about awareness, processes, and building a culture where verification is second nature.

Because in a world where seeing is no longer believing, trust has to be earned twice: once with your people, and once with your defences.

Protect Your Business Before It’s Too Late

Book a consultation today and find out how we can help you stay one step ahead of AI-powered cybercrime.

Speak to a Cybersecurity Expert
