Deepfake-related losses have surpassed $1.5-billion to date, with approximately $1-billion of those losses occurring in 2025 alone.

According to a new Surfshark study, losses approached $400-million in 2024, following a cumulative $130-million from 2019 to 2023, signalling an exponential escalation in both frequency and sophistication.

Producing a one-minute deepfake video previously cost between $300 and $20 000, depending on the video’s quality. Today, with widely available AI tools, that same minute can be generated for just a few dollars, making scams cheaper to execute and easier to scale.

New forms of fraud have emerged as a result. Some scammers are creating AI-generated images of lost pets to deceive worried owners into making small payments, often as little as $50, under the false promise that their missing animals have been found. This emerging scheme highlights how the plummeting cost and growing availability of AI image and video-generating tools are fueling a surge in deepfake fraud.

“As the cost to fabricate shockingly accurate images and videos falls to near zero, scammers are industrialising deception,” says Miguel Fornes, information security manager at Surfshark. “The lost-pet scam is a stark example: it weaponises sorrow or hope at small dollar amounts, which makes victims less suspicious or prone to litigation and allows criminals to scale quickly.”

According to Fornes, while novelty schemes like “lost pet” deepfakes are growing, they remain small compared to the most lucrative deepfake-enabled scams. Investment scams continue to dominate, with deepfake impersonations and fabricated credentials causing substantial corporate losses.

Deepfakes have also been used during job interviews, enabling scammers to bypass identity checks and infiltrate organisations. One such case involved a cybersecurity firm unknowingly hiring a North Korean hacker who successfully deepfaked his interview, IDs and background checks.

Fornes urges individuals and organisations to stay vigilant, combining strong cybersecurity, identity verification and staff training to spot and stop deepfake-driven fraud. As AI improves, many of the old visual cues are fading, so the best defence is sceptical thinking.

His advice is to treat all unexpected requests, especially those involving money or sensitive data, as potentially suspicious:

  • Verify via trusted channels before acting on requests (call back using known numbers).
  • Check context and audio: listen for “too-clean” sound, slight lip-sync drift, or mismatched background noise.
  • Watch micro-details (odd hands/fingers, uncanny eyes, jerky motion), but don’t rely on visuals alone.
  • Slow down: urgency, pressure, or emotional hooks are classic scam tactics.
  • Use multi-factor checks: code words, second approvers, out-of-band confirmations for payments or account changes.
  • Train high-risk teams (Support, HR, Finance); require live, in-person verification for hiring and scrutinise resumes/addresses.
  • Treat contacts reachable only via virtual phone numbers, newly registered domains, or people with no digital footprint as red flags.
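For finance or IT teams, checklist items like these can be folded into a simple policy check before a payment or account change is approved. The sketch below is purely illustrative: the request fields, the 90-day domain-age cutoff and the channel labels are assumptions for the example, not Surfshark recommendations.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    """Hypothetical payment-change request; fields are illustrative."""
    amount: float
    unexpected: bool             # not part of a known, scheduled workflow
    urgency_language: bool       # "act now", "keep this confidential", etc.
    contact_channel: str         # e.g. "known_phone", "email", "virtual_number"
    sender_domain_age_days: int  # age of the sender's email domain
    out_of_band_confirmed: bool  # verified via a trusted, known-good channel
    second_approver: bool        # independent sign-off obtained

def red_flags(req: PaymentRequest) -> list[str]:
    """Collect red flags drawn from the checklist above (thresholds assumed)."""
    flags = []
    if req.unexpected:
        flags.append("unexpected request")
    if req.urgency_language:
        flags.append("urgency/pressure tactics")
    if req.contact_channel == "virtual_number":
        flags.append("virtual-number-only contact")
    if req.sender_domain_age_days < 90:  # assumed cutoff for "new domain"
        flags.append("newly registered domain")
    return flags

def approve(req: PaymentRequest) -> bool:
    """Any red-flagged request needs BOTH an out-of-band confirmation
    and a second approver before it can proceed."""
    if red_flags(req):
        return req.out_of_band_confirmed and req.second_approver
    return True
```

A request arriving from a fresh domain with urgent wording would be blocked until someone calls back on a known number and a second approver signs off; routine, expected requests pass through unchanged.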