According to cybersecurity experts at Surfshark, 2025 saw a massive surge in deepfake-related fraud losses.
Losses climbed past $1.1-billion – triple the $360-million recorded in 2024, and a staggering ninefold increase on the $128-million total recorded between 2020 and 2023.
Social media platforms played a central role in these scams: 83% of all deepfake-related losses originated on these platforms.
Three platforms – Facebook, WhatsApp and Telegram – accounted for 93% of deepfake scam losses originating on social media.
Facebook was the most common, resulting in $491-million in losses, followed by WhatsApp with $199-million, and Telegram with $167-million.
Other social media platforms, such as TikTok, Instagram, and Threads, accounted for nearly $36-million in losses, while an additional $31-million in losses occurred on platforms whose specific names were not identified.
“There is no surprise that deepfake-related fraud thrives on social media,” says Miguel Fornes, information security manager at Surfshark.
“The disproportionate number of scams on these platforms is likely sustained by two principal reasons: Facebook remains the largest social media platform globally, making it statistically more profitable and logical for scammers to use this channel; while WhatsApp and Telegram carry a psychological bias of ‘relational trust’, since these platforms are usually meant for close friends and colleagues – in short, people whom we already trust – making the content shared through them more likely to be trusted.”
The predominant type of deepfake fraud in 2025 involved impersonating famous individuals to promote fraudulent investment opportunities. This type of fraud accounted for 80% of total deepfake-related losses, and made up 96% of losses on social media platforms, amounting to $886-million.
Scammers used deepfake videos and audio to convincingly pose as celebrities, business leaders, or financial experts, persuading victims to trust and invest in bogus schemes. One of the most notorious cases involved the UK engineering firm Arup, where a finance employee joined a video call on which every participant except him was a deepfake; a deepfaked CFO duped him into executing a $25-million payment.
Another notable deepfake scam type was romance fraud, where scammers used realistic videos and audio to build fake romantic relationships with victims, later requesting money for urgent health crises or convincing them to invest in fraudulent schemes.
Women were targeted in 57% of cases, while men accounted for 43%, with romance scams contributing to an estimated $10-million in losses.
A striking example is the string of recurring scams exploiting Brad Pitt’s image, including a French national who lost $850,000 and two Spanish nationals who lost a combined $385,000.
How to protect yourself?
Fornes says that, while companies such as Meta have done good work with human content moderators – and still use them to combat deepfakes – and the promising C2PA standard, which aims to cryptographically verify the origin of any piece of media, is gaining widespread adoption, the bitter truth is that companies have to do more to properly curb this issue.
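The provenance idea behind C2PA can be illustrated with a simplified sketch. The real standard embeds digitally signed manifests inside the media file itself; the toy version below uses a detached manifest and Python's standard-library HMAC as a stand-in for a publisher's signing key. All names and the key-handling scheme here are illustrative assumptions, not the actual C2PA format.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a publisher's signing key; real C2PA uses
# asymmetric signatures tied to a certificate chain.
SIGNING_KEY = b"publisher-secret-key"

def make_manifest(media_bytes: bytes) -> dict:
    """Record the media's hash and sign the record (simplified)."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = json.dumps({"sha256": digest}, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, record, hashlib.sha256).hexdigest()
    return {"record": record.decode(), "signature": tag}

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Check the manifest signature, then check the media still matches it."""
    record = manifest["record"].encode()
    expected = hmac.new(SIGNING_KEY, record, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was tampered with
    claimed = json.loads(record)["sha256"]
    actual = hashlib.sha256(media_bytes).hexdigest()
    return hmac.compare_digest(claimed, actual)

video = b"original footage bytes"
manifest = make_manifest(video)
print(verify(video, manifest))               # True: media matches the signed record
print(verify(b"deepfaked bytes", manifest))  # False: hash no longer matches
```

The point of the exercise is the failure mode: any edit to the media, however small, breaks the hash chain, so a swapped-in deepfake cannot carry the original's credentials.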
In the meantime, the public should learn to recognise and avoid deepfake scams. Fornes offers the following advice:
- Scrutinise media for telltale indicators: visual artifacts such as unnatural lighting or facial inconsistencies, and audio anomalies such as rhythm irregularities or poor lip-syncing.
- “Seeing” is no longer “believing”: you must verify the source. If a video shows a company announcing a breakthrough or a celebrity announcing a giveaway, ignore the video and check the official website.
- Zero trust for “celebrity” opportunities: With 80% of deepfake losses tied to famous people promoting investments, this is the biggest red flag.
- Treat private messages with skepticism: Understand that a forwarded message from “Uncle John” or “Kindergarten Group” is not verified. Do not lower your guard just because the platform is private.
- Lock down your “training data”: Deepfakes require data (photos and audio) to train the AI. So, review your social media privacy settings and minimise your profile to “Friends Only” where possible. Avoid uploading high-quality videos of yourself talking directly to the camera on public profiles unless necessary, as these are the gold standard for scammers looking to clone your voice and likeness.
- Watch out for the quintessential red flags of scams: Deepfakes are just a new wrapper around the same old scam methods. Pay close attention if any media promises a spectacular gain, appeals to your emotions to convince you of something, or takes an unexpected “romance to finance” turn. Also be suspicious of unusual requests or communication arriving through unexpected channels.
- Establish a family “safe word”: Agree on a simple safe word or question with your close family members today. If anyone calls claiming to be in an emergency (kidnapped, arrested, hospital) and needs money, ask for the safe word.