The first half of 2025 saw almost four times as many deepfake incidents (580 in total) as the whole of 2024 – and losses from deepfake-related fraud in that period reached $410-million, according to a new report from Surfshark. Since 2019, deepfake technology used for fraud has resulted in $897-million in losses.
“The trajectory of how many incidents happen and how much financial loss they generate is very concerning,” says Tomas Stamulis, chief security officer at Surfshark. “As deepfake technology evolves so fast, it is getting easier for criminals to use it for fraudulent activities, especially as no concrete regulations are yet in place to stop them.
“And even though many actions are being implemented – like Europe’s AI Act, Denmark’s copyright law reform, and AI bills in the US – deepfake technology continues to advance faster than authorities can actually prevent fraudulent incidents from occurring,” he adds.
Stamulis points out that criminals target both businesses and individuals – with businesses losing 40% ($356-million) and individuals 60% ($541-million) of the $897-million total. Individuals are more at risk as they are easier to manipulate and are less likely to implement sophisticated security measures.
Key highlights from the report include:
- The most common deepfake fraud activity is impersonating famous people to promote fraudulent investments, which resulted in $401-million in losses.
- Another method favoured by cybercriminals is impersonating company executives to trigger fraudulent transfers ($217-million).
- Yet another type of fraud involves using deepfake technology to bypass biometric verification systems in order to take out loans or steal data ($139-million).
- Lastly, romance scams – which are widely used by criminal groups – have caused $128-million in losses.
Considering the future evolution of deepfake incidents, Stamulis says the number of deepfakes will continue to rise, but that people will eventually become immune to them. For example, when an adult today receives an extortion demand featuring an explicit fake picture of themselves, their immediate instinct is to comply or to go to the authorities. In the near future, however, people will be so used to seeing deepfake content of themselves and others that they will not be so easily manipulated and will simply ignore it.
“But to achieve this, we need a strong emphasis on educating people to recognise deepfakes,” Stamulis says. “For example, always double-check the source of the content before believing or sharing it. If there are any doubts, directly contact the person or institution supposedly behind the message. Create a family secret code to verify identity during a suspicious call. And never send money or sensitive documents to someone met only online.”
“Lastly, we must also prioritise fostering critical thinking and continuously improving advanced malicious deepfake detection technology,” Stamulis adds.