Full “feature-length” AI films of child sexual abuse may now be “inevitable” unless urgent action is taken, experts warn, as rapidly improving technology means AI video is now “indistinguishable” from genuine imagery.
New data, published on Friday (11 July 2025) by the Internet Watch Foundation (IWF), shows confirmed reports of AI-generated child sexual abuse imagery have risen 400%, with AI child sexual abuse discovered on 210 webpages in the first six months of 2025 (1 January to 30 June).
In the same period in 2024, IWF analysts found AI child sexual abuse imagery on 42 webpages. Each page can contain multiple images or videos.
Disturbingly, the number of AI-generated videos has rocketed in this time, with 1,286 individual AI videos of child sexual abuse being discovered in the first half of this year compared to just two in the same period last year.
All the AI videos confirmed by the IWF so far this year have been so convincing they had to be treated under UK law exactly as if they were genuine footage.
Of the 1,286 AI videos confirmed this year, 1,006 were assessed as the most extreme (Category A) imagery – videos which can depict rape, sexual torture, and even bestiality.
Now, the charity is sounding the alarm that AI-generated videos of child sexual abuse have become so realistic they can be “indistinguishable” from genuine footage of child sexual abuse.
The IWF, which is at the front line in finding and preventing child sexual abuse imagery online, says the picture quality of AI-generated videos of child sexual abuse has progressed "leaps and bounds" over the past year, and that criminals are now creating AI-generated child sexual abuse videos at scale – sometimes including the likenesses of real children.
Highly realistic videos of abuse are no longer confined to very short, glitch-filled clips, and the potential for criminals to create even longer, more detailed videos is becoming a reality.
Analysts also warn AI-generated child sexual abuse is becoming more “extreme” as criminals become more adept at creating and depicting new scenarios.
This has prompted fears criminals may one day be able to create full, feature-length child sexual abuse films unless urgent action is taken now.
Derek Ray-Hill, interim chief executive of the IWF, says: “We must do all we can to prevent a flood of synthetic and partially synthetic content joining the already record quantities of child sexual abuse we are battling online. I am dismayed to see the technology continues to develop at pace, and that it continues to be abused in new and unsettling ways.
“Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films. The children being depicted are often real and recognisable, the harm this material does is real, and the threat it poses threatens to escalate even further.”
Creating, possessing and distributing AI-generated child sexual abuse imagery is illegal in the UK, but the IWF says the Government must honour its manifesto commitment to ensure the safe development and use of AI models by introducing binding regulation to make sure this developing technology is safe by design and cannot be abused to create this material.
Ray-Hill adds: “We must get a grip on this. At the current rate, with the way this technology is evolving, it is inevitable we are moving towards a time when criminals can create full, feature-length synthetic child sexual abuse films of real children. It’s currently just too easy to make this material.
“A UK regulatory framework for AI is urgently needed to prevent AI technology from being exploited to create child sexual abuse material.
“While new criminal offences via the Crime and Policing Bill are welcome, the window of opportunity to ensure all AI models are safe by design is swiftly closing.
“The Prime Minister only recently pledged that the Government will ensure tech can create a better future for children. Any delays only set back efforts to safeguard children and deliver on the Government’s pledge to halve violence against girls. Our analysts tell us nearly all this AI abuse imagery features girls. It is clear this is yet another way girls are being targeted and endangered online.”
The IWF has unique powers to proactively hunt down child sexual abuse imagery on the internet. Because of this, its trained and experienced analysts are often the first to discover new ways criminals are abusing technology.