British Technology Companies and Child Protection Officials to Test AI's Capability to Create Abuse Images
Technology companies and child safety agencies will receive authority to assess whether artificial intelligence systems can generate child abuse images under recently introduced British laws.
Significant Rise in AI-Generated Illegal Material
The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will allow designated AI developers and child protection organizations to inspect AI models – the underlying technology behind conversational AI and visual AI tools – and ensure they have adequate safeguards to prevent them from creating depictions of child sexual abuse.
The measures are "ultimately about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect the risk in AI systems early."
Addressing Regulatory Challenges
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM had been uploaded online before dealing with it.
This legislation aims to prevent that problem by enabling experts to halt the creation of such images at the source.
Legal Structure
The amendments are being added by the government as modifications to the crime and policing bill, which is also establishing a prohibition on possessing, creating or distributing AI models developed to generate child sexual abuse material.
Practical Consequences
Recently, the official visited the London base of a children's helpline, where he heard a simulated call to advisors involving an account of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about children facing extortion online, it is a source of extreme frustration to me and of justified concern amongst parents," he stated.
Concerning Statistics
A leading internet monitoring foundation reported that cases of AI-generated abuse material – each case potentially a webpage containing multiple files – had risen sharply so far this year:
- Cases of category A content – the gravest form of abuse – increased from 2,621 images or videos to 3,086
- Girls were overwhelmingly victimized, making up 94% of prohibited AI images in 2025
- Depictions of children aged two and under rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a vital step to guarantee AI products are secure before they are launched," stated the chief executive of the internet monitoring foundation.
"AI tools have made it so that victims can be targeted all over again with just a few simple actions, giving offenders the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further commodifies victims' trauma, and makes children, particularly girls, more vulnerable both online and offline."
Support Interaction Data
The children's helpline also released details of support interactions where AI has been referenced. AI-related harms mentioned in the sessions include:
- Employing AI to rate body size, physique and looks
- Chatbots discouraging young people from talking to trusted guardians about harm
- Facing harassment online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, the helpline conducted 367 support interactions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.