British Technology Firms and Child Protection Officials to Examine AI's Ability to Generate Exploitation Images
Tech firms and child protection organizations will receive authority to assess whether artificial intelligence systems can produce child abuse material under recently introduced UK legislation.
Significant Rise in AI-Generated Harmful Content
The announcement came as a child protection watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, approved AI developers and child safety organizations will be permitted to examine AI models – the underlying systems behind conversational AI and image generators – and verify that they have sufficient safeguards to stop them from creating depictions of child exploitation.
"This is fundamentally about stopping abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI systems early."
Addressing Legal Obstacles
The changes address a legal obstacle: because producing and possessing CSAM is against the law, AI developers and others have been unable to generate such content even as part of a testing regime. Until now, authorities could act only after AI-generated CSAM had been published online.
The new law aims to avert that problem by helping to stop the creation of such material at source.
Legal Structure
The government is introducing the amendments as modifications to the criminal justice legislation, which also establishes a prohibition on owning, creating or distributing AI systems designed to generate exploitative content.
Practical Consequences
This week, the official toured the London headquarters of a children's helpline and heard a mock-up call to counsellors involving a report of AI-based exploitation. The interaction portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people facing extortion online, it is a cause of extreme anger in me and rightful anger amongst families," he said.
Alarming Statistics
A leading online safety foundation reported that instances of AI-generated exploitation content – counted as webpages, each of which may include multiple images – have significantly increased so far this year.
Instances of category A material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.
- Female children were predominantly targeted, making up 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched," stated the head of the internet monitoring organization.
"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving offenders the capability to create potentially limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Material which further commodifies victims' suffering and makes young people, especially girls, more vulnerable both online and offline."
Support Session Data
The children's helpline also published data from counselling sessions in which AI was mentioned. AI-related risks discussed in those conversations include:
- Using AI to evaluate body size and appearance
- Chatbots discouraging young people from talking to trusted guardians about harm
- Facing harassment online with AI-generated material
- Online blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the equivalent period last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.