British Tech Firms and Child Protection Officials to Test AI's Ability to Generate Abuse Content
Tech firms and child protection agencies will be granted authority to test whether artificial intelligence tools can generate child abuse images under new UK legislation.
Substantial Rise in AI-Generated Harmful Content
The announcement came as a protection watchdog published findings showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the authorities will allow approved AI developers and child safety organisations to examine AI models – the underlying technology behind conversational AI and image generators – to ensure they have adequate safeguards preventing them from creating images of child sexual abuse.
"Ultimately about stopping exploitation before it occurs," declared Kanishka Narayan, noting: "Experts, under strict conditions, can now detect the risk in AI models early."
Tackling Regulatory Challenges
The amendments were introduced because it is illegal to produce and possess CSAM, meaning AI developers and others cannot generate such images even as part of a testing process. Until now, authorities have had to wait until AI-generated CSAM was uploaded online before they could act on it.
The law is designed to avert that problem by making it possible to stop the creation of those images at source.
Legislative Framework
The authorities are introducing the amendments as revisions to the crime and policing bill, which also establishes a ban on owning, producing or distributing AI systems designed to create exploitative content.
Practical Impact
Recently, the official toured the London base of Childline and listened to a mock-up of a call to counsellors involving a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I learn about children facing extortion online, it is a source of extreme frustration in me and justified anger amongst parents," he said.
Concerning Statistics
A prominent internet monitoring foundation reported that instances of AI-generated abuse material – counted as web pages, each of which may contain multiple files – have risen significantly so far this year.
- Instances of category A content – the most severe form of abuse – increased from 2,621 visual files to 3,086
- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Industry Response
The chief executive of the internet monitoring organisation said the legislative amendment could "represent a crucial step to ensure AI tools are safe before they are released".
"AI tools have made it so survivors can be targeted all over again with just a simple actions, providing criminals the ability to create potentially endless quantities of advanced, lifelike exploitative content," she added. "Material which additionally exploits survivors' trauma, and makes children, particularly female children, more vulnerable on and off line."
Counselling Session Data
Childline has also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to assess weight, physique and appearance
- AI assistants discouraging young people from speaking to trusted adults about harm
- Online bullying and harassment involving AI-generated content
- Online blackmail using AI-faked images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned – four times as many as in the same period last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.