British Technology Companies and Child Safety Officials to Test AI's Capability to Generate Abuse Content
Technology companies and child safety organizations will be granted permission to assess whether artificial intelligence tools can produce child exploitation material under recently introduced UK legislation.
Substantial Rise in AI-Generated Illegal Content
The announcement came as figures from a child protection watchdog showed that reports of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the authorities will allow designated AI companies and child safety groups to inspect AI models – the foundational technology behind conversational and image-generation tools – and check that they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.
"Fundamentally about preventing abuse before it occurs," said the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the risk in AI models early."
Addressing Regulatory Challenges
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing process. Until now, authorities had to wait until AI-generated CSAM appeared online before they could act.
The legislation is designed to avert that problem by enabling the production of such material to be stopped at its source.
Legal Framework
The changes are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a ban on possessing, producing or sharing AI models designed to create exploitative content.
Practical Consequences
Recently, the official toured the London headquarters of Childline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I learn about young people experiencing blackmail online, it causes intense anger in me and rightful concern among families," he said.
Alarming Statistics
A leading internet monitoring organization said that instances of AI-generated abuse content – such as web pages that may contain numerous files – had more than doubled so far this year.
Cases of category A content – the gravest form of exploitation – rose from 2,621 visual files to 3,086.
- Female children were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a crucial step to ensure AI products are secure before they are released," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it possible for victims to be victimised repeatedly with just a few simple actions, giving criminals the ability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally commodifies survivors' suffering, and makes children, especially girls, less safe online and offline."
Counseling Session Data
Childline also released details of support interactions in which AI was mentioned. AI-related harms raised in the sessions include:
- Employing AI to evaluate body size, physique and looks
- AI assistants discouraging children from consulting trusted guardians about harm
- Facing harassment online with AI-generated material
- Online blackmail using AI-faked images
Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and associated terms were discussed – significantly more than in the same period last year.
Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.