British Technology Companies and Child Safety Officials to Test AI's Ability to Generate Exploitation Images

Technology companies and child protection agencies will receive permission to evaluate whether AI systems can produce child exploitation images under new British laws.

Significant Rise in AI-Generated Illegal Material

The announcement came as a protection watchdog revealed that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, the authorities will allow designated AI companies and child protection groups to inspect AI models – the foundational technology behind chatbots and image generators – and verify that they have sufficient safeguards to stop them from creating images of child exploitation.

The measures are "fundamentally about stopping exploitation before it happens," said Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI systems early."

Tackling Legal Challenges

The amendments have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and others could not generate such images even as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This legislation is aimed at averting that problem by helping to stop the creation of those images at source.

Legislative Framework

The government is adding the amendments as revisions to the criminal justice legislation, which also introduces a prohibition on possessing, producing or sharing AI models designed to create child sexual abuse material.

Practical Consequences

Recently, the minister visited the London headquarters of Childline and listened to a mock-up call to advisers featuring an account of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I hear about children experiencing blackmail online, it is a source of extreme frustration for me and of justified concern amongst families," he said.

Concerning Data

A leading internet monitoring foundation stated that cases of AI-generated exploitation content – such as online pages that may include numerous files – had significantly increased so far this year.

Cases involving the most severe category of content – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
  • Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a vital step to guarantee AI tools are safe before they are released," stated the chief executive of the online safety organization.

"AI tools have made it possible for survivors to be targeted all over again with just a few simple actions, giving criminals the ability to create potentially endless amounts of sophisticated, lifelike exploitative content," she added. "Content which further exploits survivors' trauma, and makes children, especially girls, more vulnerable both online and offline."

Counseling Session Information

Childline also published details of support sessions in which AI was mentioned. AI-related harms raised in the sessions include:

  • Using AI to rate body size, physique and looks
  • Chatbots dissuading young people from talking to trusted adults about abuse
  • Being bullied online with AI-generated content
  • Digital extortion using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.
