UK Tech Companies and Child Protection Officials to Test AI's Capability to Create Exploitation Images

Technology companies and child protection agencies will be granted permission to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced British laws.

Substantial Increase in AI-Generated Illegal Content

The declaration coincided with revelations from a safety monitoring body showing that cases of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow designated AI developers and child safety organizations to examine AI systems – the foundational systems for chatbots and visual AI tools – and verify they have sufficient safeguards to stop them from creating depictions of child exploitation.

"This is ultimately about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect risks in AI systems early."

Addressing Regulatory Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This legislation is aimed at preventing that problem by helping to halt the creation of those materials at their origin.

Legislative Structure

The changes are being added by the government as revisions to the criminal justice legislation, which is also implementing a ban on owning, creating or sharing AI models designed to create child sexual abuse material.

Real-World Consequences

This week, the minister visited the London headquarters of a children's helpline and heard a simulated conversation with advisors involving a report of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion using an explicit AI-generated image of themselves.

"When I learn about young people facing blackmail online, it is a source of intense frustration for me and justified concern amongst parents," he said.

Concerning Data

A leading internet monitoring foundation reported that instances of AI-generated abuse content – such as online pages that may contain numerous files – had significantly increased so far this year.

Instances of category A material – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "represent a vital step to guarantee AI products are secure before they are released," commented the head of the online safety organization.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, giving offenders the capability to create potentially endless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally commodifies victims' trauma, and makes children, particularly girls, more vulnerable both online and offline."

Support Session Data

The children's helpline also released details of support sessions where AI has been mentioned. AI-related harms discussed in the conversations include:

  • Employing AI to evaluate body size, physique and appearance
  • Chatbots discouraging children from consulting trusted adults about abuse
  • Being bullied online with AI-generated content
  • Digital extortion using AI-faked images

Between April and September this year, Childline delivered 367 counselling sessions where AI, chatbots and related terms were mentioned, significantly more than in the equivalent timeframe last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, encompassing the use of AI assistants for support and AI therapeutic applications.

Kimberly Brown

A passionate digital artist and educator sharing insights on creative techniques and industry trends.