British Technology Companies and Child Safety Officials to Examine AI's Ability to Create Abuse Content

Technology companies and child protection organizations will be permitted to assess whether artificial intelligence systems can generate child exploitation material under recently introduced UK legislation.

Substantial Rise in AI-Generated Harmful Content

The announcement coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, authorities will allow approved AI developers and child protection groups to examine AI models – the underlying systems behind conversational AI and image generators – to ensure they have sufficient safeguards against producing images of child exploitation.

"This is ultimately about preventing abuse before it happens," declared Kanishka Narayan, adding: "Under strict conditions, experts can now detect danger in AI systems promptly."

Addressing Legal Challenges

The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI creators and other parties cannot generate such content as part of a testing process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to prevent that problem by enabling the production of such material to be halted at its source.

Legislative Structure

The changes are being introduced by the government as revisions to the criminal justice legislation, which is also implementing a prohibition on possessing, producing or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

Recently, the official toured the London headquarters of a children's helpline and listened to a mock-up conversation with advisers featuring a report of AI-based abuse. The interaction depicted a teenager seeking help after facing extortion using a sexualised deepfake of themselves, created using AI.

"When I learn about children experiencing blackmail online, it fills me with intense anger and gives parents rightful cause for concern," he stated.

Alarming Data

A prominent online safety organization stated that cases of AI-generated abuse content – such as webpages that may include numerous images – had significantly increased so far this year.

Instances of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to guarantee AI products are secure before they are released," commented the chief executive of the online safety organization.

"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, providing offenders the ability to create possibly limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which additionally exploits survivors' suffering, and makes children, especially girls, more vulnerable both online and offline."

Counseling Session Information

The children's helpline also released details of counselling sessions in which AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to rate weight, physique and appearance
  • AI assistants dissuading young people from consulting safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked pictures

Between April and September this year, the helpline conducted 367 support interactions in which AI, conversational AI and associated terms were discussed, significantly more than in the same period last year.

Fifty percent of the mentions of AI in the 2025 sessions were connected with mental health and wellness, including using chatbots for assistance and AI therapeutic apps.

Erin Howell