British Tech Firms and Child Safety Officials to Examine AI's Capability to Generate Abuse Content

Tech firms and child safety agencies will be granted authority to assess whether AI systems can produce child abuse images under recently introduced British legislation.

Significant Increase in AI-Generated Harmful Material

The announcement came as a safety monitoring body published findings showing that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, authorities will allow designated AI companies and child safety organizations to examine AI models – the underlying systems behind chatbots and image generators – to ensure they have adequate safeguards preventing them from producing depictions of child sexual abuse.

"This is fundamentally about stopping abuse before it happens," said Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the risk in AI models promptly."

Tackling Legal Challenges

The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM appeared online before they could act.

This legislation is designed to avert that problem by helping to stop the creation of such material at its source.

Legislative Framework

The amendments are being introduced as revisions to the crime and policing bill, which also establishes a prohibition on possessing, producing or distributing AI models designed to create child sexual abuse material.

Practical Impact

This week, the minister visited the London headquarters of Childline and heard a simulated call to advisors involving an account of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about young people facing extortion online, it is a source of intense frustration to me and of justified concern amongst parents," he said.

Alarming Statistics

A leading online safety organization stated that reports of AI-generated exploitation material – where a single report can cover a webpage containing multiple files – had more than doubled so far this year.

Instances of the most severe content – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Girls were predominantly victimized, making up 94% of illegal AI depictions in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a crucial step to ensure AI tools are secure before they are launched," stated the chief executive of the online safety organization.

"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving criminals the ability to make potentially limitless amounts of sophisticated, lifelike exploitative content," she added. "Content which further exploits victims' suffering, and makes children, particularly girls, less safe both online and offline."

Counseling Session Information

Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Employing AI to rate body size, physique and appearance
  • Chatbots dissuading children from talking to safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 support interactions in which AI, chatbots and related terms were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.

Jimmy Hunter

A passionate gamer and tech writer with over a decade of experience covering video games and industry developments.