UK Technology Companies and Child Protection Agencies to Examine AI's Capability to Create Exploitation Content

Technology companies and child protection organizations will be granted permission to assess whether AI tools can produce child abuse material under new UK legislation.

Substantial Increase in AI-Generated Harmful Content

The announcement came alongside figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the amendments, the government will allow approved AI companies and child safety groups to inspect AI models – the foundational technology for conversational AI and image generators – and verify they have adequate protective measures to stop them from producing depictions of child exploitation.

The measures are "fundamentally about stopping abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the danger in AI models early."

Tackling Regulatory Obstacles

The amendments have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and other parties cannot generate such images even as part of a testing process. Previously, authorities could not act until AI-generated CSAM had already been published online.

The new law aims to avert that problem by helping to stop the creation of such material at its source.

Legal Framework

The government is introducing the changes as amendments to criminal justice legislation, which will also establish a prohibition on possessing, creating or distributing AI systems designed to generate exploitative content.

Real-World Impact

This week, the official visited Childline's London base and listened to a simulated call to advisors featuring an account of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I hear about young people facing blackmail online, it is a source of intense frustration for me and justified concern amongst parents," he stated.

Alarming Data

A prominent online safety foundation stated that instances of AI-generated abuse material – such as online pages that may contain multiple files – had more than doubled so far this year.

Instances of the most severe material – the gravest form of exploitation – rose from 2,621 visual files to 3,086.

  • Girls were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
  • Depictions of infants and toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a crucial step to ensure AI products are safe before they are launched," commented the chief executive of the internet monitoring foundation.

"AI tools have made it so that survivors can be victimised all over again with just a few clicks, giving offenders the ability to make potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further commodifies survivors' trauma, and renders young people, particularly female children, more vulnerable both online and offline."

Support Session Data

Childline also released details of counselling sessions where AI has been referenced. AI-related harms discussed in the sessions comprise:

  • Using AI to rate body size, physique and appearance
  • Chatbots discouraging young people from talking to safe guardians about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.

Wendy Guerra

Digital marketing strategist with over a decade of experience, passionate about helping brands thrive online through data-driven approaches.