New Law Gives Tech Companies and Child Safety Agencies a Tool to Combat AI-Generated Abuse Images
The UK government has introduced new legislation that allows tech companies and child safety agencies to test artificial intelligence tools for their potential to create child abuse images. This move comes as the number of reports of AI-generated child sexual abuse material (CSAM) has more than doubled in the past year, with 426 cases reported in 2025 compared to 199 in 2024.
The law change will give designated companies and agencies permission to examine AI models, such as those underpinning chatbots like ChatGPT and video generators like Google's Veo 3, under strict conditions. The aim is to prevent the creation of CSAM at source and to help authorities spot potential risks early on.
Experts have welcomed the move, saying it could stop abuse before it happens. The scale of the problem underlines the urgency: girls made up 94% of illegal AI-generated images in 2025, and depictions of newborns to two-year-olds rose significantly.
The Internet Watch Foundation reports that instances of category A material, the most severe kind of abuse imagery, have more than doubled, with new cases involving "potentially limitless amounts of sophisticated, photorealistic child sexual abuse material". Survivors are often revictimised when AI-generated imagery depicting their abuse circulates.
Childline, the UK helpline for children, has also reported a rise in counselling sessions mentioning AI, including its use in online bullying. The charity warns that AI-faked images and chatbots can be used to blackmail children and deter them from reporting abuse.
While the law is aimed at preventing the creation of CSAM at source, critics argue that it may not go far enough to address the root causes of online child abuse. Nevertheless, the change represents a significant step towards making AI products safer before they are released to the public.