"AI Safety Net: UK Law Aims to Prevent Child Abuse Image Creation"
The UK government has introduced a new law that grants tech companies and child safety agencies permission to test artificial intelligence tools' ability to generate child abuse images. The move comes as reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, from 199 cases in 2024 to 426 cases in 2025.
Under the new law, designated companies and organizations will be allowed to examine AI models used by chatbots like ChatGPT and image generators such as Google's Veo 3, with the aim of preventing them from creating images that exploit children. The government believes this will help stop abuse before it happens.
The changes are part of amendments to the crime and policing bill, which also introduces a ban on possessing, creating or distributing AI models designed to generate CSAM. The law targets the rise in AI-generated child abuse material, with instances of category A material (the most serious form of abuse) rising from 2,621 images in 2024 to 3,086 this year.
Girls are disproportionately targeted by these AI-generated images, making up 94% of all cases in 2025. The Internet Watch Foundation has reported a significant increase in reports of AI-generated CSAM, with the number more than doubling so far this year.
Kerry Smith, CEO of the Internet Watch Foundation, hailed the law change as "a vital step" to ensure AI products are safe before they are released. She warned that AI tools can be used to victimize survivors and create an endless supply of sophisticated child abuse material.
The new law is seen as a crucial step, with experts predicting it could help prevent the creation of child abuse images at source. The government has also acknowledged the rise of blackmail using AI-faked images, with Childline reporting a significant increase in counselling sessions related to these issues.
As the use of AI continues to grow, lawmakers are taking steps to ensure that these technologies are developed and used responsibly. The new law marks an important shift towards prioritizing child safety online and preventing the creation of exploitative content.