AI-generated child sex abuse images targeted with new laws

2025-02-02 02:22:00

Abstract: UK introduces laws against AI child abuse material, including its creation and possession, with penalties up to 10 years. Border checks and website bans added.

The UK government has announced the introduction of four new laws to combat the threat posed by artificial intelligence (AI)-generated child sexual abuse images. These laws aim to better protect children, making the UK the first country in the world to make it illegal to possess, create, or distribute AI tools designed to generate Child Sexual Abuse Material (CSAM), with offenders facing up to five years in prison.

Additionally, possession of AI pedophile manuals will also be made illegal, with offenders facing up to three years in prison. These manuals instruct people on how to use AI to sexually abuse young people. Home Secretary Yvette Cooper stated, "We know that the activity of sick predators online can often lead to the most horrific abuse in the real world. This government will not hesitate to take action to ensure our laws keep pace with the latest threats, to keep children safe online."

Other laws include making it a crime to operate websites where pedophiles can share child sexual abuse content or provide instructions on how to groom children, with offenders facing up to ten years in prison. Furthermore, because CSAM is often filmed abroad, border forces will be empowered to instruct individuals they suspect pose a sexual risk to children to unlock their digital devices for inspection when attempting to enter the UK; depending on the severity of the images found, offenders will face up to three years in prison.

AI-generated CSAM involves images that are partially or entirely created by computers. Software can “undress” real images and replace the face of one child with another, creating realistic-looking images. In some cases, real children's voices are also used, meaning innocent abuse survivors are being re-victimized. Fake images are also used to blackmail children and force victims into further abuse. Cooper stated, "These four new laws are powerful measures aimed at keeping our children safe online as technology develops. It is vital that we tackle child sexual abuse both online and offline to better protect the public."

However, some experts believe the government could have gone further. Professor Clare McGlynn, an expert in the legal regulation of pornography, sexual violence, and online abuse, said the changes were "welcome" but that there were still "significant gaps." She argued that the government should ban "undressing" apps and address the "normalization of sex with very young-looking girls on mainstream porn sites," which she described as "simulated child sexual abuse videos." These videos "involve adult actors, but they look very young and are shown in children's bedrooms with toys, braids, braces, and other signs of childhood. This material can be found through the most obvious search terms and legitimizes and normalizes child sexual abuse. Unlike many other countries, this material is still legal in the UK."

The Internet Watch Foundation (IWF) has warned that increasing numbers of AI child abuse images are being created and that they are becoming more prevalent on the open web. The charity's latest figures show a 380% increase in reports of CSAM, with 245 confirmed reports in 2024 compared to 51 in 2023. Each report may contain thousands of images. A study last year found 3,512 AI child abuse and exploitation images on a dark web site in one month. The number of images in the most severe category (Category A) increased by 10% compared to the same month the previous year. Experts say AI CSAM often looks very realistic and is difficult to tell apart from real images.

IWF’s interim CEO, Derek Ray-Hill, said: “The availability of this AI content further fuels sexual violence against children. It encourages and emboldens abusers and makes real children less safe. Of course, more work needs to be done to prevent AI technology from being abused, but we welcome the announcement and see these measures as a vital starting point.” Barnardo's CEO Lynn Perry welcomed the government's action to tackle AI-generated CSAM, "which normalizes the abuse of children and puts more children at risk both online and offline." She added, "It is vital that legislation keeps pace with technological advancements to prevent these horrific crimes. Tech companies must ensure their platforms are safe for children. They need to take action to introduce stronger safeguards, and Ofcom must ensure the Online Safety Act is implemented effectively and powerfully.”

The newly announced measures will be introduced as part of the Criminal Justice Bill when it is put before Parliament in the coming weeks.