Four new laws will tackle the threat of child sexual abuse images generated by artificial intelligence (AI), the government has announced.
The Home Office says the United Kingdom will be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with offenders facing up to five years in prison.
Possessing AI "paedophile manuals" – which teach people how to use AI to sexually abuse children – will also be made illegal, with offenders facing up to three years in prison.
"What we're seeing is that AI is now putting online child abuse on steroids," Home Secretary Yvette Cooper said on the BBC's Sunday with Laura Kuenssberg.
Cooper said AI was "industrialising the scale" of sexual abuse against children, and that government measures "may have to go further".
The other laws to be introduced include making it an offence to run websites where paedophiles can share child sexual abuse content or offer advice on how to groom children. This would be punishable by up to 10 years in prison.
And Border Force will be given powers to compel individuals it suspects of posing a sexual risk to children to unlock their digital devices for inspection when they attempt to enter the UK, as CSAM is often filmed abroad. Depending on the severity of the images, this will be punishable by up to three years in prison.
Artificially generated CSAM involves images that are either partly or wholly computer-generated. Software can "nudify" real images and replace the face of one child with that of another, creating a realistic image.
In some cases, the real voices of children are also used, meaning innocent survivors of abuse are being re-victimised.
Fake images are also being used to blackmail children and force victims into further abuse.
The National Crime Agency (NCA) said it makes around 800 arrests each month relating to threats posed to children online. It said 840,000 adults pose a threat to children nationally – both online and offline – which represents 1.6% of the adult population.
Cooper said: "You have perpetrators who are using AI to help them groom or blackmail children, distorting images and using those to draw young people into further abuse – just the most horrific things taking place, and also becoming more sadistic."
She continued: "This is an area where technology doesn't stand still, and our response cannot stand still to keep children safe."
Some experts, however, believe the government could have gone further.
Professor Clare McGlynn, an expert in the legal regulation of pornography, sexual violence and online abuse, said the changes were "welcome" but that there were "significant gaps".
The government should ban "nudify" apps and tackle the "normalisation of sexual activity with young-looking girls on mainstream porn sites", she said, describing these as "simulated child sexual abuse videos".
Such videos "involve adult actors, but they look very young and are shown in children's bedrooms, with toys, pigtails and other markers of childhood", she said. "This material can be found with the most obvious and legitimate search terms, and normalises child sexual abuse. Unlike in many other countries, this material remains lawful in the UK."
The Internet Watch Foundation (IWF) warns that as more AI-generated child sexual abuse images are produced, they are becoming more prevalent on the open web.
The charity's latest data shows that reports of AI-generated CSAM rose by 380%, with 245 confirmed reports in 2024 compared with 51 in 2023. Each report can contain thousands of images.
In research last year, it found that over a one-month period, 3,512 AI-generated child sexual abuse and exploitation images were discovered on one dark web site. Compared with a month in the previous year, the number of images in the most severe category (Category A) had risen by 10%.
Experts say AI-generated CSAM can often look incredibly realistic, making it difficult to tell the real from the fake.
The IWF's interim chief executive, Derek Ray-Hill, said: "The availability of this AI content further fuels sexual violence against children.
"It emboldens and encourages abusers, and it makes real children less safe. There is certainly more to be done to prevent AI technology from being exploited, but we welcome the announcement, and believe these measures are a vital starting point."
Lynn Perry, chief executive of the children's charity Barnardo's, welcomed government action to tackle AI-produced CSAM, "which normalises the abuse of children, putting more of them at risk, both on and offline".
"It is vital that legislation keeps up with technological advances to prevent these horrific crimes," she added.
"Tech companies must make sure their platforms are safe for children. They need to take action to introduce stronger safeguards, and Ofcom must ensure that the Online Safety Act is implemented effectively and robustly."
The newly announced measures will be introduced as part of the Crime and Policing Bill when it comes to Parliament in the coming weeks.