Artificial intelligence fuels major rise in child sexual abuse images
Research shows a 1,325 per cent rise in AI-generated images of child sexual abuse.
The report, published by the Childlight Global Child Safety Institute, hosted by the University of Edinburgh, highlights the growing dangers of AI-generated material, such as deepfakes, which can be used to create images or videos of underage individuals without their consent.
Reports of this material, in which children are used to create sexual imagery, rose from 4,700 logged by the US-based National Center for Missing and Exploited Children in 2023 to more than 67,000 in 2024.
The report shows that over 55 per cent of assessed child sexual abuse material is produced by relatives, with fathers estimated to have created over 900,000 images in 2024 alone.
“People often say home is where the heart is – but sadly for too many children, home is where the hurt is,” said Childlight chief executive Paul Stanfield. “We see betrayal of trust by those known to children on a vast scale, compounded by insufficient protections by tech companies and regulators to avoid digital crime scenes in children’s bedrooms.”
The study found that nearly one in five children in western Europe reported experiencing unwanted sexual interactions online before turning 18, including solicitation attempts, grooming and pressured sexual acts. The data suggests that nearly 15 million children in the region are affected, with one in seven reporting such an interaction in the past year.
Childlight says that technology-facilitated abuse is widespread around the world, with the Netherlands estimated to host more than 60 per cent of known child sexual abuse material (CSAM).
The report says the “deliberate commercially led choices” of major technology companies are making these crimes harder to prevent. For example, the roll-out of end-to-end encryption without safeguards has made it more difficult for child protection agencies and charities to detect and stop potential sources of CSAM.
When she was 13, Rhiannon-Faye McDonald was abused after being approached by a man posing as a fellow teenager online. Soon afterwards, he arrived at her address in Yorkshire and abused her in person. Today McDonald campaigns through the Marie Collins Foundation for better online safety regulations.
“For too long technology companies have favoured profit over safety,” McDonald said. “A rising number of children being abused is a direct result. For most victims and survivors, even with the right support, the impacts are significant and long-lasting. We live with misplaced self-blame and the fear of being recognised by those who have seen the images or videos of our abuse. For anybody who believes that it’s ‘just a photo’, this couldn’t be further from the truth.”