Cybercriminals struggling to adopt AI, study finds

by Ruaraidh Gilmour
06 May 2026

Cybercriminals are struggling to make inroads with AI in their operations, according to a new joint report by three UK universities.

The report, the first of its kind, was led by researchers from the universities of Edinburgh, Strathclyde and Cambridge. It analysed a dataset of 100 million posts from underground cybercrime communities and found that most hackers lack the skills or resources to drive real innovation in their criminal activities.

AI was found to be most effective in cybercrime schemes where it was used to hide patterns that cybersecurity defenders can often detect, and to run social media bots that carry out misogynistic harassment and generate money from fraud.

Researchers used a combination of machine learning tools and manual sampling techniques to analyse the conversations, which helped identify discussions of how cybercriminals had been experimenting with AI since November 2022, when ChatGPT was released.

The analysis found that AI assistants were proving useful to criminals already skilled in coding, but did not lower the barrier to entry for people hoping to commit cybercrime.

There was also some evidence that AI tools were being used for more advanced forms of automation, such as social engineering and bot farms.

The report also found that safety mechanisms on major chatbots are significantly reducing the potential harm. However, the researchers observed early evidence that these communities are having some success in manipulating the outputs of mainstream chatbots, suggesting there is still cause for concern.

Dr Ben Collier, a senior lecturer in digital methods at the University of Edinburgh’s School of Social and Political Science, said: “Cybercriminals are experimenting with these tools, but as far as we can tell it’s not delivering them real benefits in their own work.  

“Our message to industry is: don't panic yet. The immediate danger comes from companies and members of the public adopting poorly secured AI systems themselves, opening them up to catastrophic new attacks that can be performed by cybercriminals with little effort or skill.”
