Scotland’s fortnightly political & current affairs magazine
by Sophie Toura, Policy Analyst, ControlAI
27 February 2024
Associate Feature: Deepfakes are a growing threat to our democracy and our privacy

Taylor Swift and Sir Keir Starmer are among those who have been targeted by high-profile deepfakes

Partner content

In 2017, when online forums began sharing pornographic videos with popular actresses’ faces edited in, it was a sign of things to come. Since then, deepfakes – images, audio and video depicting people without their consent, most commonly used for pornography, fraud and misinformation – have become depressingly common. Worse still, deepfake production has grown rapidly, with no sign of slowing. Producing deepfake content is trivially easy and essentially free: just 30 seconds of video or audio is enough to generate a lifelike imitation of an individual saying or doing whatever the creator wants.

Last October, an audio clip falsely depicting Sir Keir Starmer swearing at his staff went viral on X (formerly Twitter). The following month, a clip falsely depicting Sadiq Khan suggesting that Remembrance Sunday should be delayed sparked a backlash and unrest from people convinced they needed to “protect the Cenotaph”. And in January, thousands of voters in the US state of New Hampshire received a robocall imitating President Biden’s voice, discouraging them from voting.

There is a growing movement in favour of action against deepfakes. Currently, it is illegal to share deepfake sexual abuse material for the purposes of harassment, but it is not illegal to generate it. Similarly, deepfakes created for fraud or election manipulation are perfectly legal to produce. This cannot continue.

ControlAI is leading a campaign against deepfakes, with a growing cross-party coalition of MPs, Lords and politicians from across the UK. This reflects the concerns of the public: 89 per cent of Scottish adults polled support a ban on deepfakes.

The legal approach to date has treated deepfakes as a game of whack-a-mole – relying on social media networks to remove material and to trace the users generating it. Networks do try to remove abusive material, but all too often this is closing the stable door after the horse has bolted. In January, deepfake sexual abuse material of Taylor Swift was viewed more than 47 million times before being removed. Enforcing existing laws means targeting the behaviour of millions of individuals, often hidden behind anonymous accounts.

Instead, we can stop the production of deepfakes by regulating a small number of well-funded AI companies and cloud providers. Companies that produce deepfake technology, create and enable deepfake content, or facilitate its spread should be required to introduce and maintain safeguards that prevent deepfake creation.

Most AI-generated media is beneficial or innocuous – deepfakes are not. Action is needed, or abuse, fraud and the undermining of democracy itself will become ever more commonplace. Deepfakes must be banned.

This article is sponsored by ControlAI
