Civil servants told not to discuss policy with ChatGPT
by Sofia Villegas
07 May 2025

Civil servants warned not to discuss policy with AI | Alamy
Civil service staff have been warned to avoid sharing government policy details when using artificial intelligence (AI) tools.

Guidance on the use of technologies such as ChatGPT, DeepSeek and X's Grok warns civil servants not to discuss confidential information with AI tools, to ensure the government is not linked to "insensitive or inappropriate" content.

The guidance, obtained via a freedom of information request by The Herald, covers large language models and tells civil servants to “not share anything you would not be happy to share with a member of the public.”

While it encourages “colleagues to experiment and try different AI tools”, it urges them not to discuss campaign material or details related to ministerial announcements.

The guidance says: “Do not use sensitive information in generative AI tools. Be aware that non-personal data can be sensitive and will not always be protected by data protection regulation.

“This could be details of an embargoed press release, creative material designed as part of a campaign or the text of a ministerial announcement. Do not share anything you would not be happy to share with a member of the public.

“Do not use terms which could infer future government policy or thinking in generative AI tools.

“For example, you should not ask questions such as 'what might happen if the Scottish Government set the driving age to 12?'.

“Generative AI works like a giant database, and a search term like this links the Scottish Government with the concept of driving at 12. This means similar text could be returned as a search result in other users' searches and could potentially link the Scottish Government to outputs that are insensitive or inappropriate.”

It comes after Holyrood revealed that councils are using AI to write reports and to respond to constituents in "a clear and concise manner".

The guidance continues: “Given the warnings provided, any decisions to use a paid-for access model should consider whether there are sufficient safeguards around access to information submitted to that AI system.

“You must also fact check any information supplied by an AI system, as there is no other way to determine which answers are accurate, and which are ‘hallucinated’.”

Hallucinations are AI-generated responses that contain false or misleading information presented as fact.
