by Sofia Villegas
07 February 2024
The puzzle of generative AI policymaking – key takeaways from the House of Lords report

The inquiry heard from stakeholders and reviewed documents to put forward recommendations | Alamy

Last week a House of Lords committee released a report on its inquiry into large language models (LLMs). 

A large language model is a type of artificial intelligence (AI). It is a deep learning model trained on vast amounts of data that it uses to recognise and generate text. An example of the technology is ChatGPT, which has more than 180m users, according to analytics firm Similarweb. 

In July, the Communications and Digital Committee of the House of Lords launched an inquiry into the technology to evaluate what action should be taken over the next three years so the UK can respond to its opportunities and risks.

The inquiry’s call for evidence ran until September 2023. The committee gathered views from 41 experts, held roundtables with SMEs, reviewed more than 900 pages of written evidence and visited Google and University College London’s centre for AI.

On Friday, February 2, the committee published its report on LLMs, tackling debates over open and closed models, barriers to the use of AI, and the extent to which policymaking should be collaborative.

Here Holyrood breaks down the key takeaways from the report, which claims LLMs will have an impact similar to that of the internet.

 

…open or closed…

Both approaches carry significant implications that will shape competition within the market.

On the one hand, the report found that open-access models are usually cheaper, more accessible, and allow for greater transparency and community-led improvements. However, they are said to be harder to fix and to lag behind on benchmarks. In their submissions to the committee, Getty Images and OpenUK highlighted that the emergence of numerous open technologies made “nuanced regulatory proposals” “essential”, as gaps could otherwise allow some models to escape their responsibilities.

On the other hand, the report found that closed models may present risks around “overreliance and concentrated market power”. This may create a “first mover advantage”, it argued, meaning smaller businesses may not be able to take advantage of the technology.

Weighing both approaches, the report concluded fair market competition had to be the “key” policy objective and suggested a combination of open and closed-source technology would ensure the UK is “not out of the race” to shape the industry.

 

…the influence of external experts…

Witnesses called for further inclusion of public sector expertise in policymaking and for “technical standards or benchmarks” to be published for consultation.

The report pointed out that over-reliance on private sector stakeholders could lead to policies succumbing to “groupthink”, whereby they meet commercial interests rather than public ones.

It pointed out this trend had already narrowed the debate on AI safety, focusing on catastrophic risk – unlikely in the next three years – instead of more immediate issues such as copyright infringement or reliability.

To tackle this issue, it called for mitigations against such “conflict of interests” to be made publicly available, and for a six-month follow-up after a private sector expert takes up a policymaking role to ensure these mitigations have been followed.

 

…risks and barriers…

Generative AI could make a significant contribution to the Scottish economy. In November, the Labour Market Outlook revealed a third of Scottish employers saw cost saving as a potential benefit of using the technology.

However, the House of Lords report stated that the “ongoing failure” to bridge the digital skills gap would slow down any social or economic benefits.

According to a report by the charity Inspiring Scotland, as of 2020 almost two in every ten people in the country had no digital skills whatsoever. Meanwhile, forecasts predict there will be 15,500 job openings for tech professionals each year north of the border.

The House of Lords report also said it had found no “plausible evidence of imminent widespread AI‑induced unemployment”, suggesting that fear of AI taking over jobs might be slowing adoption of the technology.

It therefore called for collaboration between the Department for Education, the Department for Science, Innovation and Technology (DSIT) and industry to upskill and reskill workers, as well as to enhance public awareness of the implications of AI for employment.

The report also aimed to “distinguish hype from reality” and divided potential risks into four categories: near-term security, catastrophic, existential, and societal risks.

It stated that the most immediate risks come from the technology making existing malicious activities, such as producing synthetic child sexual abuse material, easier and cheaper, rather than from qualitatively new threats.

With a UK general election set to take place before January 2025, it also highlighted the urgent need for a strategy to tackle disinformation in this context.

Data protection was seen as another weak link in policy, especially within the health sector. The report recommended that the Department of Health and Social Care work with NHS bodies to embed future-proof data protection provisions in licensing terms. It claimed this strategy would help reassure patients, especially in the face of overseas corporations acquiring LLM businesses working with NHS data.

The report concluded that the UK Government should agree on an “AI risk taxonomy and risk register” to clarify priorities and the “magnitude of the issues”.

 

…are strategies going far enough…

The paper found that measures were reactive rather than preventative and that there was a lack of balance between innovation and risk. For instance, it pointed out that the AI Safety Summit, which took place in early November at Bletchley Park, was too focused on making “AI systems safe”, rather than on responsible innovation and adoption.

It also stated dependence on big tech for policymaking had led to a more negative outlook on LLMs, an approach it said risks leaving the UK “dependent on a small number of overseas tech firms”.

To change this, it suggested appointing experts from outside the tech sector, including ethicists and social scientists, as advisers to the AI Safety Institute.

It further claimed there is a lack of ambition for the UK to become a hub for LLM research, pushing entrepreneurs towards overseas offers – which may lead to the UK losing its influence over international regulation.

 

…going forward…

Boosting academic research and embracing the potential of UK spinouts was deemed essential to remain internationally competitive. 

The report also called for more UKRI funding for AI PhDs, claiming that failing to do so could bring significant security threats.

For instance, the Intelligence and Security Committee raised concerns about China placing intellectual property transfer as a condition of funding.

On developing a sovereign LLM capability, the document suggests the task should be commissioned from external developers, arguing this would both reduce risks and be more cost-effective: developers would provide the software while the government would set the ethical and safety standards.

However, it warned it is “too early” to integrate LLMs in high‑stakes applications such as critical national infrastructure or the legal system.

On copying other countries’ regulations, the report said the UK should adopt foreign measures only where suitable – in other words, only those that work in accordance with national risks and priorities.

Such “competing priorities” would also make global regulatory divergence more likely over the short to medium term, it argued, so the report suggested that progressing domestic action would ensure the UK does not fall “behind the curve” on legislation.

However, it concluded extensive primary legislation aimed solely at LLMs was inappropriate, as “the technology is too new, the uncertainties too high and the risk of inadvertently stifling innovation too great”.

 

…The White Paper…

In January, the UK Government published a framework to provide “practical considerations for anyone planning or developing a generative AI solution”, though the guidance was described as “incomplete and dynamic” in response to the rapidly developing nature of the technology.

However, the report deemed the pace of delivering central support functions “inadequate”, as in November regulators were still unaware of the central function’s status and how it would operate.

This lack of knowledge, it said, “undermined confidence in the government’s commitment to the regulatory structures needed to ensure responsible innovation”.

The report also called for the government to provide a timeline for establishing further legal clarity on who is liable for any issues with the models, and said copyright remained a loophole in legislation.

In response to concerns over AI and copyright infringement, the UK Government called on the Intellectual Property Office to develop a code of practice to enable AI to be developed using copyright-protected work while also protecting the creative industries.

However, the report set spring 2024 as a deadline for the process to finish, saying the government “cannot sit on its hands for the next decade” when it comes to future-proofing copyright principles.

 
