UK Government is at risk of "stumbling into dangerous territory" over the use of "killer robots", warn Lords

Written by John Johnston on 16 April 2018 in News

Report warns autonomous weapons with the power to "hurt, destroy or deceive" human beings could be developed if the Government does not better control the use of AI

Image credit: Holyrood

The UK Government is at risk of "stumbling into dangerous territory" over the use of "killer robots", according to a new House of Lords report.

The report by the House of Lords Artificial Intelligence Committee says autonomous weapons with the power to "hurt, destroy or deceive" human beings could be developed if the Government does not better control the use of AI.

It says the Government risks “stumbling through a semantic haze into dangerous territory” if it fails to provide more clarity on what classifies as an autonomous weapon.

The peers warn that the Government's definition of a lethal AI weapon is out of step with that of most other NATO nations, including the US – making it harder to create international laws around the use of the lethal technology.

Digital Minister Matt Hancock argued that progress was being made.

“We think that the existing provisions of international humanitarian law are sufficient to regulate the use of weapons systems that might be developed in the future," he said.

“Of course, having a strong system and developing it internationally within the UN Convention on Certain Conventional Weapons is the right way to discuss the issue.”

But the report accuses the Government of “hamstringing” international efforts to provide clarity around AI, and warns that it could result in an “ill-considered drift” towards the use of autonomous weapons.

Committee Chair Lord Clement-Jones said that an "ethical approach" to the use of AI would help secure the benefits of the technology while mitigating its misuse.

“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences," he said.

“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.

“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
