UK Government is at risk of "stumbling into dangerous territory" over the use of "killer robots", warn Lords

Written by John Johnston on 16 April 2018 in News

Report warns autonomous weapons with the power to "hurt, destroy or deceive" human beings could be developed if the Government does not better control the use of AI


The UK Government is at risk of "stumbling into dangerous territory" over the use of "killer robots", according to a new House of Lords report.

The report by the House of Lords Artificial Intelligence Committee says autonomous weapons with the power to "hurt, destroy or deceive" human beings could be developed if the Government does not better control the use of AI.

It says the Government risks “stumbling through a semantic haze into dangerous territory” if it fails to provide more clarity on what qualifies as an autonomous weapon.

The peers warn that the Government’s definition of a lethal AI weapon is out of step with those used by most other NATO nations, including the US, making it harder to create international laws governing the use of such technology.

Digital Minister Matt Hancock argued that progress was being made.

“We think that the existing provisions of international humanitarian law are sufficient to regulate the use of weapons systems that might be developed in the future," he said.

“Of course, having a strong system and developing it internationally within the UN Convention on Certain Conventional Weapons is the right way to discuss the issue.”

But the report accuses the Government of “hamstringing” international efforts to provide clarity around AI, and warns that it could result in an “ill-considered drift” towards the use of autonomous weapons.

Committee Chair Lord Clement-Jones said that an “ethical approach” to the use of AI would help secure the benefits of the technology while guarding against its misuse.

“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences," he said.

“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.

“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
