UK Government is at risk of "stumbling into dangerous territory" over the use of "killer robots", warn Lords
Report warns autonomous weapons with the power to "hurt, destroy or deceive" human beings could be developed if the Government does not better control the use of AI
The UK Government is at risk of "stumbling into dangerous territory" over the use of "killer robots", according to a new House of Lords report.
The report by the House of Lords Artificial Intelligence Committee says autonomous weapons with the power to "hurt, destroy or deceive" human beings could be developed if the Government does not better control the use of AI.
It says the Government risks “stumbling through a semantic haze into dangerous territory” if it fails to provide more clarity on what classifies as an autonomous weapon.
The peers warn that the Government’s definition of a lethal AI weapon is out of step with that of most other NATO nations, including the US – making it harder to create international laws governing the use of the lethal technology.
Digital Minister Matt Hancock argued that progress was being made.
“We think that the existing provisions of international humanitarian law are sufficient to regulate the use of weapons systems that might be developed in the future," he said.
“Of course, having a strong system and developing it internationally within the UN Convention on Certain Conventional Weapons is the right way to discuss the issue.”
But the report accuses the Government of “hamstringing” international efforts to provide clarity around AI, and warns that it could result in an “ill-considered drift” towards the use of autonomous weapons.
Committee Chair Lord Clement-Jones said that an “ethical approach” to the use of AI would help secure the benefits of the technology while mitigating its misuse.
“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences," he said.
“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.
“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”