SCIENCE

Could autonomous robots be more dangerous than nuclear bombs?


Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time last year, according to a recent UN Security Council report on the civil war in Libya. History could well identify this as the starting point of the next great arms race, one that has the potential to be humanity’s last.

Autonomous weapon systems are robots armed with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The US alone budgeted $18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on the development of such weapons. Without such checks, foreign policy experts warn, disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a human rights specialist with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the US president’s minimally constrained authority to launch a strike – more unsteady and more fragmented.

Lethal errors and black boxes

I see four main dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?

The problem here is not that machines will make such errors and humans will not. It is that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by a single targeting algorithm deployed across an entire continent – could make misidentifications by individual humans, such as the recent US drone strike in Afghanistan, look like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after the trigger is released. The gun keeps firing until its ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI need not even be defective to produce the runaway gun effect. As numerous studies of algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly through populations.

For example, a neural network designed for use in Pittsburgh hospitals identified asthma as a risk reducer in pneumonia cases; image recognition software used by Google identified African Americans as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often do not know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine a morally responsible development of autonomous weapon systems.

Proliferation problems

The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control their use. But if the history of weapons technology has taught the world anything, it is this: weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside government control, including international and domestic terrorists.

High-end proliferation is just as bad. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will diminish two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what UN Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and the risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo while launching and maintaining wars.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think of the global instability caused by Soviet and US military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.

Undermining the laws of war

Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers in combat does not give the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the UN’s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. That would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A new world arms race

Imagine a world in which militaries, insurgent groups, and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk, at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.

[Get our best science, health and technology stories. Sign up for The Conversation’s science newsletter.]

James Dawes, Professor of English, Macalester College

This article is republished from The Conversation under a Creative Commons license. Read the original article.




