Humanity cannot afford the risk of giving machines the right to kill. This is not a line from the fictional universe of the Terminator films, in which, according to director James Cameron, the decentralized network of neural-processor supercomputers known as Skynet orders the destruction of the human race on April 21, 2011.
It is the appeal of Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions. On May 30, speaking before the UN Human Rights Council, Heyns argued for a global moratorium on the production and deployment of lethal autonomous robotic systems (Lethal Autonomous Robots, LARs).
"If the drones still require a "man behind the Board", which decides about the use of lethal force, LAR are equipped with computers that decide who will become their goal," worries South African Professor of law. "War without reflection is mechanical slaughter," he said, referring to the thinking human. "As the decision to take someone's life requires at least some discussion, the decision to give a machine the task of destroying human beings requires that [humanity] took a collective pause," warns Haines his colleagues on the human rights body of the UN. He doubts the ability of even the most perfect of artificial intelligence to follow the main rule of modern warfare is to distinguish between legitimate military targets.
The prospect of using artificial intelligence in war is the near future; indeed, that future has already arrived, and nobody consulted Isaac Asimov's three laws of robotics along the way.
"Many people seem unaware that the Autonomous defense system existed for 20 years already and repeatedly by mistake took a human life, says Professor of military law at the school of law. R. Dedman at southern Methodist University in Dallas (Texas) Chris Jenks, an opponent of the ban on development in military robotics. ― During both "wars in the Gulf" Patriot missile system mistakenly identified the enemy and shot down an American plane in one war and British second. And five years ago in South Africa, too, there was a tragic case where a so-called "gun-robot" is out of control, killing nine South African soldiers".
The first fully autonomous drone (the Condor project) was developed by the US Defense Advanced Research Projects Agency (DARPA) in 1988. For more than 20 years the Israeli army has fielded the automatic unmanned aerial vehicle (UAV) Harpy, a flying bomb designed to destroy radar stations. The US Navy operates the Phalanx anti-missile system, which automatically detects, tracks and destroys targets, including anti-ship missiles and aircraft; the ground forces have a land-based version of the complex, known as C-RAM (Counter Rocket, Artillery and Mortar). And only two months ago the US Navy flew the X-47B UAV, which can execute its flight plan with human intervention required only in the most extreme emergencies.
In his report Heyns cites several examples that evoke the Terminator era even more strongly. One is the South Korean robot sentry built by Samsung Techwin, the defense division of Samsung. Equipped with infrared sensors, these robots serve in the demilitarized zone on the border between the two Koreas, monitoring the movement of people; although their actions are directed by a human operator, they also have an automatic mode. Britain's defense industry, for its part, has designed the automatic stealth drone Taranis, capable of intercontinental flights while carrying various weapons to strike both air and ground targets. Taranis is due to make its first test flight this year.
However, as Alistair Burt, Minister of State at the British Foreign Office, assured the House of Commons at hearings on June 17, London has no intention of developing automatic robotic systems for use in war. The United States, in an official Pentagon directive of November 2012, declared its intention to maintain "appropriate levels of human judgment over the use of force." This undeclared moratorium is to remain in place for the next ten years. By the end of that period, it is argued in the US, the Pentagon will be preoccupied with countering threats from China, and an adequate response will require continued development in military robotics.
Still, the systems described above remain quite far from a full-fledged artificial intelligence that determines its own actions in a theater of war. So when Washington and London speak of the need to keep a human hand on the remote control, they can do so only because, for now, removing human control is simply impossible. Yet, as experts at the US National Defense University at Fort McNair note, "technology has developed so rapidly in recent years [...] that the creation of fully autonomous systems is actually likely in the next few years."
IBM is already testing the Watson supercomputer, with a total of 16 terabytes of RAM. Running a hundred statistical algorithms and drawing on 200 million pages of structured and unstructured information, it can understand questions posed in natural language, which allowed it to beat the champions of the American quiz show Jeopardy! Since February 2013 Watson has been officially involved in diagnosing and treating cancer, working with so-called "big data": statistical information from oncology clinics whose volume would be overwhelming for human doctors to digest.
Another corporation, Google, building on work from Stanford University's artificial intelligence lab, has created prototypes of fully autonomous vehicles whose on-board computer tracks hundreds of environmental parameters, down to the facial expressions of drivers in neighboring cars. By August 2012 a dozen of these vehicles had covered a total of 500 thousand kilometers without a single accident on the roads of Nevada and California, two of the three US states where such vehicles are already legally permitted to drive freely. Both cognitive systems are self-learning, which, their developers assure, can be considered a revolutionary breakthrough in robotics.
Such work is not confined to private companies: the pioneer in autonomous robotic systems is the same US defense agency, DARPA, along with other Pentagon research centers such as the Office of Naval Research and the US Army Research Laboratory. Indeed, a DARPA grant helped Google begin developing its driverless vehicles in 2005.
"In the next 25 years the Ministry of defence will focus the activities of its laboratories and industry in the following areas: intelligence, surveillance and reconnaissance (ISR triad), suppression of air defence of the enemy, the destruction of air enemy defenses, electronic attack, combat surface ships [enemy], anti-submarine defense, anti-mine defense, maneuver "ship-purpose", communications support and derivatives [programs] in these areas," can be read in the "road map" of the Pentagon on creating UAVs for the years 2005-2030.
Promising areas of military-robotics research include: biomimetic underwater surveillance systems (that is, systems imitating living creatures) that move, for example, like an electric eel; unmanned surface vessels for anti-submarine defense; fully autonomous reconnaissance drones based on the already fielded Global Hawk UAV; "swarms" of small robotic drones working in concert (this technology, built on "swarm intelligence," is known as Proliferated Autonomous Weapons); and the heavy unmanned ground vehicle Crusher.
By 2020, the authors of another Pentagon road map, this one on self-propelled ground robots, expect the products of their development to reach "full autonomy" from humans in the conduct of hostilities; and by 2030, their colleagues in the US Air Force assure, "machine capabilities will have increased to the point that humans will be the weakest component in a wide array of [defense] systems and processes."
The main advantage of LARs is their ability to make, in fractions of a second, a decision that would take a human operator far longer. Combat UAVs already maneuver under g-loads that could be fatal to a human. Removing the "human factor," advocates of battlefield robots argue, does more than reduce the chance of error. LARs are free of human emotion: they do not act out of fear, revenge or sadistic cruelty. They do not depend on the quality of the satellite link that today's drones rely on, and their missions cannot be derailed by an enemy hacker attack. Nor will they make bad decisions based on a "most probable scenario," as happened in 1988, when an American missile cruiser shot down an Iranian passenger aircraft: in wartime conditions (the US Navy was then protecting Kuwaiti tankers from possible attack by Iran or Iraq, which were at war with each other), its commanders mistook the airliner for an attacking Iranian fighter jet. Some, like Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, believe that using robots on the battlefield would actually reduce non-combatant losses compared with conventional operations conducted by humans. But in conversation with Lenta.ru he adds a caveat: this effect is possible only if LARs are used "appropriately and in limited circumstances."
Robots, he says, make far more reliable scouts: free of fear and the instinct of self-preservation, they can be indispensable in reconnaissance. Since the logic of "shoot first or be shot" does not apply to them, they can get closer to the enemy, spending less time gathering and analyzing information. Another plus is sensors inaccessible to humans, such as infrared, acoustic and synthetic-aperture positioning, which also maximizes the amount of data collected in real time. "At the very least, it is possible that in the not too distant future LARs will be able to distinguish combatants from civilians better, at longer range and with greater accuracy, than people do, which would also reduce the number of civilian victims of a conflict," Professor Jenks tells Lenta.ru. "LARs are not a threat as such, and we have time to think through the appropriate conditions for their use; that is, we must answer the question of when, how and where robots may be used," says Arkin.
In his report and in public appearances Heyns makes similar allowances, but stresses that the main risk of LARs is the potential violation of the rules and customs of war. The Special Rapporteur argues that using autonomous robots in hostilities creates the danger that no one will bear direct legal responsibility for the machines' actions: even the engineers who tune them cannot guarantee how a device will behave in a given situation, let alone the commanders who order them onto the battlefield. Heyns is not alone in this concern. It is shared by experts of the International Committee for Robot Arms Control (ICRAC), and the leading human rights organization Human Rights Watch has launched a dedicated campaign to "Stop Killer Robots."
Yet existing international conventions, opponents of a moratorium counter, do not actually require that a specific individual be held responsible for violations of the principles of warfare; in the absence of such a person, the responsibility falls on the state.
If LARs are incapable of human error, neither do they possess the free will with which nature has endowed humans. Devoid of human feelings, Heyns objects, they will also be deprived of human experience: robots are good only within the narrow circle of their specialization, and they are limited to the information their sensors feed them. The result, he argues in his column in The Guardian, is as if a man were to judge the world from inside a tunnel, unable to see the full picture. "Human beings are fragile, imperfect and may indeed behave 'improperly,' but at the same time they are capable of rising above the minimum requirements of lawful killing," Heyns says. And while the processors that LARs carry are designed above all for quantitative analysis, recognizing and identifying targets by matching them against characteristics preset by the developer, assessing "collateral damage," including possible civilian casualties in an attack, is a problem of qualitative analysis, something machines still cannot do, the Special Rapporteur notes.
This approach, Jenks counters, ignores the "problem of the nature of human judgment." "One of my colleagues once remarked that robots did not claim a single human life in the Second World War, in Rwanda or in Cambodia; humanity managed that by itself," the American scholar says. Ultimately the debate comes down to a purely psychological reaction to the fact that killing will no longer be the prerogative of the human race alone. "We cannot give machines the right to make lethal decisions," Illah Reza Nourbakhsh, professor of robotics at Carnegie Mellon, who recently visited Moscow, sums up this view in conversation with Lenta.ru. A senior member of ICRAC, he has backed Human Rights Watch's call to ban the use of LARs, just as a UN convention banned the use of anti-personnel mines. "You know the answer to that question," he replies when I ask whether he is ready to sign on to Special Rapporteur Heyns's proposed moratorium on the development of combat robots.