Thursday 30 May 2013


Keeping a Human Finger on the Killer Robot's Trigger - Pacific Standard

Israel's vaunted Iron Dome anti-missile system is a current example of a human-on-the-loop weapon system that could easily evolve into a human-out-of-the-loop one.

We told you about the loving robots and the existential-threat robots, and now it looks like the United Nations is triangulating between those poles as it urges humankind to be careful about developing autonomous warrior robots. The concern is driven less by a future Terminator than by the present explosive growth of drone warfare, which wedges open the door to increasingly automated killing.

Christof Heyns, the U.N.'s special rapporteur on extrajudicial, summary or arbitrary executions, has urged the world's militaries to pause in producing such "lethal autonomous robotics" until they can get a handle on both the as-yet unwritten international law surrounding killer robots and the ethical (and ultimately existential) concerns presented by the unchecked advancement of technology.

In an annual report to the U.N. delivered last month, Heyns wrote:

If left too long to its own devices, the matter will, quite literally, be taken out of human hands. Moreover, coming on the heels of the problematic use and contested justifications for drones and targeted killing, [lethal autonomous robots] may seriously undermine the ability of the international legal system to preserve a minimum world order.

The key aspect, in Heyns' view, is that word "autonomous." From sticks and stones to poison gas and nuclear weapons, mankind has always found ways to improve lethality. But someone always had to be ready to wield those systems, even if the effort required was no more than pushing a button or turning a key. Human action is no guarantee that nothing bad (beyond the bad that was already intended) will happen, as incidents from friendly fire to the infamous Baghdad Apache strike immortalized via WikiLeaks demonstrate. Automated killing machines might actually reduce some of those losses, yet their inability to comprehend nuances of proportionality, much less to process things like compassion (or, to be fair, as Heyns points out, "revenge, panic, anger, spite, prejudice or fear"), should make mankind shudder. An added fear: much like dumb old landmines, autonomous systems might not know when the war is over.

As Human Rights Watch pointed out last November in its report, "Losing Humanity: The Case against Killer Robots," we're seeing a rapid evolution from "human-in-the-loop weapons" through "human-on-the-loop weapons" to "human-out-of-the-loop weapons."

Fully autonomous weapons, which are the focus of this report, do not yet exist, but technology is moving in the direction of their development and precursors are already in use. Many countries employ weapons defense systems that are programmed to respond automatically to threats from incoming munitions. Other precursors to fully autonomous weapons, either deployed or in development, have antipersonnel functions and are in some cases designed to be mobile and offensive weapons. Militaries value these weapons because they require less manpower, reduce the risks to their own soldiers, and can expedite response time.

Leading the charge, says HRW, is the United States, which is "coming close to producing the technology to make complete autonomy for robots a reality." Once in place, according to HRW and to Heyns, such killing machines could in no way observe the promulgated rules of, umm, civilized warfare, much less Isaac Asimov's fictional Three Laws of Robotics (No. 1: "A robot may not injure a human being or, through inaction, allow a human being to come to harm").

This dominance by the U.S. and its allies could be a force for good, since it suggests that Pandora's box could yet be shut while we figure out what ills lie inside. The opportunity is fleeting, though. Heyns offers the rapid spread of drones as a key data point illustrating why robot moratoria are needed now.

This is further complicated by the arms race that could ensue when only certain actors have weapons technology. The current moment may be the best we will have to address these concerns. In contrast to other revolutions in military affairs, where serious reflection mostly began after the emergence of new methods of warfare, there is now an opportunity collectively to pause, and to engage with the risks posed by [lethal autonomous robotics] in a proactive way.

One example of how fast this has evolved comes from the U.N. itself. In 2010, Heyns' predecessor as special rapporteur, Philip Alston (whom we profiled here), also addressed the use of "robotic technologies" in carnage. Alston noted that many in the human rights community "see advances in robotics as an exotic topic that does not need to be addressed until the relevant technologies are actually in use"; he himself merely suggested "urgent consideration" of the implications of robots, both for war and peace.

Heyns' request for "a pause," as he puts it, in the headlong pursuit of robotic weapons systems contrasts with the sterner, if much less likely to be adopted, recommendations from Human Rights Watch or the always vigilant computer scientist Noel Sharkey, head of the International Committee for Robot Arms Control. HRW called for an all-out ban on "the development, production, and use of fully autonomous weapons" at the national and international levels, as well as reviews of technologies that might lead to those weapons and the establishment of a roboticists' code of conduct that presumably would include plenty of pacific intentions.

There has been work on developing ethical and legal frameworks, although not always with the haste urged by HRW and Heyns. In December, for example, Kenneth Anderson and Matthew Waxman of the Hoover Institution rejected the idea that there is a "crisis" at hand, favoring instead "the gradual evolution and adaptation of long-standing law-of-war principles" to address the new robotic realities. But they do call for addressing them, and with the U.S. in the lead, lest other agencies grab the high ground on governance. And beyond Sharkey, academics, both independently and for the Pentagon, have been pondering the ethics of autonomy for years (although concerns about liability seem to fall short of an Augustinian plane). The Pentagon proper issued guidelines just last November.

And back to those friendly robots that people fall in love with, the ones Robert Ito described for us last fall. Yes, they are becoming socially adept, and we might yet figure out an algorithm for compassion. But those will not be the Roombas the Pentagon is buying.



Source: http://www.psmag.com/legal-affairs/keeping-a-human-finger-on-the-killer-robots-trigger-58897/