Abdicating Murder to the Machines: the Politics of Autonomous Killer Robots

This essay will examine the near-future phenomenon of autonomous killer robots, explore their implications for defence policy, and sketch what an “autonomous lethality” revolution could look like, drawing on some of the possibilities explored in science fiction. The terms “autonomous killer robots”, “killer robots” and “lethal autonomy” are used interchangeably throughout.

Have we been here before?

Of course, when it comes to examining new transformative technologies, it pays to analyze possible precedents. Many like to compare autonomous killer robots to heat-seeking missiles, or even cannonballs flying out of cannons – the natural conclusion of military technology’s long trajectory of disassociating conscious human control and initiative from the act of killing, stretching all the way back to when Man first decided to throw a rock at an enemy instead of engaging in a brutal, personal and risky melee.

In some ways, this analysis has merit. One could certainly look to history to predict the implications of such new military technologies. The social shock that killer robots could deliver in the future may be very similar to that experienced by WWI soldiers who had hoped for honourable person-to-person combat but were instead appalled by the dehumanizing, uncaring artillery fire from far away that spelt their doom. Alternatively, in more racially charged or ultra-nationalist societies, an autonomous weapons edge could be seen as an affirmation of identitarian supremacy, much as the Gatling gun was seen as an icon of white civilizing missions in colonial adventures.

However, the comparison remains deeply problematic in other ways. While previous advances in military technology did indeed disassociate and distance the killer from his victim, the sheer scale of this new leap in disassociation creates a whole new host of ethical, political, tactical and strategic challenges and opportunities, which will be explored below.

Where will they emerge first?

To understand the implications of killer robots, it is important to first understand what they are likely to look like and where they are likely to be in the near future.

Certainly, we are nowhere close to Terminator-style killer robots roving Earth’s cities as ruthless assassins. While we should not underestimate how quickly technology can accelerate over short periods, we should also keep in mind that we live in a time when autonomous driving is still very much a work in progress.

Yet, in the work of prediction, it is important to consider that some domains and battlefields would uniquely suit the strengths and weaknesses of lethal autonomy. Guarding the Korean Demilitarized Zone is a simple task, without the demands of movement or complex unknown terrain (unlike urban settings, for instance), or even the need to distinguish between friend and foe (any party crossing the DMZ could be considered an active combatant). It requires only patience, alertness and perpetual readiness – traits that a military AI would have in spades. Indeed, this was what the Samsung Techwin SGR-A1 Sentry Guard Robot was initially designed around, though performance tests were inconclusive and the status of the SGR-A1 today remains murky.

In addition, such AI is likely to thrive in domains such as the air and the open ocean, where few distractions exist apart from occasional wildlife. In these domains, the biggest demands are mental: maintaining constant alertness in a three-dimensional space where attacks can come from any direction during gruelling flights stretching over hours, or switching rapidly from high-endurance patrol to high-intensity air-to-air combat while keeping track of communications, geographical position and the possibility of further hostile fire from the air, ground or sea.

With regard to wider, longer-term peacetime operations, these kinds of gruelling duties can be particularly intensive and expensive for naval vessels such as aircraft carriers. The economic aspect should not be underestimated – sailors on such ships are paid far higher salaries than their land-based counterparts due to their extended stays at sea, and in the case of carriers, a vast infrastructure of resupply and deliveries for everything from food to mail must be maintained to support the ship.

For underwater forces, the value of autonomy could lie less in unlocking unique new AI combat capabilities than in the ability to operate unmanned vehicles without constantly receiving stealth-compromising remote commands. Not having to rely on a command station would allow unmanned underwater vehicles to undertake deeper, more extended missions into hostile waters without lifting radio silence. Reportedly, lethal autonomy has been considered critical to a Russian unmanned submarine designed to spread radioactive contamination in enemy coastal areas – a mission too personally dangerous to risk skilled personnel on, and potentially too regionally catastrophic (in case of a premature radioactive release in friendly waters) to leave in untested hands.

With regard to conventional ground forces, however, the AI revolution is likely to be far less pronounced. For one, the regular footsoldier is far cheaper and far more replaceable than, for instance, a highly trained and expensive fighter pilot, which shrinks the cost advantage AI enjoys in this domain. In addition, ground forces typically handle occupation duties. Such duties, which involve establishing good relationships with local elites and broader populations in initially low-trust scenarios with substantial cultural and language gaps, are likely to be far too difficult for military AI to accomplish better than humans in the near future.

New Tactical and Strategic Opportunities and Threats

New tactics could also become far more viable with AI. Swarming tactics, in particular, could prove pivotal in cluttering radar or attacking in perfectly simultaneous, multidirectional offensives across wide fronts. Other swarm tactics could resemble the living structures that ants create in floods using their own bodies (perhaps the final stage of military engineering stemming from Caesar’s overnight fort constructions). Of course, swarming could theoretically remain under human control – a human controller manoeuvring massive swathes of small units in a manner similar to that depicted in Ender’s Game. However, particularly in the air domain, this is likely untenable, as the speed required to manage a large number of individually complex and capable machines is likely to be far beyond the skills of any human being. Hence, for true swarming tactics to be achieved, AI is likely to be critical. Militaries around the world have already grasped this concept and have rushed to initiate swarming AI programmes specifically. Curiously enough, the US has approached this by using brain scans of StarCraft players as a foundation from which AI can be developed.
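To make the coordination problem concrete, below is a minimal, purely illustrative sketch of the classic boids-style flocking rules (separation, cohesion and alignment) that underpin much civilian swarm research. Every class, function and parameter here is hypothetical and invented for illustration, not drawn from any real military programme; the point is simply that each unit steers on local information alone, with no central human controller in the loop.

```python
import math
from dataclasses import dataclass

@dataclass
class Drone:
    x: float
    y: float
    vx: float
    vy: float

def step(drones, dt=0.1, sep_radius=5.0, neigh_radius=20.0):
    """Advance the swarm one tick using local separation, cohesion
    and alignment rules only; no central controller is involved."""
    updates = []
    for d in drones:
        sep_x = sep_y = coh_x = coh_y = ali_x = ali_y = 0.0
        neighbours = 0
        for other in drones:
            if other is d:
                continue
            dx, dy = other.x - d.x, other.y - d.y
            dist = math.hypot(dx, dy)
            if 0 < dist < sep_radius:
                # Separation: steer away from units that are too close.
                sep_x -= dx / dist
                sep_y -= dy / dist
            if dist < neigh_radius:
                coh_x += dx          # Cohesion: drift toward neighbours.
                coh_y += dy
                ali_x += other.vx    # Alignment: note neighbours' headings.
                ali_y += other.vy
                neighbours += 1
        if neighbours:
            coh_x /= neighbours
            coh_y /= neighbours
            ali_x = ali_x / neighbours - d.vx  # steer toward group velocity
            ali_y = ali_y / neighbours - d.vy
        updates.append((d,
                        d.vx + 0.3 * sep_x + 0.05 * coh_x + 0.05 * ali_x,
                        d.vy + 0.3 * sep_y + 0.05 * coh_y + 0.05 * ali_y))
    for d, nvx, nvy in updates:  # apply all updates simultaneously, then move
        d.vx, d.vy = nvx, nvy
        d.x += d.vx * dt
        d.y += d.vy * dt
```

Even in this toy version, the coordination workload grows with the square of the swarm size every tick, which hints at why hand-flying hundreds of such units is implausible for a human controller.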

There could also be large strategic leaps that work hand in hand with tactical innovation. A shift away from a few powerful fighter jets designed to protect precious pilots towards many small, lighter drone-style aircraft would further strengthen the value of swarm tactics. Cost savings otherwise spent on the broad logistics network needed to support an international manned force could also be redirected towards wider force restructuring.

Can killer robots reduce war atrocities?

Furthermore, one might want to consider the argument that killer robots could prevent uniquely human war atrocities. To understand why, it is important to first understand how such atrocities occur.

Many atrocities are born of battle exhaustion, panic, or misdirected and ill-contained feelings of vengeance. Compounded by the stresses of irregular guerilla warfare, cultural gaps and communication breakdowns with occupied peoples, these can lead to disastrous events. To make things worse, these tendencies can easily be intensified exponentially by instigation from superiors. Often in history, middle-ranked officers (or civilian figures) seeking to commit an atrocity, whether as a Machiavellian terror tactic or out of ideological fervour, have issued orders through innuendo and the spoken word, deliberately leaving their names and signatures out through a keen awareness of potential prosecution. These orders were then carried out by subordinates who faced the unenviable choice of committing war crimes directly or being replaced, held responsible for insubordination, or otherwise crippled in their military careers as “unreliable”.

These two factors – breakdowns of discipline and mental state, and the instigation of devious and wily officers (who sometimes do not even constitute a majority within the officer corps) – are primary causes of the most horrific war crimes in human history, such as the Bataan Death March, the Rape of Nanking and the Rape of Berlin.

Killer robots are uniquely equipped to handle both factors. Robots, of course, do not face the mental wear and tear of prolonged combat. Even if programming glitches that were not caught in previous tests and simulations surface in combat, the machines can be set to respond to such unexpected errors by shutting down completely instead of embarking on shooting rampages. At the very least, killer robots will not seek to hide gradually worsening errors and malfunctions – something regrettably common among human soldiers, given the military machismo that frequently dominates soldier culture. Beyond removing the risk of deadly outbursts, lethal autonomous robots also help restrict (or at least punish) war atrocities through a different mechanism: by forcing superior officers to issue clear directives. Killer robots have no patience or mental space for career coercion or incitement by innuendo, and will likely require explicit directives for any action outside the usual military norm.
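As a concrete illustration of the “shut down on unexpected errors” behaviour described above, here is a minimal fail-closed control-loop sketch. The class, the callbacks and the logger name are all hypothetical stand-ins invented for this essay, not any real weapon-control API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sentry_controller")

class FailClosedController:
    """Hypothetical fail-closed wrapper around a sense-decide-act loop:
    any unanticipated error disarms the platform and halts it, rather
    than letting it keep acting on corrupted state."""

    def __init__(self, perceive, decide, act, disarm):
        # perceive/decide/act/disarm are stand-in callbacks, not a real API.
        self.perceive = perceive
        self.decide = decide
        self.act = act
        self.disarm = disarm
        self.halted = False

    def tick(self):
        if self.halted:
            return
        try:
            observation = self.perceive()
            decision = self.decide(observation)
            self.act(decision)
        except Exception as exc:
            # Fail closed: record the fault for later review, disarm,
            # and stay down until humans intervene.
            log.error("Unexpected fault, disarming platform: %r", exc)
            self.disarm()
            self.halted = True
```

The design choice that matters is the default: on any unhandled fault the platform disarms and stays halted until humans intervene, rather than improvising with lethal hardware.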

Thus, machines excel at clear documentation, accountability and recollection – often a problem when pursuing war crimes in tribunals decades after the fact. This is particularly important given how war crimes are often instigated by only a handful of superior officers who break the chain of command to manipulate troops into committing atrocities, as in the 1942 Bataan Death March, which happened under troops technically controlled by Masaharu Homma but was truly instigated by Masanobu Tsuji (who masked his commands with the vague term “orders from HQ”). However, this effect might be counteracted by the fact that war criminals might find it easier to silence robots than human beings. After all, a robot can often be disappeared or replaced far more easily than a human being.
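To illustrate the documentation point, the sketch below shows one way an order log could be made tamper-evident by hash-chaining its entries, so that quietly deleting or rewriting a directive (the robotic equivalent of “disappearing” a witness) breaks the chain. Everything here, from the class name to the fields to the assumption that such a log exists at all, is a hypothetical illustration rather than a description of any fielded system.

```python
import hashlib
import json
import time

class OrderLog:
    """Hypothetical tamper-evident order log: each entry's hash covers
    the previous entry's hash, so selective edits are detectable."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # sentinel hash for the first entry

    def record(self, issuer: str, directive: str) -> str:
        entry = {
            "timestamp": time.time(),
            "issuer": issuer,        # no anonymous "orders from HQ"
            "directive": directive,
            "prev_hash": self.last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self.last_hash = digest
        return digest

    def verify(self) -> bool:
        """Return False if any entry was altered, removed or reordered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A determined insider could still destroy the whole log, of course; a scheme like this only guarantees that partial, selective tampering is detectable – which is precisely the Tsuji scenario.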

Moral Quandaries and Machine Confusion

In many ways, autonomous robots can be considered ultimately compliant with international law, especially (though not only) if they are programmed in accordance with an international standard framework. Yet laws, even international laws, are not meant to be taken as complete guidelines for human behaviour; exceptions, caveats and nuances must always be made. After all, courts exist for a reason – to weigh all the special circumstances of every case through human evaluation. Without even the vaguest capacity for human judgement, killer robots run the risk of making catastrophic mistakes.

This can be seen even in the example of the Korean DMZ, where special circumstances can often arise – as in 2017, when a desperate North Korean soldier ran across the DMZ to reach the South and was shot five times by his Northern brethren in the process. To a military AI, the visual profile of a soldier charging across the border, accompanied by the sound profile of gunfire, might look like a war scenario in which lethal force would be more than justified. (Note: this would be even worse for the abovementioned SGR-A1 system specifically, which requires a voice-recognized access code to override its lethal protocols.)

In addition to the humanitarian tragedy such an incident could cause, one must also remain conscious of the strategic cost of killing off a potential willing intelligence source emerging from an immensely militarized and secretive section of an already generally militarized and secretive country. Nor should the political costs be underestimated – such killings can provoke outrage among domestic populations (particularly in South Korea, where the enemy is seen as a brother people) and serve as useful propaganda material for keen-eyed adversaries. There is also the escalation issue – rising gunfire in hotspots such as the DMZ or the India-Pakistan border always carries the risk of sparking an escalatory cycle (particularly once bodies start hitting the ground) that neither side wants.

Other examples can easily be extrapolated. Whether to shoot (and shoot to kill, in particular) child soldiers, how to distinguish friend from foe in irregular combat, and other complex grey-area dilemmas dominate the modern battlespace. In these contexts AI might struggle, and the associated human lines of accountability and review might struggle even more. After all, who is to be held responsible if unnecessary lethal force is deployed? The staff officer who decided to launch the weapons months ago? The engineer who decided against yet another test or simulation? A manufacturer who might possibly have made a mistake? With present military tribunals already often accused of bad legal process and nepotism, such questions are likely to worsen military due process, eroding its legitimacy in the eyes of both international audiences and the domestic civilian public.

What can science fiction tell us?

At the very least, science fiction hammers home the importance of clarity, and the risk of unintended consequences, in the programming of these robots. Fictional examples such as the MCU’s Ultron and HAL 9000 serve as useful guides on how conflicting priorities can lead to horrible outcomes. Even frameworks such as Isaac Asimov’s Three Laws of Robotics (obviously unusable in a warfighting context) are shown being circumvented in examples such as the I, Robot and Aliens films. When used to guide an AI with no sense of overarching morality, objectives and rules need to be thoroughly foolproof and constantly reviewed for possible contradictions or loopholes.

In addition, one must keep in mind the trade-offs of granting autonomous weapons wider degrees of intelligence. Technological solutions are not always the right answer to technological problems. If scientists respond to battlefield moral dilemmas by granting killer robots the capacity for moral reasoning, that opens up a whole new can of worms, thoroughly explored throughout fiction in instances where robots gain self-awareness.

Can we stop it?

Ultimately, the development of autonomous lethality is unlikely ever to be seriously checked.

In the short term, states with more developed AI have already accelerated research and have even begun attempts at export. The Chinese firm Ziyan, for instance, has exported its Blowfish A3 to multiple Middle Eastern governments. As noted in a previous article, exports will likely accelerate research even further, as combat data from different terrains and environments can be used to patch flaws and test weapons.

In the long term, a mutual international consensus against killer robots is unlikely, given that killer robots lack any coherent, overarching victimization experience. Unlike landmines or chemical weapons, there are no signature wounds characteristic of the weapon, like blown-off legs or scarred lungs. Indeed, it can be difficult even to tell whether a weapons platform, such as a drone or a tank, is controlled by a human or an AI. Without a coherent opponent to encode into iconography and rally against, movements to control the development of lethal autonomy are unlikely to gain traction. Under such conditions, it would be unwise to compare campaigns to ban chemical or biological weapons with any attempt to ban autonomous killer robots. Rather, the latter would be more akin to attempting to ban a targeting computer or even a rifle scope.

Even if state constraints could be universally adopted, the general development of AI will carry on, meaning that even if AI is not designed around weapons, it will still, in the long run, grow intelligent enough to operate them.

Conclusion

Of course, in our present reality, AI will likely first enter the military through avenues other than frontline combat. Tasks such as equipment repair, logistics, or even recognizing military installations from satellite imagery will suit our AI comrades far better than full frontline combat duties.

If living through the nuclear age has taught us anything, it is that we need not be restricted to a binary between complete non-proliferation and totally reckless military technology spread. Norms can be set. Graduated verification processes and enforcement mechanisms can be developed. By setting up such systems, the advantages of killer robots (such as accountability and atrocity prevention) can be preserved while the disadvantages are minimized.

Further Reading

Carpenter, C. (2013, July 3). Beware the Killer Robots. Retrieved from https://www.foreignaffairs.com/articles/united-states/2013-07-03/beware-killer-robots
Cebul, D. (2017, December 20). The Future of Autonomous Weapons Systems: A Domain-Specific Analysis. Retrieved from https://www.csis.org/npfp/future-autonomous-weapons-systems-domain-specific-analysis
Freedberg, S. J. (2019, March 11). Should We Ban ‘Killer Robots’? Can We? Retrieved from https://breakingdefense.com/2019/03/should-we-ban-killer-robots-can-we/
Fryer-Biggs, Z. (2019, September 3). Coming Soon to a Battlefield: Robots That Can Kill. Retrieved from https://www.theatlantic.com/technology/archive/2019/09/killer-robots-and-new-era-machine-driven-warfare/597130/
Garcia, D. (2014, May 10). The Case Against Killer Robots. Retrieved from https://www.foreignaffairs.com/articles/united-states/2014-05-10/case-against-killer-robots
Makichuk, D. (2019, November 8). Is China exporting killer robots to Mideast? Retrieved from https://asiatimes.com/2019/11/is-china-exporting-killer-robots-to-mideast/
Scharre, P. (2017, November 14). We’re Losing Our Chance to Regulate Killer Robots. Retrieved from https://www.defenseone.com/ideas/2017/11/were-losing-our-chance-regulate-killer-robots/142517/
Sulmeyer, M., & Dura, K. (2018, September 5). Beyond Killer Robots: How Artificial Intelligence Can Improve Resilience in Cyber Space. Retrieved from https://warontherocks.com/2018/09/beyond-killer-robots-how-artificial-intelligence-can-improve-resilience-in-cyber-space/
Tucker, P. (2019, November 5). SecDef: China Is Exporting Killer Robots to the Mideast. Retrieved from https://www.defenseone.com/technology/2019/11/secdef-china-exporting-killer-robots-mideast/161100/
Ware, J. (2019, September 24). Terrorist Groups, Artificial Intelligence, and Killer Drones. Retrieved from https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones/
West, D. M., & Karsten, J. (2019, May 10). It’s time to start thinking about governance of autonomous weapons. Retrieved from https://www.brookings.edu/blog/techtank/2019/05/10/its-time-to-start-thinking-about-governance-of-autonomous-weapons/
Work, R. O., Scharre, P., & Atherton, K. D. (2018, November 15). Are Killer Robots the Future of War? Parsing the Facts on Autonomous Weapons. Retrieved from https://www.cnas.org/press/in-the-news/are-killer-robots-the-future-of-war-parsing-the-facts-on-autonomous-weapons
