15 June 2016
I recently had the chance to visit the United Nations Office at Geneva, with some funding from Lancaster University Law School (my PhD is funded by the Arts and Humanities Research Council through the North West Consortium Doctoral Training Partnership). I was there to observe the 2016 Informal Meeting of Experts on Lethal Autonomous Weapon Systems (LAWS), which are discussed under the auspices of the Convention on Certain Conventional Weapons (CCW).

The meeting covered a series of topics, with expert panels mostly made up of academics and military officials or advisers, followed by questions and comments from State delegates and, sometimes, civil society NGOs. There was quite a lot of variety in the interventions: some were short, usually making clear which points a particular delegation agreed or disagreed with, while others were somewhat longer. Some were very interesting and gave explanations of States' policies. Quite often, it seemed States set out their positions in order to find out whom they could do deals with later on, so as to move forward on the issues.

The NGOs were all hoping for a pre-emptive ban on LAWS, or 'Killer Robots', with support from some States (including a few that gave their support at the meeting). Most States, however, seemed either apathetic or opposed to a ban because they saw the potential military utility of a fully-autonomous weapon system, even if they are not currently considering developing one. Indeed, both the US and UK mentioned several times their own desire to retain human control over future weapon systems.

The military utility of a fully-autonomous weapon system (i.e. a weapon without direct human control, which could choose for itself how to interpret orders, where to go, and whom to target) revolves around the fact that, due to the increasing use of computerisation (on all sides, even in conflicts with Non-State Actors), the tempo of operations in warfare has accelerated in recent years, both in terms of the speed at which operations need to be completed and the number of operations that need to be launched to complete goals. If a fully-autonomous weapon system were fielded, it could make decisions much faster than humans, or even human-machine teams (what the UK called the 'intelligent partnership'), and could therefore launch and complete missions at a much higher tempo, allowing the enemy to be overcome and dominated.

NGOs and the civil society representatives took the view that this is unethical, immoral, and illegal. There were some fantastic legal arguments in favour of a ban, notably in relation to how human control over weaponry could be required under the 'principles of humanity', as mandated by the Martens Clause. However, most of the NGOs' comments on legality were clouded by their ethical standpoint, leading to legal analysis that lacked depth and sometimes misrepresented the law. The most memorable incident of this was when an NGO representative misunderstood one of the expert panellists, who had said that LAWS are not illegal per se, meaning there is nothing that specifically prohibits LAWS in international law. The representative misused this view, implying that the expert thought there were no legal issues with LAWS at all. This was interesting, considering that the NGOs are actively campaigning for an international legal instrument that would prohibit LAWS.

States made lots of good points, and some not so good ones. The issue of weapons reviews came up quite a lot. During procurement, States are required to perform a review to ensure that weapons: are not prohibited by an instrument of international law; will not cause superfluous injury, unnecessary suffering, or long-term environmental damage; and can reliably hit the targets they are aimed at (for more on this, see William H. Boothby, 'Weapons and the Law of Armed Conflict').

However, a related issue was trying to get States to actually carry out the weapons reviews they are mandated to perform under Art.36 of Additional Protocol I to the Geneva Conventions, rather than relying on manufacturer data or accepting other States' reviews as good enough for their own. Several experts suggested that if States carried out good weapons reviews, there would be no need for a ban on LAWS, as any weapon that could not be legally employed would fail review. Campaigners did not think reviews would be sufficient, because States could easily fall back into their current bad habits, which is a fair position.

Whether or not a ban would hamper private companies developing AI technology was a constant conversation throughout the week. States noted that a ban hampering civilian research would be unacceptable, while AI specialists said they were in favour of a ban. This is because many AI researchers do not want their work to be weaponised, and a ban would allow them to march ahead in civilian research without worrying that it could be used lethally later on. Most interestingly, a robotics industry expert noted that technology available today could be repurposed to create a rudimentary autonomous weapon system, given the time, resources, and manpower. The major threat here is that whilst militaries have to go through very long acquisition processes to ensure they are getting highly reliable equipment, a terrorist group can use equipment that is unreliable – a terrorist group whose technology works only 50% of the time still inflicts losses on States. Thinking about the potential for terrorists to field a form of autonomous weapon system whilst militaries do not is very important.

Definitions were, as you might expect, also very important. Common understandings of 'autonomous', 'lethal autonomous weapon system', and 'meaningful human control' were all discussed, but common definitions eluded the meeting. NGOs noted that in previous treaties, clear-cut definitions were not decided upon until the last moment. However, I think the general working definitions used in the room were mostly understood by all parties.

The most common definition used came from the US and described LAWS as 'A weapon system that, once activated, can select and engage targets without further intervention by a human operator.' UN Special Rapporteur Christof Heyns agreed with this definition, and most of the NGOs take a similar position focussed on what they refer to as the 'critical functions' of a weapon system, i.e. to select and engage targets. Human Rights Watch takes the approach of looking at autonomy through different levels of human control, defining LAWS as weapon systems where a human is 'out of the loop'. I think the former is the best approach; after all, it is these critical functions that separate LAWS from all other weapons and cause them to be so problematic for law and ethics. However, the latter viewpoint is still useful for considering the level of control that has been delegated to autonomous systems.

'Meaningful human control' was a term repeatedly raised by the NGOs as something that would be required in any weapon system with autonomy, which they hope would result in humans always making any decision to use lethal force. The control becomes meaningful when: technology is predictable and reliable enough to carry out a commander's orders as instructed; decisions are made by humans who use deliberate judgement and do not just blindly follow computer recommendations; and there is a framework of accountability. I think this is a good concept, but it may be more flexible than NGOs would like. A commander setting out target criteria prior to a LAWS mission could be using human judgement with totally predictable technology and a clear accountability chain, and would still have meaningful human control under this definition, despite not actually 'pulling the trigger'.

At its conclusion, the meeting decided to recommend to the body above it (the CCW Fifth Review Conference) the creation of a Group of Governmental Experts (GGE) to further discuss the issue and report back in 2018. Because a consensus view was required, the recommendations do not say an awful lot of substance; there is too much compromise for them to say anything of real worth. So I am not sure whether this means the whole issue has been kicked into the long grass, with no chance of a pre-emptive ban, considering that the GGE will take two years to report non-binding recommendations (if the Fifth Review Conference actually creates the GGE), and there will probably be years of discussion afterwards. However, this should not take away from the progress at the meeting: these are the first steps forward the UN has made in three years of discussions, and five more States joined with the NGOs in calling for a pre-emptive ban on LAWS during the conference, bringing the total to 15 of the 121 CCW State parties and five signatories. Taking all this into account, and the feeling of the room at the conference, I think there could eventually be an agreement requiring human control over LAWS. Although, whether this would be as meaningful as the NGOs want, I am not sure.


Joshua's (@JoshGHughes) PhD research concerns the legality of autonomous weapon systems (AWS). He is looking at: how AWS are covered, and their use governed, by international law; legalities of historical and contemporary situations of automated killings by militaries; legal issues specific to AWS; accountability for AWS actions.

If you would like to know more about Joshua's research, you can view his research profile at http://www.research.lancs.ac.uk/portal/en/people/joshua-hughes, or follow the link to his blog, The Law of Killer Robots.