Humanity is on the brink of a new era of warfare.
Driven by rapid advances in artificial intelligence, weapons platforms that can detect, target and decide to kill humans on their own, without an officer leading the attack or a soldier pulling the trigger, are rapidly changing the future of conflict.
Officially, they’re called Lethal Autonomous Weapon Systems (LAWS), but their critics call them killer robots. Many countries, including the US, China, the UK, India, Iran, Israel, South Korea, Russia, and Turkey, have invested heavily in the development of these types of weapons in recent years.
A United Nations report estimates that Turkish-made Kargu-2 drones ushered in this new era when they attacked militants in Libya in 2020 amid the country’s ongoing conflict.
Autonomous drones have also played a pivotal role in the war in Ukraine, where both Moscow and Kyiv have deployed these unmanned weapons to attack enemy soldiers and infrastructure.
The great public debate
The emergence and rapid development of these machines has sparked an intense debate among experts, activists and diplomats around the world over the potential benefits and risks of deploying them. Meanwhile, the question of whether their development should be halted here and now heightens tensions and remains unanswered.
However, in an increasingly divided geopolitical landscape, can the international community reach some consensus on these machines? Can the moral, legal, and technological threats posed by such weapons freeze them before they dominate the battlefield? Is a blanket ban feasible or is the introduction of a set of regulations a more realistic option? Al Jazeera’s lengthy article addressed these questions, posed them to leading experts in the field, and got some food for thought.
A short first answer is that a complete ban on autonomous weapon systems does not seem likely anytime soon. The world is divided between those who believe strict standards should be introduced that would essentially nullify the scope of their use, and those drawn in by the promises of certain victory these weapons appear to offer.
Yasmin Afina, a research associate at the London-based think tank Chatham House, described to the House of Lords in March how the US National Security Agency (NSA) at one point misidentified an Al Jazeera journalist as an al-Qaeda operative. The incident, which led to the journalist being placed on a US persons-of-interest list, came to light in the 2013 leak of documents by NSA contractor Edward Snowden.
While the surveillance system behind that incident is not itself lethal, it can lead to death: in this case the journalist's, and in other circumstances the deaths of many others, Afina argued.
The possibility that LAWS could trigger an escalating chain of events resulting in mass deaths concerns Toby Walsh, an expert in artificial intelligence at the University of New South Wales in Sydney, Australia. "We know what happens when we pit complex electronic systems against each other in an uncertain and competitive environment. It is called the stock market," he wrote in his submission to the House of Lords.
But that does not mean, he argues, that researchers should stop developing the technology behind autonomous weapon systems, since it can bring significant benefits in other fields.
For example, the same algorithms are used in car safety systems designed to avoid collisions with pedestrians. "It would be morally wrong to deny people the opportunity to reduce traffic deaths," he continues.
One possibility would be to contain these weapons the way chemical weapons have been contained: through the United Nations Chemical Weapons Convention, which prohibits their development, production, stockpiling and use. "We can't put Pandora back in the box, but these measures have broadly succeeded in limiting misuse around the world," he concludes.
The benefits of Artificial Intelligence
From a military standpoint, AI-driven autonomous weapon systems offer clear benefits. For example, they can carry out some missions without soldiers on the ground, reducing the risk of casualties.
Proponents stress that these systems can also reduce human error in decision making and eliminate bias, while their accuracy can, in theory at least, reduce losses.
For some other experts, however, the dangers of LAWS outweigh the benefits of using them, as potential technical failures, violations of international law, and ethical concerns about machines that make life-and-death decisions cannot be ignored.
And who is responsible?
At the center of all this is the burning question of accountability: who is to blame when things go wrong, and who bears responsibility when a system fails with catastrophic results?
For example, if robots commit a war crime, should the commanding officer overseeing the operation be held to account, or the superior who decided to deploy the machines in the first place? Or should it be the person who built them who ends up in the dock?
All of this "represents a huge gap in the policy debate on the subject," add researchers Vincent Boulanin and Marta Bo of the Stockholm International Peace Research Institute (SIPRI).
For them, what is needed is “to specify exactly which weapon or scenario may be problematic.”
In simple terms, two distinct sets of rules are needed: one identifying which weapons should be banned outright, and another setting the standards a system must meet for its use to be permitted.
“The million dollar question is which of them can fit in both baskets.”
In any case, for Walsh, beyond the question of whether the rules should be political or legal, and beyond the lack of trust between states, the most fundamental problem is that critical decisions on the battlefield will be made by machines that lack human empathy.
“It is a lack of respect for human dignity,” he concludes.