
The use of artificial intelligence in war


The capabilities of artificial intelligence (AI) systems are evolving swiftly and influence almost every sector of society. We are mostly familiar with the civilian uses of AI, such as image recognition to unlock smartphones, voice assistants that tell us the weather, or that new Instagram filter your friend has just used. However, such systems are also being implemented in applications far less familiar to many of us. One of them is the military domain. Taís Fernanda Blauth analyses the legal aspects of these applications and briefly answers some questions about autonomous weapons.

Text: Taís Fernanda Blauth, Photos: Henk Veenstra

Taís Fernanda Blauth

AI-based or AI-enabled weapons are not the futuristic robots familiar to sci-fi fans, such as those in The Terminator. But the use of AI in war can still cause widespread damage and has profound ethical, legal, and socio-political implications.

AI in weapons

Technological advances are not only applied to the hardware and equipment we use in our daily lives. Throughout history, they have also been continuously incorporated into weapons and devices used in warfare. The same goes for AI. Many systems already use AI to some degree, but the next big step envisioned by many is the development and adoption of fully autonomous weapons systems, known as ‘Lethal Autonomous Weapons Systems’ (LAWS).

What are LAWS?

LAWS are systems that, once activated, can track, identify, and attack targets without further human action. The term ‘autonomous’ is not ideal in this context, since AI systems do not possess moral capability; they merely execute what they have been designed and trained to do. Moreover, such algorithms need vast amounts of data, and the models must be trained extensively to perform as desired. Nevertheless, this is the most well-known term, and it is the one used in discussions of the topic within the United Nations framework.

Autonomy in weapons

AI technology has been incorporated into weapons systems for quite some time, and some semi-autonomous systems are already in use; different levels of autonomy in weapons have been a reality for decades. One example is the close-in weapons system (CIWS), common in air defence, which uses radar to identify and track incoming threats. Once a threat is detected, a computer-controlled fire system can select and autonomously attack it. A well-known example of a CIWS is the Dutch ‘Goalkeeper’. This type of system is set up to defend a specific area and can only operate in simple, structured environments. Even though it could be considered an autonomous weapon, its range of deployment and its applications are limited.

Blauth: ‘One of the challenges of LAWS is the difficulty of verifying when a weapon has operated autonomously, without human oversight.’

Are there LAWS already?

Until recently, there was no public information about the offensive use of autonomous weapons. In March 2021, the UN released a report suggesting that a 2020 drone airstrike in Libya was conducted by an autonomous weapon without human control. This illustrates one of the challenges of LAWS: the difficulty of verifying when a weapon has operated autonomously, without human oversight.

Should robots be allowed to kill?

As mentioned earlier, an AI system has no morality, which also means it cannot understand or perceive humans as moral subjects. When an autonomous weapon is deployed in warfare, it is unable to reflect on and comprehend the value of life, including the lives of the ‘targets’ in front of it. People become mere data points, potential targets to be eliminated. Algorithms can do extraordinary things, but battlefields are complex environments that demand a high level of moral and ethical judgement. Given that such weapons would be deployed in complicated and unanticipated scenarios, would it be acceptable to let machines decide who should be killed? This is a fundamental question that needs to be addressed and widely discussed.

Widespread concerns

Concern about the development and use of these weapons is widespread. The ‘Campaign to Stop Killer Robots’, a coalition of non-governmental organizations advocating a ban on LAWS, was launched in 2013. Later, in 2015, AI experts, roboticists, and prominent industry figures endorsed an open letter also calling for a ban on such autonomous weapons.

Regulating LAWS

Several non-governmental organizations, civil society groups, government representatives, and industry members have stressed the importance of regulating LAWS. Some support the view that a complete ban is the way forward, given the risks. Others argue that there should at least be a requirement for meaningful human control over the deployment of these weapons. At the moment, the topic is under consideration by governments within the framework of the United Nations. State parties to the Convention on Certain Conventional Weapons (CCW) held annual meetings on the issue from 2014 until 2016, when they decided to establish a Group of Governmental Experts. The group has been meeting since 2017, but to date no agreement has been reached and no regulation has been established.

Blauth: ‘I will analyse the legal aspects of the use of AI in military applications, including issues involving international humanitarian law.’

How does my research contribute to the topic?

In my PhD, I will analyse the legal aspects of the use of AI in military applications, including issues involving international humanitarian law (IHL). More specifically, I will discuss the applicability of the IHL principles of proportionality and discrimination. I will also evaluate the ethical aspects of LAWS in order to address fundamental questions about the use of these weapons. An in-depth understanding of these aspects is essential for evaluating the possibilities for regulating the topic. As I am doing my PhD at an interdisciplinary faculty, Campus Fryslân, I am able to combine my background in law and international politics with my research.
