Legal evaluation of the attacks caused by artificial intelligence-based lethal weapon systems within the context of the Rome Statute

Published on Sep 1, 2021 in Computer Law & Security Review
DOI: 10.1016/J.CLSR.2021.105564
Onur Sari (KSU: Kent State University), Sener Celik
Abstract
Artificial intelligence (AI), at its current level of development, has become a scientific reality studied not only in computer and software engineering but also in law, political science, and other social sciences. AI systems that performed relatively simple tasks in the early stages of their development are expected to become fully or largely autonomous in the near future. As a result, AI, which encompasses machine learning, deep learning, and autonomy, has begun to play an important role in the production and use of smart weapons. However, questions about AI-Based Lethal Weapon Systems (AILWS) and the attacks such systems can carry out have not been fully answered from a legal perspective. In particular, it remains controversial who will be held responsible for the actions committed by an AILWS. In this article, we discuss whether an AILWS can commit an offense in the context of the Rome Statute, examine the law applicable to the responsibility of AILWS, and assess whether these systems can be held responsible under international law with respect to the crime of aggression and individual responsibility. We find that international legal rules, including the Rome Statute, can be applied to responsibility for an act/crime of aggression caused by an AILWS. However, no matter how advanced the cognitive capacity of AI software becomes, it will not be possible to invoke the personal responsibility of such a system, since it has no legal personality. In that case, responsibility remains with the actors who design, produce, and use the system. Finally, since no AILWS software today contains specific codes of conduct capable of legal and ethical reasoning, the study recommends that states and non-governmental organizations, together with manufacturers, establish the necessary ethical rules, written into software, to prevent these systems from committing unlawful acts and to develop mechanisms that keep AI from operating outside human control.