
Analysing the Possibility of Imposing Criminal Liability on AI Systems

- By Amishi Aggarwal


(This article is Part I of a two-part series.)

Introduction

The increasing role of Artificial Intelligence (AI) in human life and the advancements in its functioning have raised numerous questions. AI entities have attempted to escape labs, made racist comments, injured, and even killed people. Usually, crimes committed by an AI entity have been reducible to humans: it was either used by a human to commit the crime, its programming was faulty, or it was reasonably foreseeable that it might commit a crime if not controlled appropriately. In all these cases, criminal liability could be attributed to its human users through the existing criminal law regime. When an AI is used by a human to commit a crime, both mens rea and actus reus can be attributed to that human, making them liable. When an AI commits a crime which was reasonably foreseeable, criminal negligence can be attributed to the human user.

However, thornier questions arise when the act done by an AI cannot be reduced to humans. AI can act unpredictably, autonomously, and unexplainably. Many AI systems rely on technologies in which a computer program initially created by humans further develops (Abbott, p. 330) in response to data, without explicit programming. Hence, AI can engage in activities its original programmers may not have intended or foreseen. It often becomes very difficult to determine why or how an AI acted the way it did. Though it is theoretically possible to explain how an AI behaved in such cases, it becomes impracticable due to the resource-intensive nature of such an enquiry.[1] It is in these cases, where the AI acts unpredictably and autonomously, that the question of imposing criminal liability directly on the AI arises. The existing criminal jurisprudence does not have the rules and regulations to deal with crimes committed by AI systems which are irreducible to humans. Hence, it becomes important to deal with the issue of imposing criminal liability directly on AI.
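To make this concrete, the following is a minimal, purely illustrative sketch in Python (the training data and labels are invented for the example) of a learning system: the programmer writes only a training procedure, and the decision rule the system ultimately applies emerges from the data it is exposed to, which is why its behaviour may not be foreseeable even to its creators.

```python
# Illustrative sketch only: a perceptron whose decision rule is learned
# from data rather than explicitly programmed. The final weights (and
# hence the behaviour) are a product of the training data, not of any
# rule the programmer wrote down in advance.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs; labels are 0 or 1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical training data: the programmer never writes an explicit
# classification rule; it emerges from whatever data the system sees.
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
weights, bias = train_perceptron(samples, labels)
print(weights, bias)
```

Change the labels and the learned rule changes with them; nothing in the code itself fixes in advance what the system will do.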

Delving into this issue necessitates thinking through the possibility of attributing both mens rea and actus reus to AI. An AI entity must possess mens rea and actus reus to be eligible for criminal liability. However, it may not be prudent to impose criminal liability on AI if doing so has no affirmative benefits, or if the negative consequences of direct criminal punishment of AI outweigh the positive ones. Moreover, it may not be feasible to impose direct criminal liability on AI if better alternatives exist that can provide substantially the same (or greater) benefits as direct AI punishment. In light of this context, this paper looks at the possibility of directly imposing criminal liability on AI in cases where the crime committed by it is irreducible to humans.

Possibility Of Attributing Mens Rea To AI

In the medieval era, animals were tried for criminal offences and even awarded punishment. It was only in the eighteenth century that it was established that animals lack the most basic requirement of criminal culpability: mens rea. It has been argued that imposing criminal liability directly on AI would be equivalent to imposing it on animals, as AI is not capable of forming the requisite mens rea.[2] However, this argument contemplates AI as a mere machine, which can only function as its human developers or users want it to. Gabriel Hallevy argues that AI can fulfil all the mens rea requirements needed to impose criminal liability, e.g., knowledge, intent, and negligence.[3]

Knowledge is defined as the sensory reception of factual data and its understanding (p. 1372), and most AI systems are well equipped with sensory receptors for sights, sounds, and physical contact for such reception (p. 108). These receptors transfer the factual data received to central processing units, which analyse it in a process very similar to the functioning of the human brain. Hence, AI can be argued to meet the knowledge standard requisite for a crime.

Specific intent is the existence of a purpose or an aim that a factual event will occur, in furtherance of which the perpetrator commits the offence.[4] AI is generally programmed to function by setting aims and acting towards accomplishing them, and in many cases these aims are set by the AI itself, as the sketch below illustrates. Hence, specific intent can also be attributed to AI.
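The following is a minimal sketch, again purely illustrative and with invented function names, of what "setting aims and acting towards accomplishing them" can look like in code: the top-level goal is supplied externally, but the intermediate aims are derived by the program itself at run time.

```python
# Illustrative sketch only: a goal-directed agent. The top-level aim is
# given from outside, but the intermediate aims (subgoals) are worked out
# by the program itself -- a loose analogue of an entity setting aims and
# acting towards accomplishing them.

def plan(position, goal):
    """Derive the sequence of intermediate positions (subgoals) the agent
    sets for itself on the way to the externally given goal."""
    step = 1 if goal > position else -1
    return list(range(position + step, goal + step, step))

def act(position, goal):
    for subgoal in plan(position, goal):  # aims the agent set itself
        position = subgoal                # act towards each aim in turn
        print(f"moved to {position}")
    return position

act(0, 3)  # pursues the self-derived intermediate aims 1, 2, then 3
```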

Some offences require feelings to attribute culpability, e.g., hate crimes. AI entities do not have feelings and hence cannot be held liable for these offences. However, a majority of offences do not require feelings to attribute culpability, and motive is not a necessary precondition for liability. Hence, it is argued that AI can form the mens rea for most offences.

However, this attribution of mens rea is challenged by the “Chinese Room” argument. Even if AI has sensory receptors which provide it data that can be processed internally, can it be said that the AI actually comprehends what is being processed? This argument can be explained with the help of the following analogy:

There is an English speaker locked in a room who does not know Mandarin, equipped with a symbol-processing rule book written in English. Mandarin speakers slide a note in Mandarin under the door, and the English speaker, by mechanically following the rule book, sends back a reply in Mandarin. The Mandarin speakers outside conclude that the person in the room knows Mandarin, when in fact he understands neither their note nor his own reply; he has merely followed the program and generated a response. Analogically, it is argued that an AI follows a program written in a computing language without actually understanding its inputs or its own replies, which can take the form of actions.
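A minimal sketch in Python, with an invented rule book standing in for the symbol-processing program, may help make the analogy concrete: the program below answers Mandarin notes convincingly, yet the only thing it ever consults is a symbol-matching table; nothing in it corresponds to understanding.

```python
# Illustrative sketch only: a "Chinese Room" style symbol processor.
# The rule book is a hypothetical lookup table; the program produces
# fluent-looking replies while storing nothing that could count as an
# understanding of what the symbols mean.

RULE_BOOK = {                  # hypothetical Mandarin note -> scripted reply
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "天气很好。",
}

def room_reply(note):
    """Match the incoming symbols against the rule book and return the
    prescribed reply; no meaning is ever represented or consulted."""
    return RULE_BOOK.get(note, "请再说一遍。")  # default: "please say it again"

print(room_reply("你好吗？"))  # looks like comprehension; it is pure lookup
```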

AI recognizes particular situations and responds either as it has been programmed to or as it has learnt from experience and observation. John Searle argues that AI, after recognizing a situation, merely replicates the behaviour of those who have been in the same situation, or responds mechanically according to the rules, without comprehending the meaning of its actions. Owing to this incapability of AI to comprehend the meaning, and hence the consequences, of its actions, it is argued that AI cannot fulfil the mens rea standards requisite for criminal culpability. This argument remains the subject of an inconclusive and highly controversial debate; hence, mens rea cannot be indisputably attributed to AI.

Possibility Of Attributing Actus Reus To AI

Actus reus can be attributed to an AI if the act or omission committed by it is voluntary and it was in control of its mechanism, with the freedom to move its parts.

Assuming, arguendo, that AI can form the requisite mens rea, the next question which arises is whether punishment should be imposed on AI systems. The focus of the question shifts from “can we do it?” to “should we do it?” Hallevy argues that when both the mens rea and actus reus requirements are met, there is no reason to preclude criminal liability of the AI. While these elements are enough to make the AI eligible for criminal liability, it is imperative to consider whether there are affirmative benefits to such criminal punishment of AI and whether there are better, or at least feasible, alternatives to its imposition.


[1] Mireille Hildebrandt, Ambient Intelligence, Criminal Liability and Democracy, 2 Crim. L. & Philos. 163, 164-170 (2008).

[2] Id. at 166.

[3] Gabriel Hallevy, Dangerous Robots: Artificial Intelligence vs. Human Intelligence, Dangerous Ideas 205, 210-216 (2018).

[4] Wayne R. LaFave, Criminal Law 733-34 (Thomson, 2003).

[The author is a 2nd year student at NALSAR University of Law, Hyderabad.]