
Is Punishment for AI Justified?

- By Amishi Aggarwal


This is the second part of a two-part series. The first part can be accessed here.

Generally, punishment is justified on grounds such as deterrence, retribution, prevention and reformation. The broad reasons behind the imposition of criminal punishment are: firstly, the consequentialist benefits that punishment brings about, i.e. increasing the aggregate good in society (e.g. by reducing crime through deterrence, or through incapacitation of the wrongdoer via prevention or reformation); secondly, the retributive reasons, whereby offenders are punished because they are deemed to deserve a punishment proportionate to their culpable actions; and thirdly, the expressive reasons, whereby punishment purports to communicate society’s commitment to certain core social and legal values and to express its condemnation of the culpable act.

Owing to the paucity of space and the almost universal disapproval of punishment based solely on retributive or expressive reasons, the author shall primarily deal with the consequentialist reasons for punishment.

The paramount purpose of punishment is to reduce criminal activity by creating deterrence.[1] Deterrence discourages wrongful acts by instilling a fear of punishment.[2] As Jeremy Bentham originally argued (Stafford, p. 119), contemporary deterrence rests on the premise that crime can be deterred by increasing the certainty and severity of legal punishment, because humans are motivated to avoid pain. AI, however, is not capable of recognizing the similarity of its potential choices and actions to those of others who have been punished for their wrong choices and actions (Abbott, pp. 369-370), and without this recognition punishment cannot deter. Given AI’s unresponsiveness to feelings such as fear and pain, deterrence as a justification for punishment plainly does not stand. Hence, punishment of AI fails to produce offence-specific deterrence against AI.

It has been argued that although punishment of AI cannot produce offence-specific deterrence, since AI is generally not designed to be sensitive to criminal-law sanctions, it can create general deterrence (Abbott, p. 356) against developers, users and programmers, giving them incentives to avoid creating AI that can cause egregious harm without any justification. However, this argument assumes that it was foreseeable, at the time the AI was created, that it might cause such harm, which is not the case.

Another justification for punishing AI is victim vindication and psychological satisfaction. Christina Mulligan, for instance, argues that “taking revenge against wrongdoing robots specifically may be necessary to create psychological satisfaction in those whom robots harm” (Mulligan, p. 580). It is further argued that punishment represents official condemnation of the wrongful act and reaffirms core legal and social values, along with the interests and rights of the victim. However, there are grave reservations against justifying punishment solely on these grounds. Punishing AI to placate those who want retaliation for AI-generated harms would be akin to giving way to mob justice. The mere fact that victims desire the punishment does not render it justified.

Rehabilitation is the most widely accepted purpose of criminal punishment; it involves psychological approaches to re-educate offenders and re-integrate them into society. In the context of AI, rehabilitation may mean re-programming the AI so that it does not commit a crime again. However, it may be practically impossible to identify what exactly to re-programme. Hence, rehabilitation cannot be achieved in the context of AI.

It is also argued that punishment of AI is justified on the ground of incapacitation: the AI is incapacitated from committing a crime by switching it off or not allowing it to function for a particular period (which is akin to imprisonment for humans).[3] Here, punishment adds to the aggregate benefit by not letting the AI commit any crime during the period of punishment, yielding a consequentialist argument for punishment. However, this kind of punishment is justified only where the perpetrator is so dangerous that there is no alternative except ‘removing’ the perpetrator from society for a period of time. The possibility of AI fulfilling this ‘sufficiently dangerous’ criterion may be very remote, and hence punishment of AI would only result in harm to its developers or users.

Another purpose of criminal punishment is restitution, under which the convict must compensate the affected party, usually in the form of money. In the context of crimes committed by AI entities, Gabriel Hallevy argues that compensation for the victims can be generated using the labour of the AI: the AI can be put to a use that generates money to compensate the victim. However, the user may be in a better position to earn more from the use of the AI, and hence better placed to compensate the victim as the law requires. On this approach, it would ultimately be the individual who pays the compensation; yet he may still generate profits alongside the money to be paid as compensation, benefitting overall and mitigating the loss caused to him by the irreducible AI crime. If, instead, the AI is taken away from the owner for some time to generate the money needed to compensate for the harm caused, it may produce less money overall, depriving the owner of the benefits he could have generated along with the compensation.

In sum, the problem with punishing AI is that it is not self-conscious (at least in the philosophical sense). An AI entity does not have any interests or ambitions of its own,[4] and is therefore indifferent to any punishment imposed on it. Such treatment does not even qualify as punishment under Hart’s definition, because no interests are adversely affected. AI does not exist on its own and is often owned by someone, and an AI that is de-programmed or destroyed because of its wrongful act will often have involved considerable material and intellectual resources in its making. Destroying, de-programming or even switching off the AI for a period (which, Gabriel Hallevy argues, would amount to imprisonment in the context of AI) has no effect on the AI itself, while it may cause immense loss to its owner. Hence, the real harm of directly punishing AI falls on its developer or owner, placing a disproportionate burden on him or her.

It can be argued that most forms of punishment have collateral consequences; the conviction of an accused, for instance, affects his family and other dependants. However, in the case of humans, conviction at least carries consequentialist benefits for society and for the convicted individual himself, as it may involve rehabilitation of the individual or general deterrence against the crime, which is not the case with AI systems.

Hence, destroying an AI (unless its uncontrollable or extremely dangerous nature makes it exigent to do so) or imposing any kind of punishment on the AI itself is a blunt remedy that is likely to harm the individual who owns it without having much impact on the AI itself. Therefore, punishing AI would not be justified under current penal jurisprudence.

Feasible Alternatives

Various scholars have suggested pragmatic alternatives to directly imposing criminal liability on AI. These alternatives mainly focus on compensating the victims, though it is acknowledged that mere compensation may not suffice in cases involving unusually dangerous crimes, where someone needs to be held accountable.

It has been recommended that there be a compulsory registration system for all AI systems, with a “Responsible Person” designated for each AI, who could be its manufacturer, owner or developer depending on the nature of the AI. Since vicarious criminal liability is not accepted in most jurisdictions, the only responsibility that can be placed on this Responsible Person is that of compensating the victims of crimes committed by the AI for which he is responsible. This could be justified on the ground that there is a duty owed to society at large to provide special assurances that certain especially serious risks will be mitigated as far as possible (Duff, p. 170).

It has also been recommended that these Responsible Persons be obligated to pay a mandatory fee to register the AI, and that this consolidated fund be used to compensate the victims of AI crimes, since offences committed by autonomous AI on its own, without any human involvement, negligence or foreseeability, are very few.

Some researchers have suggested imposing strict liability on the programmers or users of AI that commits such a crime. However, strict liability amounts to unjustly punishing the innocent, making them liable for acts they could not even have foreseen. Moreover, imposing strict liability would stifle innovation and beneficial commercial activity, as developers would be discouraged from designing AI entities that act autonomously. Yet crimes cannot be allowed to go unaccounted for merely because criminal liability may pose a barrier to innovation. Strict liability is justifiable as a last resort in cases involving unusually dangerous activities, but it cannot reasonably be said that the use of AI, at least in most cases, qualifies as unusually dangerous. Most AI crimes can be accounted for by compensating the victims.

Conclusion

Notwithstanding the Chinese Room Argument, the attribution of mens rea and actus reus to AI systems is widely accepted among scholars. The primary problem with imposing criminal liability on AI arises from its indifference to it. Imposing criminal liability on AI would not produce any deterrent or rehabilitative effect, though it may act as a preventive measure. Moreover, convicting AI may result in a spillover effect: the negative ramifications of criminal liability would have to be borne by its owner, who would suffer the loss caused by such punishment while it has no effect on the AI per se. The existence of feasible alternatives, which have substantial benefits over direct criminal punishment of AI, discourages such imposition of criminal liability.

Imposing criminal liability on AI raises consequentialist questions that cannot be answered well in light of AI’s unresponsiveness to punishment. To be sensitive to censure, a subject needs to be conscious of itself, and such consciousness has not yet emerged in AI entities.

AI, as currently envisioned, is not suitable for direct criminal punishment because punishing it fails to meet the objectives of criminal punishment. AI entities must possess consciousness, the ability to apprehend the lawfulness of their actions, and the means to guide themselves by the law, along with mens rea and actus reus, before the direct imposition of criminal punishment on them becomes suitable. Such AI systems have not been built yet; however, the realm of science and technology is expanding rapidly, and these systems may not be very far from realisation.

[The author is a 2nd year student at NALSAR University of Law, Hyderabad.]


[1] Zachary Hoskins, Deterrent Punishment and Respect for Persons, 8 Ohio St. J. Crim. L. 369, 370 (2010).

[2] Id.

[3] Hallevy, supra note 3, at 218.

[4] Hildebrandt, supra note 1, at 169.