Legal Personhood of Artificial Intelligence and the Liability Argument

Meera Patel & Mohd Imran

School of Law & Constitutional Studies, Shobhit Institute of Engineering & Technology (Deemed to be University), Meerut (UP), India

DOI: https://doi.org/10.51584/IJRIAS.2025.100900040

Received: 26 September 2025; Accepted: 02 October 2025; Published: 12 October 2025

ABSTRACT

Today, Artificial Intelligence-enabled computers, devices, bots, and robots are far more sophisticated, with enhanced capabilities such as machine learning and the potential for deep learning, built on patterns similar to the neural networks in human brains, and it is claimed that Artificial Intelligence far exceeds human intelligence in some specific tasks, especially those involving computation. The use of Artificial Intelligence has increased significantly in most sectors of the economy: industrial robots are commonly used in manufacturing units; skilled robots render services in hospitals and restaurants; sophisticated technology is used in the medical and healthcare sector, where Artificial Intelligence-enabled robots or machines perform surgeries; significant financial trading is carried out using Artificial Intelligence; the music industry is witnessing music created by Artificial Intelligence; and weaponry is being enabled with Artificial Intelligence. Lawyers and law firms in some developed nations use Artificial Intelligence-enabled products to review voluminous documents for purposes of discovery and/or due diligence, to file taxes, and for similar tasks, and several legal expert systems have been developed to assist lawyers and judges.

The rapid advancement of technology, including drones and driverless and autonomous vehicles, has transformed our world. However, legal systems around the world, particularly in India, are still catching up with these developments. The law needs to evolve to address the concept of legal personhood of A.I. and to determine liability for harm or injury caused by A.I.-enabled devices. In this paper, the authors attempt to explore some of the most important legal issues, including but not limited to personhood, autonomy, and agency, i.e. whether A.I. can be treated as a subject under the law. The authors also briefly discuss the question and nature of the liability to be imposed in the event the use of A.I. results in harm or injury to human beings.

Keywords: Artificial Intelligence, drones, driverless vehicles, legal personhood, liability, harm, injury, autonomy, agency

INTRODUCTION

Today, Artificial Intelligence (AI)-enabled computers, devices, bots, and robots have become significantly more advanced, with enhanced capabilities such as machine learning and the potential for deep learning, resembling the neural networks found in human brains. It is asserted that AI surpasses human intelligence in specific tasks, particularly those involving complex computations. The use of AI has seen substantial growth across various sectors of the economy. Industrial robots are now commonplace in manufacturing units, while skilled robots provide services in hospitals and restaurants. In the medical field, AI-enabled machines perform surgeries using sophisticated technology. AI plays a crucial role in financial trading and has even entered the music industry by creating music autonomously. Furthermore, AI is being integrated into weaponry and is also utilized by lawyers and law firms in developed nations for tasks like document review for discovery or due diligence, tax filing, and the development of legal expert systems to aid lawyers and judges. The prospect of self-programming, sentient artificial intelligence capable of replicating aspects of human consciousness marks the dawn of a new era. In this evolving landscape, the rights of AI intersect with the rights of natural persons, shaping a novel societal framework. The extensive adoption of AI across various sectors has prompted significant inquiries into the rights and responsibilities pertaining to AI entities and/or their owners. For AI to possess rights independently or be held liable for its actions, the first question that needs to be addressed is whether AI can be legally recognized as a "person" in the traditional and legal sense. This paper shall mainly examine the concept of 'Personhood' and thereafter very briefly touch upon the issue of 'liability'.

Definition Of Artificial Intelligence

There is no universally accepted or standardized definition of AI. Artificial Intelligence encompasses countless forms of advanced technology, and it can be understood in layman's terms to mean "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."[1] The Oxford English Dictionary defines Artificial Intelligence as "the capacity of computers or other machines to exhibit or simulate intelligent behaviour."[2] Black's Law Dictionary defines it as "software used to make computers and robots work better than humans. The systems are rule based or neural networks. It is used to help make new products, robotics, human language understanding, and computer vision."[3]

The scientific discourse around Artificial Intelligence began around the 1950s. The term "Artificial Intelligence" is believed to have been first coined in 1955.[4] Alan Turing proposed a test in 1950 to determine whether a machine is "intelligent," in which a human judge poses questions to two entities, a human being and a computer. The computer passes the test if the human judge is unable to differentiate the computer from the human.

There are several levels through which AI has evolved which are explained below:

(i).          Artificial Narrow Intelligence (ANI), i.e. Weak AI, can perform identifiable and particular tasks exceptionally well, such as mastering chess or operating self-driving vehicles.

(ii).       Artificial General Intelligence (AGI), also referred to as Strong AI, exhibits abilities comparable to humans, including the capacity to learn and execute a wide range of tasks in a way that mirrors human performance.

(iii).     Artificial Super Intelligence (ASI) vastly surpasses human cognitive abilities across practically every domain of cognition.[5]

A smart robot, as outlined by the European Parliament, is characterized by several key features. It can communicate and share data with its environment (inter-connectivity), analyze and interpret that information, and in some cases, learn on its own through experience and interaction. It typically has a minimal physical form, can adapt its behavior in response to its surroundings, and is not a living biological entity.[6]

The definition of AI remains fluid due to its ongoing evolution and adaptation to technological advancements. Its definition and understanding will need to be revisited owing to continuous changes in its capabilities, which are making it increasingly sophisticated and human-like. Any static definition offered today is likely to evolve, as AI's potential is still largely unexplored.

Personhood Of AI

The question of whether autonomous systems and artificial intelligence should be treated as legal persons has become contentious in recent years. The issue is whether autonomous systems or A.I., upon achieving self-awareness, sentience, rationality, and human-like abilities, should be recognized as persons under the law, and if so, whether they can enjoy rights as well as be held accountable for violations. This inquiry raises important ethical, moral, legal, and practical issues.

A.      The evolution of the concept of Personhood

For much of legal history, the legal system has regarded only human beings as natural legal persons. In earlier times, entities like animals, fetuses, and slaves were not granted legal personhood.[7] More concretely, the traits typically associated with being a natural person include the ability to reason logically, possess consciousness and self-awareness, use language, act with intention, exercise moral judgment, and demonstrate intelligence.[8]

Roman law laid the foundation for the distinction between persons (personae), things (res), and actions (actiones).[9] The term "persona" originally denoted a theatrical mask (from the Latin "personare," to sound through) and evolved to denote the roles or qualities of individuals. Over time, "persona" took on the modern notion of an individual human being distinct from roles or statuses.[10] In Roman legal texts like the Corpus Iuris Civilis (Body of Civil Law), "persona" was used to denote both roles and individuals, but it did not acquire the technical sense of "legal person" as understood today.[11] Slaves were considered personae as human beings, although their legal personhood was often debated.[12] The concept of companies as independent entities, capable of owning property and entering into contracts, can be traced back to Roman times. Initially, these entities were governmental bodies like states and cities that managed public assets. Over time, this structure evolved to include private enterprises as well.[13] The concept of "caput" was also used to denote legal standing, particularly relevant in hierarchical Roman society.[14] Thus, the contemporary notion of legal personhood, which holds that persons and entities have rights and obligations, was somewhat influenced by Roman law, especially the differentiation between persons and things.

It is believed that Gottfried Leibniz was the first to link (legal) personhood to the capacity to hold and bear rights, even though non-persons may occasionally be the objects of rights.[15] Immanuel Kant refers to the connection between a right and its associated duty as a "juridical relationship." According to him, individuals are uniquely positioned to hold both rights and responsibilities, making them the only valid participants in legal relationships.[16] G.W.F. Hegel, who was inspired by Kant, believed that "[P]ersonality alone confers a right to things, and consequently that personal right is in essence a right of things."[17] According to Austin, "Persons are invested with rights and subject to obligations, or, at least, are capable of both."[18] Austin recognized that legal procedures could create artificial entities and is known for introducing the term "legal person" in English, using it to describe what he called an "artificial person."[19] Professor Thomas Holland defined personhood thus: "Persons are the subjects of Duties as well as of Rights. In persons rights inhere, and against them rights are available."[20] In his perspective, a (natural) person is understood as a human being whom the law deems capable of possessing rights and responsibilities, while slaves are considered things, specifically as "Objects of Rights and Duties."

Over time, the concept of legal personhood was expanded to include various intangible entities like corporations and trusts. These entities have been granted some of the same rights and responsibilities as humans, marking a significant evolution in our understanding of legal rights and obligations.[21] These can be understood as juristic persons. According to Black's Law Dictionary (4th edn.), a person may include "artificial beings, as corporations, territorial corporations, and foreign corporations."[22] According to it, "Persons are of two kinds, natural and artificial. A natural person is a human being. Artificial persons include a collection or succession of natural persons forming a corporation; a collection of property to which the law attributes the capacity of having rights and duties."[23]

B.      Personhood under Indian Law

While it may be difficult to list the definition of 'person' under every legislation in India, the definitions provided in some of the statutes are as under:

(i).          The General Clauses Act, 1897 (“GC Act”) applies to all Central Acts and regulations, unless they conflict with the subject or context. Under Section 3(42), GC Act, the term "Person" encompasses "any company, association, or group of individuals, whether incorporated or not."

(ii).       The Bharatiya Nyaya Sanhita, 2023 ("BNS") uses the expressions 'man' and 'woman' under Section 19 and Section 35, BNS respectively,[24] in contradistinction to the word "person" under Section 26, BNS, which means a company, association, or body of persons, whether incorporated or not.[25]

(iii).     Several legislations like the Income Tax Act, 1961,[26] the Real Estate (Regulation and Development) Act, 2016,[27] and the Consumer Protection Act, 2019,[28] define "person" to include an 'individual', a Hindu undivided family (HUF), a company, a firm,[29] an association of persons (AOP) or a body of individuals (BOI), whether incorporated or not, a local or competent authority,[30] any corporation established by or under any central or state act or any company including a government company,[31] an artificial juridical person,[32] a Trust,[33] an LLP,[34] a Partnership,[35] a cooperative society,[36] any such other entity as the appropriate government may specify,[37] any agency, office or branch owned or controlled by any of the above persons mentioned in the sub-clauses,[38] and any other entity established under a statute, including a person resident outside India.[39] Some legislations use the word "person" in relation to any factory or premises to mean a person or occupier or his agent who has control over the affairs of the factory or premises, and, in relation to any substance, the person in possession of the substance.[40]

(iv).      Some legislations use the expression "person-in-charge", e.g. the Customs Act, 1962,[41] in relation to a thing; for example, the master of a vessel is the person-in-charge of the vessel. Here, the term is used in relation to inanimate things.

(v).        Some legislations use the expression "person interested", e.g. the National Company Law Tribunal Rules, 2016, to denote a "shareholder, creditor, employee, transferee company and other company concerned in relation to the term or context referred to in the relevant provisions of the act or any person aggrieved by any order or actions of any company or its direction."[42]

(vi).      Some legislations use the word “fit person” to mean “any person prepared to own the responsibility of a child for a specific purpose and such person is identified after inquiry made in this behalf and recognized as fit for said purpose by the committee … to receive and take care of the child.”[43]

Persons can generally be divided into two categories: natural and legal. Natural persons are human beings, whereas legal persons refer to juristic or juridical entities acknowledged by the law. Examples of legal persons include companies, firms, corporations, institutions, cooperative societies, Hindu Undivided Families (HUF), and Associations of Persons, among others. Notably, various Indian laws employ the phrase "every artificial juridical person," which could potentially include an AI entity if its juristic status were ever legally recognized. Unlike a natural person, a corporation or company cannot act independently and must operate through its agents.[44] Under the "directing mind" theory, a corporation or company can be held accountable for both civil and criminal wrongs it commits.[45]

Apart from the above known categories of "person", an idol is a juristic person in India, in the sense that it is capable of holding property and is liable to pay taxes through its agents, who are responsible for managing the property.[46] The will of the idol as to its location must also be respected.[47] A deity is allowed to sue as a pauper.[48] In Shiromani Gurudwara Prabandhak Committee, Amritsar v. Shri Som Nath Dass,[49] the Supreme Court observed that the term "juristic person" implies granting legal recognition to an entity as a person, even though it is not a natural human being. In essence, it refers to an artificially created entity that the law treats as a person. It was further observed that as society evolved, individual efforts alone were no longer sufficient to drive progress. To enable broader cooperation and foster social development, larger collective structures like corporations and companies came into being. These entities, known as "juristic persons," emerged out of the practical needs of human advancement. Relying upon Salmond's Jurisprudence, the Hon'ble Supreme Court observed that a legal person refers to anything other than a human being that the law treats as having a legal personality. This imaginative legal expansion, extending the idea of personhood beyond humans, stands as a remarkable achievement of legal thinking. It then rightly observed that "Legal persons, being the arbitrary creations of the law, may be of as many kinds as the law pleases." The Hon'ble Supreme Court explained the case of fictional personalities as under:

“14. Thus, it is well settled and confirmed by the authorities on jurisprudence and Courts of various countries that for a bigger thrust of socio-political-scientific development evolution of a fictional personality to be a juristic person became inevitable. This may be any entity, living inanimate, objects or things. …”

In the aforesaid case,[50] the Hon'ble Supreme Court recognized the following categories/classes of "artificial persons":

(i).          Legal entities such as corporations, formed by personifying groups or sequences of individuals. The individuals who collectively constitute the legal entity are referred to as its members.

(ii).       Entities where personification is applied not to a collective or series of individuals, but to an institution itself—such as churches, hospitals, universities, or libraries—can be granted legal personality. In such cases, the legal personality is attributed directly to the institution, rather than to any specific group of individuals associated with it.

(iii).     Cases where the object personified is a fund or estate designated for specific purposes, such as a charitable fund or a trust estate.

It was further held that when an idol is recognized as a juristic person, it is endowed with rights and obligations under the law, but it cannot act independently and operates through a designated representative.[51] Just as a guardian is appointed for a minor, a Shebait or manager is appointed to act on behalf of the idol.

In a recent pronouncement, Mohd. Salim v. State of Uttarakhand & Ors.,[52] the Uttarakhand High Court held that the river Ganga and its tributary, the Yamuna, are juristic/living entities. It is interesting to note that the judgment conferring 'personhood' status on the river Ganga came merely a week after New Zealand had granted legal personhood to the Whanganui River. However, the Supreme Court on 7th July, 2017 stayed the order passed by the Uttarakhand High Court, and the matter is pending adjudication.[53]

C. AI as Person

Giving AI and autonomous systems legal personhood would mean treating them like natural, legal, or juristic persons with rights and obligations, possibly even holding them responsible for their deeds. But this choice also raises important considerations about accountability, ownership, liability, safety, and ethics. At the moment, neither artificial intelligence nor robots are considered "legal subjects" in any nation's criminal or civil codes. Initial discussions have started to delve into several topics regarding robots and AI, such as the idea of giving them "electronic personhood." According to a report from the European Parliament, such a regulatory system might involve giving robots a "status of electronic persons with specific rights and obligations," which could include matters like taxation and social security contributions.[54]

Personhood is a legal fiction that gives people and things (associations, corporations, etc.) certain rights and obligations. Some might contend that (natural) personhood encompasses traits like self-awareness, rational thinking, and the capacity to feel emotions such as pain and pleasure, qualities that set persons apart from mere objects or animals.[55] Some authors advocate for the grant of legal personhood to A.I.[56] Some proponents argue that AI and autonomous systems exhibit qualities like intelligence, creativity, and autonomy that could justify granting them legal personhood.[57] The argument is that if society can grant legal personhood to corporations, artificial entities brought into existence by legal frameworks, then it should also consider whether AI and autonomous systems, which often display comparable or even higher levels of intelligence and autonomy, might deserve similar legal recognition.[58]

From a law and society viewpoint, the idea of a legal subject must be understood in connection with its broader context, lived experiences, and relationships within the legal system; it cannot be seen in isolation. Presently, most robots or AI software are regarded as the property of an owner, whether an individual or a corporation. However, these perspectives may need reevaluation as AI becomes more human-like. Humanoid robots endowed with AGI could closely resemble humans, prompting questions about whether "robot rights" should be established to govern legal and societal aspects. Granting a robot the status of a "legal subject" entails assigning rights and responsibilities.[59] Hilary Putnam was likely one of the earliest thinkers to propose that robots might possess psychological traits comparable to those of humans, a theory known as "psychological isomorphism."[60] From this perspective, he explored the possibility of granting civil rights to robots, ultimately suggesting that the idea of civil rights for such forms of artificial intelligence should not be dismissed outright. Nevertheless, merely displaying human-like traits such as expressive speech or other human-like behaviors does not automatically imply entitlement to rights and privileges, even though a robot is an independent body that functions by processing data using heuristics.[61]

In a recent turn of events, Hanson Robotics' humanoid robot Sophia, powered by AI, was granted citizenship in Saudi Arabia and acknowledged by its creator as a "living, emotional being." This clashes with Saudi Arabia's citizenship norms, under which citizenship is granted through birth, marriage, or naturalization under specific conditions, requirements that Sophia does not meet. Likewise, Japan's issuance of a residence permit to the chatbot Shibuya Mirai runs counter to the country's residency laws.[62]

The legal personhood of AI and autonomous systems is criticized for creating serious ethical and legal conundrums.[63] Among the main arguments against personhood are the entities' lack of traditional human characteristics such as moral agency, consciousness, and emotions, as well as the possibility that personhood status could unintentionally result in less legal responsibility for the entities' human creators or owners. Additionally, recognizing AI as a legal person poses a significant challenge, as it would demand a major transformation of existing legal and ethical systems, the outcomes of which are uncertain and difficult to predict. In the Indian context, idols are recognised as juristic persons, and expressions like "artificial juridical person" have been used in some legislations, which may pave the way for the applicability of such legislations to AI, if at all AI is recognized as a juristic person. Some authors are of the view that instead of conferring complete personhood on AI along with the full umbrella of rights, "a solution could be the creation of a status that assign to AI some rights, that should not be the same as the human, but that will be able to completely regulate the interactions between AI and human beings, safeguarding the rights of the latter."[64] In such cases, however, the human agent in control of the AI software may exercise rights for or on behalf of the AI, and such an agent may also be held accountable/liable for lapses.

Given the contentious nature of the issue involved, and the possibility of wide and irreversible ramifications of conferring "personhood" on AI, it is important that the regulatory framework around AI (including its ethics) is developed before conferring any recognition, if at all, on AI.

Liability Of AI

Granting AI and autonomous systems legal personhood or status would have significant ramifications for accountability and liability. The development and application of AI and autonomous systems could be affected, since operators and designers would have to be mindful of the potential legal repercussions. Conversely, if AI and autonomous systems were recognized merely as legal entities but denied legal personhood, the onus or liability would probably fall on their creators and operators.[65]

Several critical issues may arise regarding the assignment of blame or liability. Unlike traditional cases of human error, determining fault can be challenging when harm results from an AI system. The decision-making processes of advanced AI, particularly those employing deep learning, often lack full transparency and predictability. The involvement of multiple stakeholders further complicates the attribution of legal liability in AI-related incidents. These stakeholders typically include developers, manufacturers, owners, and users. Determining which party holds responsibility in the event of an incident is complex and cannot be standardized; it depends on the specifics of each case and evolves within the common law system as precedents are established. Nevertheless, current AI lacks the moral capability to comprehend and comply with legal instructions, and issues concerning individual liability will arise only if 'personhood' is conferred on A.I. Meanwhile, the humans managing and controlling a corporation responsible for developing AI software may be held accountable. Some authors, therefore, propose that alternative frameworks be developed:

(i).          Creating regulations and policies to guide the development and use of AI and autonomous systems, setting up liability structures to address accidents and damages caused by AI, and establishing industry-wide standards for the ethical development and implementation of AI.[66]

(ii).       Fixing legal accountability of individuals who create and implement artificial intelligence and self-governing systems.[67]

(iii).     Applying current product liability legislation to AI and autonomous systems as products.[68]

In the aforesaid frameworks, the nature of liability (penal, strict, or vicarious) would also need to be explored.

A.    Existing Approaches to Liability

Negligence and product liability currently serve as the primary routes for redress. Developers, manufacturers, or deployers of AI may be liable if harm is attributable to defective coding, insufficient testing, or inadequate warnings.[69] In the United States, courts and commentators have debated whether harms caused by autonomous vehicles fall within conventional product liability rules.[70] Yet, negligence-based liability requires foreseeability and intelligibility of errors, which are often absent in “black-box” systems reliant on deep learning.[71]

Product liability regimes also struggle with adaptive AI. Traditional doctrines assume that a product’s risks can be assessed at the time of release, but self-learning systems evolve through user interaction and exposure to new data.[72] This blurs the distinction between “defect” and expected learning behaviour, leaving consumers under-protected in high-stakes settings such as medical diagnostics or algorithmic credit scoring.[73]

Vicarious liability, by contrast, holds the owner, employer, or operator of AI responsible for harms it causes. This doctrine ensures victims face a solvent defendant, and has parallels in employer liability for employee conduct and principal-agent liability. However, end-users often lack meaningful control over or technical knowledge of AI systems, meaning liability may be unfairly shifted away from those with real capacity to manage risk, namely the developers and corporations.

B.     Strict Liability

Given these limitations, many scholars argue for strict liability regimes in relation to AI.[74] Under strict liability, the operator or developer is responsible for harm regardless of fault. This mirrors existing regimes for ultrahazardous activities such as handling explosives or nuclear energy, where risks are so high that fault-based standards are inadequate.

The rationale is twofold. First, those who profit from deploying AI should also bear the risks it generates (“risk-spreading”).[75] Second, strict liability avoids protracted disputes over negligence, delivering certainty and fairness to victims. For instance, if a fully autonomous car strikes a pedestrian, it is unreasonable to expect the victim to prove a coding error within the manufacturer’s proprietary system.[76]

However, unlimited strict liability may over-deter innovation, particularly for small developers.[77] One proposed compromise is to pair strict liability with mandatory insurance schemes, spreading the costs across the industry while guaranteeing compensation. The European Commission’s 2022 AI Liability Directive reflects this approach, emphasising victim protection and harmonisation across member states.[78]

C.    Electronic Personhood and Capped Liability

The European Parliament’s 2017 Resolution on Civil Law Rules on Robotics floated the creation of “electronic persons” for certain highly autonomous systems.[79] Such entities could be registered, insured, and held liable within capped limits. This model acknowledges AI’s functional autonomy while preventing liability gaps.

The key strength of this approach is conceptual clarity: electronic personhood provides a legal subject directly accountable for harm. Yet critics contend it risks creating a “liability shield” for corporations, enabling them to externalise responsibility onto electronic entities with no assets or moral agency.[80] Legal personhood for AI may obscure the accountability of human actors rather than enhance it.[81]

A workable compromise is a capped liability model: the AI system, as an electronic person, bears primary liability up to an insurance-backed cap, beyond which liability reverts to manufacturers or operators.[82] This layered structure avoids total immunity for developers while still granting AI a legally recognisable status.

D.    Hybrid Liability Models

Given the diversity of AI systems, hybrid liability regimes are emerging as the most realistic option.[83] Such models combine strict liability, vicarious liability, and capped electronic personhood depending on the context.

For example, consider a surgical AI robot in an Indian hospital:

(i)       The hospital may bear vicarious liability as the deploying institution.

(ii)    The manufacturer may be held strictly liable if harm results from design flaws.

(iii)  The AI system itself may carry capped liability through compulsory insurance.

Similarly, in autonomous finance, where trading algorithms cause market disruptions, primary liability may attach to the corporation deploying the system, but capped liability could be attributed to the algorithmic agent. Such sector-specific tailoring recognises that a one-size-fits-all liability model is unworkable.[84]

E.     Feasibility and Risks

Each liability model entails trade-offs. Strict liability maximises victim protection but may discourage innovation. Vicarious liability preserves human accountability but risks burdening uninformed end-users. Electronic personhood provides conceptual neatness but risks moral hazard.[85]

In India, the legal system has historically extended personhood to non-human entities such as idols and rivers,[86] suggesting a jurisprudential openness to electronic personhood. Yet, without robust regulatory infrastructure, prematurely conferring personhood on AI may create confusion rather than clarity.[87] A cautious, incremental approach—starting with insurance-backed strict liability regimes in high-risk sectors—may be the most prudent pathway.

Ultimately, any liability regime must satisfy three goals:

(i)       Certainty and fairness for victims;

(ii)    Accountability for human stakeholders; and

(iii)  Space for responsible innovation.[88]

CONCLUSION

This analysis has explored the historical development of the notion of personhood, the rapid progress in AI technology, and the ethical and legal dilemmas arising as AI becomes more autonomous and integrated into society. The assessment, informed by philosophical perspectives and the current state of AI capabilities, highlights the complex challenges of granting legal personhood to AI. Recognising AI as a "person" could have profound implications for accountability, property ownership, contractual capacity, and participation in legal processes.

The path toward recognising AI as a legal person encounters significant obstacles. The unpredictable behaviour of AI raises concerns about accountability and about the practicality of constructing a legal framework comprehensive enough to manage these intricacies. Moreover, extending personhood to entities beyond humans raises ethical considerations, underscoring the need for a cautious approach that respects societal norms and human values.

The survey of liability models demonstrates the limitations of existing doctrines. Negligence and product liability frameworks struggle with the opacity and adaptive behaviour of AI. Strict liability offers predictability and victim protection but may inhibit innovation if applied too broadly. Electronic personhood with capped liability provides conceptual neatness but risks creating a liability shield for corporations. Hybrid approaches, tailored by sector and risk level, appear most promising in balancing these competing goals.

As the field of AI continues to evolve, legal systems worldwide will need to adapt to address the challenges of “personhood” and “liability”. This may involve developing new regulatory frameworks, updating existing laws, or creating customised AI liability regimes. The goal should be to promote responsible AI development while ensuring adequate protection and recourse for those affected by AI-related incidents. Moving forward, the legal framework for AI personhood should focus on three critical aspects to ensure balanced progress. First, dynamic legal frameworks are needed that can adapt to rapid advancements in AI technology.[89] Second, given AI's global reach, achieving international consensus and legislation is crucial, involving comparative studies and efforts to harmonise regulations globally.[90] Third, as AI capabilities evolve, ethical frameworks are essential to guide the creation and deployment of AI systems in a manner that upholds human dignity and benefits society.[91]

For India, the jurisprudential tradition of recognizing idols and rivers as juristic persons demonstrates a willingness to extend legal personhood beyond human subjects.[92] Yet, extending such recognition to AI at this stage would risk creating more doctrinal confusion than clarity. The priority should instead be the adoption of clear, context-specific liability regimes, beginning with high-risk sectors such as autonomous transport, healthcare robotics, and financial algorithms.

Accordingly, three policy recommendations emerge:

(i)       Mandatory insurance-backed strict liability for high-risk AI applications. This would guarantee victim protection without requiring proof of negligence, while distributing costs across industry actors through insurance markets.[93]

(ii)    Sector-specific hybrid liability frameworks, whereby liability is shared between deployers, developers, and (in limited cases) electronic personhood models with capped liability. Such differentiation avoids over-regulation of low-risk AI while ensuring stringent oversight of high-risk systems.[94]

(iii)  Institutional and regulatory reforms, including the creation of a specialized AI liability tribunal in India to adjudicate complex cases, statutory duties of transparency for AI developers, and alignment of Indian standards with global frameworks such as the proposed EU AI Liability Directive.[95]



[1] ‘Personhood’ (Oxford Reference, 31 August 2011) https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095426960 accessed 23 March 2025.

[2] Ibid.

[3] ‘Artificial Intelligence’ (Free Law Dictionary) http://www.freelawdictionary.org/?s=artificial+intelligence accessed 23 March 2025.

[4] John McCarthy and others, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’ (2006) 27(4) AI Magazine 12.

[5] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (1st edn, Oxford University Press 2014).

[6] European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) P8_TA(2017)0051.

[7] Visa AJ Kurki, A Theory of Legal Personhood (Oxford University Press 2019) ch 2 https://doi.org/10.1093/oso/9780198844037.003.0002 accessed 24 June 2024.

[8] ‘Concept of Personhood’ (Center for Health Ethics, University of Missouri School of Medicine) https://medicine.missouri.edu/centers-institutes-labs/health-ethics/faq/personhood accessed 24 June 2024.

[9] Supra note 8.

[10] Ibid.

[11] Ibid.

[12] Ibid.

[13] Ibid.

[14] Ibid.

[15] Ibid.

[16] Ibid.

[17] Ibid.

[18] Ibid.

[19] Ibid.

[20] Ibid.

[21] Arghya Sen, ‘Artificial Intelligence and Autonomous Systems: A Legal Perspective on Granting Personhood and Implications of Such a Decision’ 4(1) DME Journal of Law 15–26, doi: 10.53361/dmejl.v4i01.03.

[22] Black’s Law Dictionary (rev 4th edn, West Publishing Co, St Paul, Minn 1968) https://heimatundrecht.de/sites/default/files/dokumente/Black%27sLaw4th.pdf accessed 24 June 2024.

[23] Ibid.

[24] Section 19, Bharatiya Nyaya Sanhita, 2023 (“The word "man" denotes a male human being of any age; the word "woman" denotes a female human being of any age.”). Indian Penal Code, 1860, defines ‘man’ and ‘woman’ under Section 10, IPC.

[25] Section 26, Bharatiya Nyaya Sanhita, 2023 (“The word "person" includes any Company or Association or body of persons, whether incorporated or not.”). Indian Penal Code, 1860 defines ‘person’ under Section 11, IPC.

[26] Section 2(31), Income Tax Act, 1961.

[27] Section 2(zg), The Real Estate (Regulation and Development) Act, 2016.

[28] Section 2(31), Consumer Protection Act, 2019.

[29] Section 2(24), Wildlife Protection Act, 1972.

[30] Rule 2(g), Noise Pollution Regulation and Control Rules, 2000.

[31] Section 2(q), Human Immunodeficiency Virus and Acquired Immune Deficiency Syndrome (Prevention and Control) Act, 2017.

[32] Supra notes 28 & 30. Section 2(s), Prevention of Money Laundering Act, 2002; Section 2(24), Prohibition of Benami Property Transactions Act, 1988; Section 3(23), Insolvency and Bankruptcy Code, 2016; Section 2(q), Human Immunodeficiency Virus and Acquired Immune Deficiency Syndrome (Prevention and Control) Act, 2017.

[33] Section 3(23), Insolvency and Bankruptcy Code, 2016.

[34] Ibid.

[35]Ibid.

[36] Supra notes 29 & 30.

[37] Supra note 29.

[38] Section 2(s), Prevention of Money Laundering Act, 2002.

[39] Supra note 34.

[40] Rule 2(2), Environmental Protection Rules, 1986.

[41] Section 2(31), Customs Act, 1962.

[42] Rule 2 (18), National Company Law Tribunal Rules, 2016.

[43] Section 2(28), Juvenile Justice Act, 2015.

[44] V.D. Mahajan, Jurisprudence And Legal Theory ,338 (Eastern Book Company: 2016).

[45] Tesco Supermarkets Ltd v Nattrass [1971] UKHL 1, [1972] AC 153.

[46] Yogendra Nath Nasker v. Commissioner of Income Tax, AIR 1969 SC 1089; Ram Jankijee Deities v. State of Bihar, (1999) 5 SCC 50; Shiromani Gurudwara Prabandhak Committee, Amritsar v. Shri. Som Nath Dass (2000) 4 SCC 146 : AIR 2000 SC 1421.

[47] Pramatha Nath Mullick v. Pradyumna Kumar Mullick and Anr., AIR 1925 PC 139.

[48] Moorti Shree Behari Ji v. Prem Dass, AIR 1972 Allahabad 287.

[49] Supra Note 48.

[50] Ibid.

[51] Ibid.

[52] Mohd. Salim v. State of Uttarakhand 2017 SCC Online Utt 367.

[53] The State of Uttarakhand & Ors. V. Mohd. Salim & Ors., SLP(Civil) 16879/2017, Supreme Court.

[54] Delvaux, M, Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), Committee on Legal Affairs, European Parliament (2017) http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A8-2017-0005+0+DOC+PDF+V0//EN accessed 23 June 2024.

[55] Flanigan, J, ‘Philosophical Methodology and Leadership Ethics’ (2017) Leadership 1–24 https://doi.org/10.1177/1742715017711823 accessed 24 June 2024.

[56] Martine Rothblatt, Virtually Human: The Promise—and the Peril—of Digital Immortality (St Martin’s Press 2014).

[57] Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press 2009) https://doi.org/10.1093/acprof:oso/9780195374049.001.0001.

[58] Ibid.

[59] Stamatis Karnouskos, The Interplay of Law, Robots and Society in an Artificial Intelligent Era, LLM Thesis, UMEA University (2017).

[60] H Putnam, ‘Machines or Artificially Created Life?’ (1964) 61(21) Journal of Philosophy.

[61] Marguerite E Gerstner, ‘Liability Issues with AI Software’ (1993) 33 Santa Clara L Rev 239.

[62] Ibid.

[63] Bryson, JJ, ‘Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics’ (2018) 20 Ethics Inf Technol 15–26 https://doi.org/10.1007/s10676-018-9448-6.

[64] Ibid.

[65] Supra note 23.

[66] Ibid.

[67] Ibid.

[68] Ibid.

[69] Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer 2013) 45–47.

[70] Bryant Walker Smith, ‘Automated Driving and Product Liability’ (2017) 2017 Mich St L Rev 1, 12.

[71] Thomas Burri and Fredrik von Bothmer, ‘The New EU Legislation on Artificial Intelligence: A Primer’ (SSRN, 21 April 2021) https://ssrn.com/abstract=3831424 accessed 30 September 2025.

[72] Hannah Yee-Fen Lim, Autonomous Vehicles and the Law: Technology, Algorithms and Ethics (Edward Elgar 2018) 63.

[73] Ryan Calo, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51 UC Davis L Rev 399, 421.

[74] Christiane Wendehorst, ‘Strict Liability for AI and Other Emerging Technologies’ (2020) 11 J Eur Tort L 150, 152.

[75] Matthew U Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29 Harv J L & Tech 353, 365.

[76] Supra Note 72.

[77] Supra Note 74.

[78] European Commission, Proposal for a Directive on Liability for Artificial Intelligence COM (2022) 496 final.

[79] European Parliament, Resolution on Civil Law Rules on Robotics (2015/2103(INL)) para 59.

[80] Andrea Bertolini, ‘Artificial Intelligence and Civil Liability’ (2019) 25 AI & Society 293, 297.

[81] Nathalie Smuha, ‘Beyond the Individual: Governing AI’s Societal Harm’ (2021) Internet Policy Review https://policyreview.info/articles/analysis/beyond-individual-governing-ais-societal-harm accessed 30 September 2025.

[82] Supra Note 80.

[83] Christopher Markou and Simon Deakin, Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (Hart 2020) 212.

[84] CERRE (M Peitz et al), Liability Rules for the Age of Artificial Intelligence (CERRE Report, March 2021) 28.

[85] Supra Note 73.

[86] Shiromani Gurudwara Prabandhak Committee v Som Nath Dass (2000) 4 SCC 146.

[87] Mohd Salim v State of Uttarakhand Writ Petition (PIL) No 126 of 2014 (Uttarakhand HC, 20 March 2017), stayed by Supreme Court, 7 July 2017.

[88] Supra Note 75.

[89] Jose Gabriel Carrasco Ramirez, ‘From Autonomy to Accountability: Envisioning AI’s Legal Personhood’, Applied Research in Artificial Intelligence and Cloud Computing 6(9) 23.

[90] Ibid.

[91] Ibid.

[92] Supra Notes 86 & 87.

[93] Supra Note 74.

[94] Supra Note 84.

[95] European Commission, Proposal for a Directive on Liability for Artificial Intelligence COM (2022) 496 final; Ryan Calo, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51 UC Davis L Rev 399, 430.