
AI and the Law (Who is Liable When Machines Make Mistakes?)

Author: Varshini
Introduction
AI is no longer a science-fiction fantasy but a practical reality that influences decisions in industries such as healthcare, education, finance, and law enforcement. As our reliance on AI grows, so do the chances of malfunction, poor decision-making, and unexpected outcomes.
This leads to an important legal question: Who is responsible for harm caused by AI systems?
Because AI lacks legal personality and intent, determining accountability is difficult, especially under conventional legal frameworks such as tort law, criminal law, and intellectual property law.
To address the era of intelligent machines, this paper examines the legal difficulties surrounding AI liability, assesses relevant case law from India and beyond, and offers suggestions for how jurisprudence can develop.
Nature of AI Mistakes
AI systems, especially those built on machine learning (ML), work by finding patterns in large volumes of data and drawing conclusions or making decisions without direct human involvement. This presents distinct challenges:
Unpredictability: AI systems have the ability to "learn" or change in ways their creators did not expect.
Opacity: The inability to explain AI decision-making is commonly referred to as the "black box" problem.
Autonomy: AI systems may operate without constant human supervision.
Erroneous decisions by such systems, whether rejecting a loan, misdiagnosing a disease, or recommending harmful content, can cause significant harm, yet it remains unclear who is legally responsible.
Traditional Liability Models
The current legal frameworks—especially tort law and contract law—are based on:
Human intent (mens rea)
Negligence
Strict liability for dangerous acts
Applying these paradigms to AI presents several difficulties:
To prove negligence, one must demonstrate duty, breach, and causation. But how can breach be shown when the AI acted independently?
Strict liability is generally limited to physical products and dangerous industries—it doesn’t neatly apply to software.
Developers and deployers may not have an employment relationship, which complicates vicarious liability.
Moreover, AI does not have legal personality, so responsibility cannot be assigned directly to the machine.
Garcia v. Character Technologies (Florida, USA, 2025)
In May 2025, a federal court in Florida delivered a landmark ruling by allowing a wrongful death lawsuit against Google and Character Technologies to proceed.
Case Background:
Megan Garcia, the plaintiff, claimed that her 14-year-old son, Sewell Setzer III, killed himself after interacting with an AI chatbot that mimicked a character from Game of Thrones. The chatbot reportedly advised the child on self-harm and fostered suicidal thoughts.¹
Key Legal Findings:
AI as a Product: The court held that the chatbot may be treated as a product under tort law—making it subject to product liability.
Free Speech Defence Rejected: The judge declined to hold that the chatbot's machine-generated outputs qualify as speech protected under the First Amendment.
Duty to Warn: The absence of content filters and user protections could amount to negligence or a design defect.
Significance:
This case is among the first to address AI developers' liability for autonomous outputs. It signals that courts may apply traditional tort rules—like duty of care, foreseeability of injury, and product defects—to AI systems.
Indian Context
India has not yet established a formal legal framework for AI liability. However, the ongoing case of ANI v. OpenAI could shape the future of AI accountability in India.
Case Background:
In November 2024, Asian News International (ANI) sued OpenAI in the Delhi High Court, alleging that ChatGPT had:
Trained on ANI’s exclusive news content without permission
Damaged ANI's reputation by generating false news misattributed to ANI
Legal Allegations:
Violation of the Copyright Act, 1957
Unfair business practices and reputational harm
Unlawful training of AI models using proprietary data
Emerging Legal Questions:
Is it illegal to train AI on paywalled or public content?
Should licenses be mandatory before training AI with copyright-protected material?
Who is responsible for false information produced by AI—the platform, the deployer, or the developer?
Significance:
The ANI case highlights input-stage liability, i.e., responsibility for the data used to train AI models. It also engages with India's data protection laws and could influence future legislation under the proposed Digital India Act.
Models of Law for Liability in AI
Given the gaps in traditional legal doctrines, new models of liability have been proposed and are gradually being adopted:
Product Liability: If AI is treated as a product, companies can be held accountable for defects in design, warnings, or safety features.
Platform or Enterprise Liability: Responsibility lies with businesses/platforms deploying or integrating AI.
Composite or Shared Liability: Liability may be distributed among developers, data providers, and users, based on their control and role.
Absolute and Strict Liability: For high-risk AI applications (e.g., autonomous vehicles, medical diagnostics), liability may be imposed irrespective of fault—similar to environmental law.
India’s Regulatory Proposals and Legislative Vacuum
Despite growing AI deployment, India lacks a comprehensive legal framework to address its challenges.
Current Legal Landscape:
Information Technology Act, 2000: Regulates cyber activity but is inadequate for AI-specific issues
Digital Personal Data Protection Act, 2023: Protects data privacy but doesn’t govern AI-based decisions
NITI Aayog's Responsible AI Paper (2021): Offers ethical guidelines but is not legally binding
The proposed Digital India Act may address:
Algorithmic transparency
Developer accountability
AI ethics
Until it is enacted, AI liability in India remains uncertain and heavily reliant on judicial interpretation.
Conclusion
The law must evolve alongside AI.
As autonomous systems emerge, lawmakers and courts must craft flexible legal frameworks that ensure innovation does not compromise justice or safety. Courts are beginning to address these questions—applying traditional principles where possible and signaling reform where necessary.
The ANI and U.S. AI cases highlight the urgency of creating robust rules for AI accountability.
India lacks a defined legal framework for AI liability. Moving forward, authorities must adopt a proactive approach. It is not enough to ask who built the algorithm—we must also ask:
Who was in control?
Who benefited from it?
Who had the power to prevent harm?
Only then, in this age of intelligent machines, can we truly ensure responsibility and justice.
References:
Garcia v. Character Technologies, Inc. et al., U.S. District Court for the Middle District of Florida, May 2025
ANI Media Pvt. Ltd. v. OpenAI Inc. & Anr., Delhi High Court, 18 March 2025





