The Ghost in the Machine: Navigating the Crisis of “Causative Authorship” under Section 2(d) of the Copyright Act 1957

Authored By: Susrita Pal

University of Mumbai, Thane Sub-Campus

For nearly seventy years, India’s Copyright Act of 1957 has stood as a guardian of the human mind. Its core belief is simple: the law should reward that unique “spark” of human creativity.

This idea was set in stone by the Supreme Court in the landmark Eastern Book Company v. D.B. Modak case. It was a turning point that moved us away from the old “sweat of the brow” rule, where just working hard was enough, toward a more sophisticated “modicum of creativity” standard. In essence, the court decided that for something to be copyrighted, it needs a tiny bit of creative soul, and that soul must come from a human.

But here we are in February 2026, and things have changed. The rise of Generative AI has essentially pulled “creation” away from “consciousness.” When an LLM or an image generator spits out a masterpiece, it shakes the very foundation of Section 2(d) of the Act. We are now seeing massive friction between a law written for humans and the reality of machine-made art. My argument is that we need to stop obsessing over “biological authorship” and start looking at “causative authorship”: recognizing the human behind the prompt as the real conductor of the machine.

The biggest legal wall we are hitting is Section 2(d)(vi), which defines the author of a computer-generated work as the person who “causes the work to be created.” Back in the 1990s, when this clause was drafted, computers were just “dumb tools”, essentially digital pens, and the human made every single choice. Today, AI makes its own probabilistic decisions on lighting, syntax, and style, and that breaks the old chain of command. If the AI is doing the “creative heavy lifting,” traditional logic (as seen in the U.S. case Thaler v. Perlmutter) suggests the work has no human author and belongs to everyone: the public domain.

India first really felt this tension during the RAGHAV AI (Suryast) episode, in which the Copyright Office granted co-authorship to the AI before quickly backpedalling and withdrawing it. This flip-flop is exactly what is being fought over right now in the Delhi High Court in ANI Media v. OpenAI, where the court is trying to decide whether training a model on a media house’s content is mere “Fair Dealing” or an outright appropriation of its rights.

These cases show that our courts are exhausted, trying to jam 21st-century tech into a mid-20th-century law. This refusal to acknowledge AI-assisted work has created what I call a “Legitimacy Gap”—a space where the law completely ignores the human grit and effort it takes to steer these complex machines.

To move past the rigid “Man vs. Machine” debate, we need to embrace a framework centred on Significant Human Input (SHI). This isn’t just a theory; it’s built on three very real pillars of legal logic that reflect how we interact with technology today.

  1. The “Liability-Right” Symmetry (The Equity Argument)

The most striking inconsistency in the current Indian legal regime is found in the IT (Amendment) Rules, 2026. These rules impose strict obligations on the “deployer” or “user” of AI to ensure that Synthetically Generated Information (SGI) does not violate public order or individual rights.

The Argument: Under the Principle of Equity, if the legal system identifies a human prompter as the Subject of Liability (i.e., they are “human enough” to be sued or prosecuted for an AI’s harmful output), it is legally incoherent to deny that same human the status of a Subject of Rights (i.e., they are not “human enough” to own the copyright).

If a prompter exercises enough “control” over the AI to be held responsible for a defamatory output, that same “control” should be viewed as the causative force required for authorship under Section 2(d)(vi).

We cannot decouple responsibility from ownership without creating a lopsided legal landscape that punishes innovation while refusing to protect it.

  2. The “Test of Creative Proximity”

In Eastern Book Company v. D.B. Modak, the Supreme Court emphasized that creativity is a matter of degree. I propose that we evaluate AI works through a “Test of Creative Proximity.”

  a. Distant Proximity (Public Domain): If a user provides a generic, single-sentence prompt (e.g., “Write a story about a warrior”), the AI makes the vast majority of “creative choices.” Here, the human is too distant from the final expression to be called an author.

  b. Close Proximity (Copyrightable): If a user engages in “Prompt Engineering”—providing 500-word constraints, stylistic parameters, iterative feedback, and post-generation edits—the human is in “Close Proximity” to the output. In this scenario, the AI is not the creator; it is the execution engine. The “modicum of creativity” resides in the human’s ability to refine the machine’s potential into a specific, intended result. Denying protection here would be equivalent to denying a photographer copyright because the camera’s internal software automatically adjusted the exposure and focus.

  3. The Analogy of the “Intellectual Orchestrator”

We must also look at Section 2(o), which brings “compilations” within the definition of literary works. In traditional law, a human who selects 50 public-domain poems and arranges them into a new book receives copyright for the selection and arrangement. When a human prompter generates 100 variations of an image and selects the one that perfectly matches their artistic vision, they are performing an act of Selection and Curation. The Bombay High Court’s stance in Arijit Singh v. Codible Ventures LLP (2024) suggests that the law must protect the “human core” of a creation. By recognizing the prompter as an Intellectual Orchestrator, we align the Copyright Act with modern creative practices, ensuring that the “Skill and Judgment” of the director is rewarded, even if the “performers” are lines of code.

The journey through the corridors of the Copyright Act, 1957, and the evolving landscape of 2026 brings us to a singular, uncomfortable realization: our legal system is suffering from a “Humanity Bias” that no longer serves the interests of innovation. For decades, we have clung to the notion that creativity is a uniquely biological phenomenon—a “divine spark” that exists only within the synaptic firings of the human brain. This view, championed in the era of Eastern Book Company v. D.B. Modak, served us well when computers were merely glorified typewriters.

However, we are now in an era where the “hand” that draws is made of code, and the “eye” that perceives is a neural network. If we continue to insist on “Biological Authorship,” we are effectively sentencing much of future human creativity to the public domain. This is not just a legal technicality; it is an economic and cultural crisis. When we deny copyright to an artist who uses AI to manifest a complex vision, we are not “protecting human dignity”; we are devaluing the human intent that drove the machine.

As analysed in Part 3, the most glaring logical flaw in our current framework is the disconnect between the IT (Amendment) Rules, 2026, and the Copyright Act. We have created a “Schizophrenic Jurisprudence” in which the State treats a human prompter as an “Author” for the purposes of punishment, but as a “non-entity” for the purposes of protection. If a prompter directs an AI to create a work that happens to infringe on another’s privacy or reputation, the 2026 Rules are swift to strip away “safe harbour” protections and hold the human accountable.

The law recognizes the human’s “agency” in causing the harm. Why, then, do we refuse to recognize that same agency when the output is a masterpiece? To maintain this paradox is to violate the fundamental Principle of Equity. A legal system that only recognizes a human’s presence in the “room of shadows” (liability) but ignores them in the “room of light” (ownership) is a system that has lost its moral compass. Authorship should be granted if the human can demonstrate a “chain of creative command.” This includes the documentation of iterative prompts, the setting of specific seeds or weights, and the post-generation curation that transforms a raw output into a finished expression.

We must protect the sanctity of the public domain by excluding “low-effort” generations. If a user simply types “a beautiful landscape” and hits enter, they have not “caused” the creation in a meaningful sense; the AI’s training data has. A compelling statutory parallel can be drawn from the definition of authorship in cinematograph films.

Under Section 2(d)(v), the ‘Author’ is the Producer—the entity that makes the ‘arrangements necessary’ for the work. This suggests that Indian law already recognizes a form of ‘Managerial Authorship,’ where the person who provides the vision and resources is the owner, even if they do not physically execute every frame. By this logic, the AI Prompter acts as a ‘Digital Producer’; by providing the prompt, the seed, and the parameters, they make the ‘arrangements necessary’ for the output to exist, thus satisfying the causative requirement of Section 2(d)(vi).

While the SHI test is a vital immediate fix, the long-term solution may lie in a sui generis (unique) category of rights.

Perhaps a “Digital Authorship” right that lasts for a shorter duration—say, 20 years instead of the standard “life plus 60”—would balance the need for incentive with the need to keep the public domain healthy. This would prevent the “Copyrighting of Everything” by massive AI firms while ensuring that independent creators can still monetize their AI-assisted works.

In conclusion, the “Ghost in the Machine” is not the AI; it is the human intent that survives through the algorithm. The law must stop looking at the “tools” and start looking at the “architects.” Whether we are looking at the Delhi High Court’s deliberations in ANI v. OpenAI or the Bombay High Court’s protection of personality in Arijit Singh, the message from the judiciary is clear: The human must remain at the centre of the law.

We are at a crossroads. We can either retreat into a defensive, outdated human-centrism that stifles our digital economy, or we can embrace “Causative Authorship.” By doing the latter, we ensure that the Indian Copyright Act remains a living document—one that protects the “modicum of creativity” wherever it may be found, even if it is channelled through a prompt. The future of Indian IPR depends on our ability to see the human hand in the digital brushstroke.

BIBLIOGRAPHY

Statutes and Regulations

  1. The Copyright Act, 1957, No. 14, Acts of Parliament, 1957 (India).
  2. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, Gazette of India, pt. II sec. 3(i) (Feb. 11, 2026).

Indian Case Law

  1. ANI Media Pvt. Ltd. v. OpenAI OPCO LLC, CS(COMM) 1028/2024, (Del. H.C. Nov. 19, 2024) (India).
  2. Ankit Sahni v. Registrar of Copyrights (RAGHAV AI/Suryast Case), (2023) (Unreported/Pending Review) (India).
  3. Arijit Singh v. Codible Ventures LLP, COM IPR SUIT (L) No. 23443 of 2024, (Bom. H.C. July 26, 2024) (India).
  4. Eastern Book Company v. D.B. Modak and Anr., AIR 2008 SC 809 (India).

International Case Law-

  1. Thaler v. Perlmutter, 687 F. Supp. 3d 140 (D.D.C. 2023).

Institutional Reports

  1. Dep’t for Promotion of Indus. & Internal Trade, Ministry of Com. & Indus., Working Paper on Generative AI and Copyright (Working Paper, Dec. 2025) (India).

Sites Referred

India Code: https://www.indiacode.nic.in.

SCC Online: https://www.scconline.com.

Indian Kanoon: https://indiankanoon.org.

MeitY: https://www.meity.gov.in.
