The Current State of AI Regulation in India
Technology & Digital Platform Law

Gagan Sharma · 14 min read

Timeline

May 2027

SDF obligations expected

  • Mandatory impact assessments and independent audits
  • Algorithmic fairness assessments become enforceable

Feb 2026

IT Amendment Rules 2026 (SGI)

  • SGI legally defined; 3-hour takedown mandate
  • Loss of Section 79 safe harbour for non-compliance

Nov 2025

DPDP Rules notified

  • Algorithmic fairness assessments for SDFs
  • Consent manager registration framework

Aug 2025

RBI FREE-AI Report

  • Seven principles, 26 recommendations across six pillars
  • Board-approved AI policies and bias audits proposed

Jun 2025

SEBI AI consultation paper

  • Six-pillar framework for market participants
  • Human-in-the-loop for robo-advisory and algo trading

Early 2025

IndiaAI Safety Institute

  • Technical standards and safety benchmarks for AI
  • Part of the IndiaAI Mission

Dec 2024

RBI AI expert committee formed

  • AI governance for financial sector
  • Human override capability over AI decisions proposed

Nov 2024

ANI v. OpenAI filed

  • First major Indian copyright-AI case at Delhi HC
  • Four key questions on AI training and fair dealing

Nov 2023

CCPA Dark Patterns Guidelines

  • 13 dark patterns identified under Consumer Protection Act
  • Applies to AI-driven personalisation and recommendations

Aug 2023

DPDP Act enacted

  • Data Fiduciary obligations for AI systems
  • Penalties up to Rs. 250 crore for non-compliance

Mar 2023

MeitY announces Digital India Act

  • Proposed replacement for the IT Act, 2000
  • Paused before 2024 elections; not yet revived

For the better part of the last decade, the Indian government's position on AI regulation was to not regulate it at all, preferring to let the industry grow and figure out the rules later. As recently as 2023, official policy statements were explicitly pro-growth and anti-regulation when it came to AI.

That position has changed, though not through a single dramatic piece of legislation like the EU's AI Act. What has happened instead is a series of moves across multiple fronts: new administrative rules with hard compliance timelines, existing criminal and data protection laws being applied to AI-specific harms, and institutional bodies being set up to develop standards and coordinate policy. None of these individually amount to a comprehensive AI law, but taken together, they add up to something that any company building or deploying AI in India needs to take seriously.

What follows is a practical breakdown of where things stand as of early 2026, organised around five areas that matter most if you're building or deploying AI products in India.

The IT Amendment Rules, 2026

Notified in February 2026, these are the most direct attempt by the Indian government to regulate AI-generated content. The rules amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and they introduce obligations that are specific, time-bound, and backed by real consequences.

What counts as AI-generated content now has a legal definition

The rules introduce the concept of Synthetically Generated Information (SGI). Rule 2(1)(wa) defines it as audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that it appears to be real, authentic, or true, and that depicts or portrays any individual or event in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or real-world event. This is the first time Indian law has formally defined this category. Until now, terms like "deepfake" had no legal meaning; regulators and courts were working without a shared vocabulary, and that gap has now been addressed.

Labelling and traceability requirements

Intermediaries offering SGI creation tools must now label all synthetically generated content with a marker that "ensures prominent visibility and is easily noticeable and adequately perceivable" or, for audio content, with a prominently prefixed audio disclosure. Beyond the visible label, the rules also require permanent provenance metadata to be embedded in the content, to the extent technically feasible, so that its origin can be traced. The idea is that even if a deepfake is downloaded, re-uploaded, and shared across platforms, the metadata trail should survive.
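The rules do not prescribe a metadata format, so the following is only an illustrative sketch of what a provenance record could look like: a JSON-style record carrying the generator name, a timestamp, and a hash of the content bytes, with an HMAC signature so tampering is detectable. Every field name and the signing scheme here are assumptions for illustration, not requirements drawn from the rules.

```python
import hashlib
import hmac
import json

def provenance_record(content: bytes, generator: str,
                      created_at: str, signing_key: bytes) -> dict:
    """Build an illustrative provenance record for a piece of synthetic media.

    Field names and the HMAC scheme are illustrative only; the 2026 rules
    require embedded provenance metadata but do not mandate any format.
    """
    record = {
        "generator": generator,    # tool that produced the content
        "created_at": created_at,  # ISO 8601 timestamp
        # Content hash ties the record to these exact bytes, so the
        # record remains verifiable after re-uploads of the same file.
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict, signing_key: bytes) -> bool:
    """Check both the signature and that the content bytes match the hash."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["sha256"] == hashlib.sha256(content).hexdigest())
```

In practice, real-world systems are converging on standards such as C2PA for this kind of signed provenance manifest; the point of the sketch is only that a hash-plus-signature record can survive cross-platform sharing as long as the underlying bytes do.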

Takedown timelines that will test operational readiness

This is the provision that has generated the most discussion. Under the amended Rule 3(1)(d), intermediaries are now required to take down unlawful content within 3 hours of receiving actual knowledge through a court order or a reasoned written intimation from an authorised government officer. This applies to all unlawful content, not just SGI, but its practical significance is most acute in the SGI context given the speed at which synthetic content can spread. For content involving intimate images, impersonation, or artificially morphed images, Rule 3(2)(b) requires action within 2 hours. Compare this with the 36-hour window under the earlier rules, and it becomes clear that this is not an incremental change but a fundamentally different compliance setup, requiring round-the-clock moderation teams and pre-approved escalation protocols.

The 3-hour clock starts from receipt of notification, not from when someone internally reads it. If you operate a platform with user-generated content, automated intake and triage systems are now effectively mandatory.
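As an illustration of the deadline arithmetic only (the rules prescribe outcomes, not tooling), an intake system might stamp each notice at receipt, derive a hard deadline from its category, and triage the queue by urgency. The category names and helper below are hypothetical; the 2- and 3-hour windows mirror Rules 3(2)(b) and 3(1)(d) as described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Takedown windows, measured from receipt of the notification.
TAKEDOWN_WINDOWS = {
    "intimate_imagery_or_impersonation": timedelta(hours=2),  # Rule 3(2)(b)
    "unlawful_content": timedelta(hours=3),                   # Rule 3(1)(d)
}

@dataclass
class TakedownNotice:
    notice_id: str
    category: str
    received_at: datetime  # stamped at receipt, not at internal review

    @property
    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOWS[self.category]

    def time_remaining(self, now: datetime) -> timedelta:
        return self.deadline - now

def triage(notices: list[TakedownNotice]) -> list[TakedownNotice]:
    """Return notices sorted by deadline, most urgent first."""
    return sorted(notices, key=lambda n: n.deadline)
```

The design point is the one made in the text: because the clock runs from receipt, the timestamp must be captured by the intake system itself, not by whoever first reads the notice.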

What happens if you miss the deadline

The consequence is loss of safe harbour under Section 79 of the IT Act, which is a far more serious outcome than any fine or warning. Without safe harbour protection, the platform is treated as the publisher of the content, meaning direct exposure to criminal prosecution, civil defamation claims, and regulatory action. For intermediaries, safe harbour is the foundational legal protection that makes their business model viable, and losing it threatens the very basis on which they operate.

For a detailed analysis of the IT Amendment Rules, including the full text of the key provisions and their operational implications, see: The IT Amendment Rules, 2026: What They Say and What They Mean for Your Business.

How Existing Laws Apply to AI

India's broader regulatory approach to AI is not built on AI-specific legislation. Instead, the government and regulators are applying existing laws to AI-related harms. The logic is that if an AI system causes the same type of harm that a human could cause, the existing legal framework should be capable of addressing it. Three statutes in particular are doing most of the heavy lifting.

Digital Personal Data Protection Act (DPDPA), 2023

Any entity that collects or processes personal data using AI systems is classified as a Data Fiduciary under the DPDPA. The obligations are familiar to anyone who has worked with data protection regimes: informed consent, purpose limitation, data minimisation, and the right to erasure. What's specific to AI is that larger entities, those classified as Significant Data Fiduciaries, will be required to conduct algorithmic fairness assessments, annual data protection impact assessments, and independent audits. The DPDP Rules were notified in November 2025, and these SDF-specific obligations are part of a phased rollout expected to come into force by May 2027. So the statutory basis and the implementing rules are both in place, but companies have a window to prepare before the enhanced obligations become enforceable.

Bharatiya Nyaya Sanhita (BNS), 2023

The BNS, enacted in 2023 to replace the Indian Penal Code and in force since July 2024, doesn't use the word "AI" anywhere, but its technology-neutral drafting means several provisions apply directly to AI-enabled offences. Cheating by personation covers voice cloning and identity fraud using AI-generated likenesses, the organised crime provisions can reach large-scale AI-driven fraud networks, and defamation applies regardless of whether the defamatory content was written by a person or generated by a model. For prosecutors, the BNS provides usable tools without needing to wait for AI-specific criminal law.

Consumer Protection Act, 2019

The Central Consumer Protection Authority has been active in using this law against manipulative design practices in digital services. In November 2023, the CCPA notified the Guidelines for Prevention and Regulation of Dark Patterns, 2023 under Section 18 of the Act, identifying 13 specific dark patterns including false urgency, basket sneaking, confirm shaming, forced action, subscription traps, drip pricing, and disguised advertisements. These guidelines apply to all platforms systematically offering goods or services in India, including e-commerce entities, advertisers, and service providers. The CCPA has already taken enforcement action under these guidelines, and in June 2025 followed up with an advisory requiring all e-commerce platforms to conduct self-audits within three months and submit self-declarations confirming their platforms are free of dark patterns. When these dark patterns are driven by AI — for example personalised pricing algorithms or manipulative recommendation systems — the same enforcement framework applies. Beyond dark patterns, the Act is also relevant to misleading advertising claims about AI capabilities and to algorithmic bias in consumer-facing services like lending, insurance underwriting, and healthcare recommendations.

The Intellectual Property Questions

Copyright and AI is probably the area where Indian law is least settled. The Copyright Act, 1957 recognises only natural persons as authors. The straightforward implication is that works generated entirely by AI, without meaningful human creative input, have no author and no copyright owner, and effectively fall into the public domain.

For businesses building on top of generative AI, this creates real uncertainty. If your product generates text, images, or code, do you own the output? Can a competitor freely copy it? The answer right now is unclear, and it's likely to stay that way until either the courts or the legislature provide clarity.

ANI v. OpenAI

The most important case in this space is ANI Media Pvt. Ltd. v. OpenAI Inc. (CS(COMM) 1028/2024), being heard by Justice Amit Bansal at the Delhi High Court. Filed in November 2024, the case has grown well beyond a bilateral dispute. The Digital News Publishers Association, the Federation of Indian Publishers, and the Indian Music Industry have all intervened, arguing that their members face similar risks from unauthorised use of copyrighted content for AI training. The Court has appointed two amici curiae and framed four key legal questions: whether storing copyrighted data for training amounts to infringement, whether generating user responses using that data constitutes infringement, whether such use qualifies as "fair dealing" under Section 52 of the Copyright Act, and whether Indian courts have jurisdiction given that OpenAI's servers are located in the US. No interim injunction has been granted so far, and the Court has observed that the matter is "largely academic" in nature, suggesting it does not see urgency for interim relief. However the case is decided, it will set the template for how copyright and AI interact in India for years.

Separately, the DPIIT has released a working paper examining the use of copyrighted material for AI training, and the Commerce Ministry has constituted a panel of eight experts to assess whether the Copyright Act is robust enough to address the challenges posed by AI. Together, these moves suggest that a policy position is being developed, and legislative intervention in the form of amendments to the Copyright Act is a real possibility in the near to medium term.

Institutional Architecture

The effectiveness of any regulatory framework ultimately depends on the institutions responsible for implementing and enforcing it. India has created a few AI-specific bodies and is relying on existing sectoral regulators to fill in the gaps.

IndiaAI Safety Institute (AISI)

Set up in early 2025 as part of the IndiaAI Mission, AISI's job is to develop technical standards, safety benchmarks, and testing protocols for AI systems deployed in India. It's modelled loosely on the UK's AI Safety Institute. The institute is still in its early stages, and it remains to be seen how much influence it will have on actual policy and enforcement, though the fact that India now has a dedicated body for AI safety evaluation is itself a significant institutional development.

AI Governance Group (AIGG)

The AIGG is an inter-ministerial body responsible for coordinating AI policy across government departments. In theory, it ensures that MeitY, the Commerce Ministry, the Health Ministry, and others are aligned on AI governance. In practice, inter-ministerial coordination is one of the harder problems in Indian governance, and it's too early to assess whether AIGG is making a meaningful difference.

Sectoral regulators

The RBI and SEBI have both moved to establish AI governance frameworks for their regulated entities, though each is at a different stage.

The RBI constituted an expert committee in December 2024, and in August 2025 released the FREE-AI Report (Framework for Responsible and Ethical Enablement of Artificial Intelligence). The report lays down seven guiding principles covering trust, transparency, accountability, fairness, explainability, and safety, supported by 26 actionable recommendations across six pillars including governance, risk management, data infrastructure, and consumer protection. Among the key proposals: regulated entities would be required to implement board-approved AI policies, include AI-related disclosures in annual reports, conduct bias audits, and maintain human override capability over AI-driven decisions. The FREE-AI framework is currently advisory, and its practical impact will depend on whether the RBI codifies these recommendations into binding Master Directions or circulars.

SEBI released a consultation paper in June 2025 proposing a principles-based framework built on six pillars: model governance, data privacy, fairness, transparency, accountability, and cybersecurity. It would require market participants to designate senior management with technical expertise to oversee AI deployments, conduct periodic audits and report accuracy results to SEBI, and maintain human-in-the-loop oversight for high-impact decisions like robo-advisory, portfolio rebalancing, and automated order routing. The consultation period closed in July 2025, and final guidelines are expected in due course.

The Gaps and Challenges

It would be misleading to present India's current framework as complete or fully thought through, because there are real gaps that anyone operating in this space needs to be aware of.

AI detection technology is not reliable enough

The IT Amendment Rules assume that platforms can identify synthetically generated content with reasonable accuracy. The reality is that current detection tools produce a significant number of false positives. Legitimate satire, commentary, and research content risk being incorrectly flagged and removed. For platforms under pressure to meet 3-hour takedown deadlines, the incentive is to over-remove rather than risk losing safe harbour. This creates a real tension with free speech principles that has not been adequately addressed in the rules.

Compliance costs fall unevenly

The operational requirements of the 2026 rules, particularly the takedown timelines, assume a level of infrastructure that large platforms can afford but smaller ones cannot. A bootstrapped Indian startup running a user-generated content platform does not have Google's moderation infrastructure or Meta's automated detection capabilities. There is a legitimate concern that these rules, while aimed at the biggest platforms, will disproportionately burden the smaller Indian companies that arguably need the most room to grow.

The question of redrafting the IT Act

The IT Act, 2000 was written for a very different technological era, and every subsequent amendment, including the 2026 rules, is an attempt to make a 25-year-old statute work for problems its drafters could not have anticipated. There is a growing view among lawyers and academics that at some point India will need to consider either a comprehensive redraft of the IT Act or a new digital-age legislation altogether. In March 2023, MeitY formally announced plans for a Digital India Act (DIA) to replace the IT Act, conducted multiple rounds of stakeholder consultations, and released a detailed presentation outlining the proposed law's scope covering AI, cybercrime, platform accountability, and online safety. That process was paused ahead of the 2024 general elections and has not been revived since. The government's current position is that existing laws are broadly sufficient and that the preference is to avoid introducing new legislation unless it becomes unavoidable. There is something to be said for that approach — India already has a dense regulatory landscape, and there is a legitimate argument that layering on more legislation in a fast-evolving technology space risks doing more harm than good. At the same time, the current framework of amendments and rules built on top of a 25-year-old statute does have its limits, and whether those limits will require a more fundamental legislative exercise is a conversation that is likely to continue in the years ahead.

So where does this leave us?

India's approach to AI regulation is fragmented and still evolving, but the obligations it creates are real and enforceable. Building or deploying AI in India now requires a multi-statute compliance exercise spanning the IT Amendment Rules for content obligations, the DPDPA for data handling, the BNS for criminal exposure, consumer protection law for user-facing claims, and whatever sector-specific guidance applies to your industry. These regimes operate concurrently, and the regulatory trajectory is clearly towards more oversight rather than less. Companies that build compliance into their products and processes now will be better positioned than those that wait for enforcement actions to force their hand.

I will keep this article updated as the law develops. If you are working through any of these issues and want to discuss, feel free to reach out.


For the full text of the IT Act, DPDP Act, Copyright Act, and related regulations, see the Legal Resources page.
