
Synthetically Generated Information Under Indian Law: What the IT Amendment Rules 2026 Actually Require

Gagan Sharma · 16 min read

India now has its first statutory definition of AI-generated content. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified on 10 February 2026 via G.S.R. 120(E), introduce the term Synthetically Generated Information into Indian law and build an entire compliance architecture around it. For technology platforms, AI tool providers, and digital businesses operating in India, this is not a future problem. It is a current one.

Having worked through the compliance implications of these rules from the inside, mapping them against product features, engineering timelines, and operational realities, I can say that the gap between what the rules say on paper and what compliance looks like in practice is significant. This article is an attempt to bridge that gap. For a broader picture of where AI regulation in India stands, across advisories, pending legislation, and sector-specific rules, I have written a separate overview that provides that context. This piece focuses specifically on the IT Amendment Rules and what they require of technology businesses right now.

What "Synthetically Generated Information" Actually Captures

Rule 2(1)(wa), inserted by the 2026 amendment, defines Synthetically Generated Information as audio, visual, or audio-visual information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that such information appears to be real, authentic, or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or real-world event.

Read that definition carefully. Two things stand out.

First, the definition is limited to audio, visual, and audio-visual content. A chatbot generating a text response, a coding assistant producing source code, or an AI tool drafting an email would not fall within scope on a plain reading. Nor would a purely synthetic music track without an accompanying visual element. What falls squarely within scope is content like an AI-generated video of a public figure appearing to make a statement they never made, or a synthetically altered photograph placed in a news context. The rules target content that can deceive by appearing to depict real people or real events, which is a narrower frame than the broad "AI-generated content" label that much of the commentary has assumed.

Second, the threshold is not about how different the output is from its inputs but about how the output appears to a viewer. Content falls within scope if it appears real, authentic, or true and could be perceived as indistinguishable from a natural person or real-world event. This is a perception-based test rather than a process-based one. To illustrate: an AI-generated image of a clearly fantastical scene, say a dragon flying over Mumbai, would likely fall outside scope because no reasonable viewer would perceive it as a real-world event. The same image generation tool producing a realistic photograph of a political leader at an event that never occurred would be squarely within it.

The proviso to Rule 2(1)(wa) includes three express carve-outs. The first covers routine or good-faith editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression that does not materially alter the substance, context, or meaning of the underlying content. In practice, this means that applying an Instagram-style filter, using a noise reduction tool on a podcast recording, or auto-enhancing a photograph's brightness would not trigger SGI obligations. The second covers the routine creation of documents, presentations, PDFs, educational or training materials, and research outputs, provided these do not result in the creation of any false document or false electronic record. A corporate presentation assembled using an AI design tool, or a training module generated for internal use, would fall here. The third covers the use of computer resources solely for improving accessibility, clarity, quality, translation, description, searchability, or discoverability, which would include AI-powered captioning, audio descriptions for the visually impaired, or machine translation of existing content.

These carve-outs provide meaningful safe ground, but the boundary between what is carved out and what falls within scope remains fact-specific. If you are operating or building a platform that enables content creation or modification, the prudent assumption is that the definition will be interpreted broadly when the content in question is capable of being mistaken for real.

Who These Rules Apply To, and Why the Distinction Matters

The rules create obligations at three levels, and confusing them is a mistake I see frequently.

Intermediaries that offer SGI creation tools, meaning platforms whose computer resources enable, permit, or facilitate the creation, generation, modification, or alteration of information as synthetically generated information, carry specific due diligence obligations under Rule 3(3). These include labelling, metadata embedding, and preventing the creation of unlawful SGI. In concrete terms, this covers platforms offering AI image generators, video synthesis tools, face-swap features, AI voice cloning services, or any product that lets a user generate realistic audio-visual content. If a social media platform integrates an AI avatar creation feature, or a design tool adds an AI image generation capability, Rule 3(3) applies to that feature.

All intermediaries, meaning every platform that falls within Section 2(1)(w) of the IT Act, are subject to the general due diligence requirements, including the three-hour takedown obligation under Rule 3(1)(d) for unlawful content upon receiving actual knowledge. A community forum with 5,000 users, a B2B collaboration tool that allows file sharing, or a regional news aggregator with comment sections all fall within this category.

Significant Social Media Intermediaries, meaning platforms with 50 lakh or more registered users in India, carry additional obligations under Rule 4(1A), including requiring user declarations on uploaded content, deploying automated verification measures, and ensuring SGI is labelled before publication.

The operational distinction matters enormously. A small platform with 20,000 users that does not offer AI creation tools has a relatively lighter compliance burden, though it must still comply with the general takedown obligations. An intermediary offering AI generation features must implement labelling and metadata. An SSMI needs all of the above plus user declaration workflows, automated verification systems, and a team capable of responding to takedown orders within three hours. The compliance cost differential is orders of magnitude.
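To make the tiering concrete, here is a minimal sketch that maps a platform's two relevant characteristics, registered user count in India and whether it offers SGI creation features, to the obligation sets described above. The names and structure are hypothetical and this is not a legal test; only the 50 lakh threshold comes from the rules.

```python
from dataclasses import dataclass

# SSMI threshold: 50 lakh (5,000,000) registered users in India, per the rules.
SSMI_THRESHOLD = 5_000_000

@dataclass
class Platform:
    registered_users_india: int
    offers_sgi_creation_tools: bool  # e.g. AI image, video, or voice generation

def applicable_obligations(p: Platform) -> list[str]:
    """Map a platform to the obligation tiers described above.

    Hypothetical helper for illustration only; not a legal test.
    """
    obligations = ["Rule 3(1) general due diligence, incl. the 3-hour takedown"]
    if p.offers_sgi_creation_tools:
        obligations.append("Rule 3(3): labelling, metadata, unlawful-SGI prevention")
    if p.registered_users_india >= SSMI_THRESHOLD:
        obligations.append("Rule 4(1A): declarations, automated verification, pre-publication labels")
    return obligations

# A 20,000-user product with an AI generation feature still picks up Rule 3(3):
print(applicable_obligations(Platform(20_000, offers_sgi_creation_tools=True)))
```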

I have seen mid-stage startups assume these rules only target large social media companies, which is incorrect. If your product offers AI content creation features, the Rule 3(3) obligations apply regardless of your scale.

The Four Core Obligations

1. Labelling

Rule 3(3)(a)(ii) requires that intermediaries offering SGI creation tools must ensure that every piece of synthetically generated content that is not otherwise prohibited under sub-clause (i) is prominently labelled in a manner that ensures prominent visibility in the visual display, is easily noticeable and adequately perceivable, or, in the case of audio content, through a prominently prefixed audio disclosure. For SSMIs, Rule 4(1A)(c) separately requires that where a user declaration or technical verification confirms content is synthetically generated, the SSMI must ensure it is clearly and prominently displayed with an appropriate label or notice.

What the rules do not specify is the exact form of this label. There is no prescribed text, mandatory icon, or required placement, which appears to be deliberate since MeitY wants platforms to implement labelling in a manner appropriate to their medium. A video platform might overlay a persistent on-screen badge; an image generation tool might watermark the output; an audio tool might prepend a spoken disclosure. The flip side of this flexibility is that there is no safe harbour of "we used the prescribed label." You will need to make a judgement call on what prominent visibility and adequate perceivability mean for your product, and defend that judgement if challenged.

Rule 3(3)(b) adds that the intermediary shall not enable the modification, suppression, or removal of any label displayed in accordance with these provisions. The label, once applied, must be permanent.
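As one illustration of what an image-output label might look like in code, here is a minimal sketch using Pillow to bake a visible notice into the pixels at generation time. The label wording, size, and placement are assumptions, since the rules prescribe none of them; the point of rendering the text into the image itself, rather than attaching removable metadata, is that a simple re-save cannot strip it, which speaks to the non-removal requirement in Rule 3(3)(b).

```python
from PIL import Image, ImageDraw

def apply_sgi_label(image: Image.Image, text: str = "AI-generated content") -> Image.Image:
    """Bake a visible SGI label into the bottom edge of an image.

    Wording, size, and placement are design choices the rules leave
    to the platform; this is one illustrative approach.
    """
    labelled = image.convert("RGB")
    draw = ImageDraw.Draw(labelled)
    # Measure the text so the backing strip fits it with some padding.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    pad = 8
    strip_height = (bottom - top) + 2 * pad
    # A solid strip keeps the label readable on any background, and the
    # text lives in the pixels, so a re-save cannot remove it.
    draw.rectangle(
        [(0, labelled.height - strip_height), (labelled.width, labelled.height)],
        fill=(0, 0, 0),
    )
    draw.text((pad, labelled.height - strip_height + pad), text, fill=(255, 255, 255))
    return labelled

apply_sgi_label(Image.open("generated.png")).save("generated_labelled.png")
```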

2. Metadata Embedding

Rule 3(3)(a)(ii) also requires that SGI content shall be embedded with permanent metadata or other appropriate technical provenance mechanisms, to the extent technically feasible, including a unique identifier to identify the computer resource of the intermediary used to create, generate, modify, or alter such information. This obligation falls on intermediaries that offer SGI creation tools, not only on SSMIs.

This is technically non-trivial. For a platform that generates AI images, this might mean embedding EXIF or XMP metadata at the point of creation. For a video synthesis tool, it could involve container-level metadata in the output file. The work sits at the content processing layer, not the display layer, and for platforms handling millions of uploads daily it represents a significant engineering undertaking. The qualifying phrase "to the extent technically feasible" provides some room, but it is not a blanket exemption. The rules do not prescribe a specific technical standard for this metadata, leaving the choice of implementation to the platform.
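To make the point-of-creation embedding concrete, here is a minimal sketch using PNG text chunks via Pillow. The key names and identifier scheme are assumptions, since the rules prescribe no technical standard; a production system might instead adopt an established provenance framework such as C2PA.

```python
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path: str, dst_path: str, tool_id: str) -> str:
    """Write provenance fields into PNG text chunks at creation time.

    The "sgi:*" key names and identifier format are illustrative;
    the rules leave the choice of technical standard to the platform.
    """
    image = Image.open(src_path)
    content_id = str(uuid.uuid4())
    meta = PngInfo()
    meta.add_text("sgi:synthetic", "true")
    meta.add_text("sgi:tool", tool_id)           # identifies the generating computer resource
    meta.add_text("sgi:content_id", content_id)  # unique identifier per output
    image.save(dst_path, pnginfo=meta)
    return content_id

embed_provenance("output.png", "output_tagged.png", tool_id="example-image-gen/v2")
```

Plain file metadata of this kind survives neither re-encoding nor a screenshot, which is one reason the rules pair it with visible labelling; cryptographically bound provenance manifests are more robust but a heavier engineering lift.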

3. User Declaration

SSMIs must, prior to the display, uploading, or publication of any information on their platform, require users to declare whether such information is synthetically generated. Rule 4(1A)(a) frames this as a mandatory pre-publication requirement for SSMIs specifically.

The practical question that comes up in every compliance discussion: what happens when users lie? The rules address this through the verification requirement in Rule 4(1A)(b), which requires SSMIs to deploy appropriate technical measures, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations. A false declaration does not automatically strip the intermediary of safe harbour protection, but the Explanation to Rule 4(1A) clarifies that the SSMI's responsibility extends to taking reasonable and proportionate technical measures to verify the correctness of user declarations and to ensure that no synthetically generated information is published without such declaration or label.

4. Automated Verification

Rule 4(1A)(b) requires SSMIs to deploy appropriate technical measures, including automated tools or other suitable mechanisms, to verify the accuracy of user declarations, having regard to the nature, format, and source of the information.

Here is the tension the rules do not resolve: current detection technology for synthetically generated audio and video content is improving but remains imperfect. The rules require deployment of automated verification but provide no accuracy threshold and no safe harbour for good-faith detection failures. The proviso to Rule 4(1A) states that where an SSMI knowingly permitted, promoted, or failed to act upon SGI in contravention of the rules, it shall be deemed to have failed to exercise due diligence. The word "knowingly" introduces a mens rea element, but the practical line between knowledge and negligent failure to detect will inevitably be tested.
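A sketch of how a declaration-plus-detection pipeline might be wired appears below. It assumes a hypothetical synthetic-media classifier that returns a confidence score in [0, 1]; no real detection library is implied, and the threshold is illustrative, since the rules set no accuracy bar.

```python
from enum import Enum

class Verdict(Enum):
    TREAT_AS_SGI = "label and publish as SGI"
    PUBLISH = "publish without SGI label"
    HUMAN_REVIEW = "hold for manual review"

def verify_declaration(user_declared_sgi: bool, detector_score: float) -> Verdict:
    """Combine a user's declaration with an automated detector score.

    `detector_score` is the output of a hypothetical synthetic-media
    classifier in [0, 1]; the 0.5 threshold is illustrative. Rule
    4(1A)(b) requires "appropriate technical measures" but sets no
    accuracy threshold, so the risk tolerance is a platform choice.
    """
    if user_declared_sgi:
        return Verdict.TREAT_AS_SGI   # declaration alone is enough to label
    if detector_score >= 0.5:
        return Verdict.HUMAN_REVIEW   # possible false declaration; escalate
    return Verdict.PUBLISH

print(verify_declaration(user_declared_sgi=False, detector_score=0.91))
```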

The Three-Hour Takedown Regime

This is the provision that has generated the most anxiety among technology companies, and rightly so.

Rule 3(1)(d), as amended by G.S.R. 120(E), requires all intermediaries to remove or disable access to unlawful information within three hours of receiving actual knowledge through a court order or a reasoned written intimation from an authorised government officer. This is not limited to SGI but applies to all categories of unlawful content specified in the clause, and the amendment reduced the previous 36-hour window to three hours. Separately, Rule 3(2)(b), also amended by the same notification, reduces the takedown window for intimate image content and impersonation (including artificially morphed images) from 24 hours to two hours.

Think about what this demands operationally. A 24/7 staffed compliance team with the authority to take removal decisions. Automated workflows to route government orders to the right team within minutes. Pre-built technical capability to remove specific content items across all surfaces, including web, mobile app, and API, within the window. Testing and drill protocols to ensure the three-hour SLA is actually achievable.

For platforms that have built their compliance infrastructure around 24-hour or 72-hour response windows, this is a fundamental re-architecture of their takedown pipeline.

The three-hour clock starts from receipt of actual knowledge. The rules specify that actual knowledge arises from a court order or a written intimation from an authorised government officer not below the rank of Joint Secretary or equivalent. If your compliance officer's email is unmonitored at 2 AM on a Sunday, that is your problem, not the government's.
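Because the clock runs from receipt of actual knowledge, the first engineering step in most takedown pipelines is to timestamp intake and compute the deadline and escalation points immediately. A minimal sketch follows; the 50% escalation point is an assumed internal policy, not anything the rules prescribe.

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOWS = {
    "rule_3_1_d": timedelta(hours=3),  # unlawful content: court order / authorised officer
    "rule_3_2_b": timedelta(hours=2),  # intimate imagery / impersonation
}

def takedown_deadline(received_at: datetime, order_type: str) -> dict:
    """Compute the removal deadline and an illustrative escalation point.

    `received_at` should be the intake timestamp of the order, i.e. when
    actual knowledge arises, not when a human first reads the email.
    """
    window = TAKEDOWN_WINDOWS[order_type]
    return {
        "deadline": received_at + window,
        # Escalate at 50% of the window if removal is unconfirmed (an
        # assumed internal policy, not a requirement of the rules).
        "escalate_at": received_at + window / 2,
    }

print(takedown_deadline(datetime.now(timezone.utc), "rule_3_1_d"))
```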

Safe Harbour Implications Under Section 79

Section 79 of the IT Act provides intermediaries with protection from liability for third-party content, provided they comply with due diligence requirements. The Amendment Rules effectively make SGI compliance part of that due diligence framework.

This is the real enforcement mechanism. The rules themselves carry limited direct penalties, but non-compliance means potential loss of safe harbour under Section 79, which would make a platform directly liable for every piece of unlabelled SGI content on its servers. For any platform of meaningful scale, that exposure is existential.

The insertion of Rule 3(3) and Rule 4(1A) into the existing due diligence framework makes this explicit: the SGI labelling, metadata, and declaration mechanisms are part of the due diligence that intermediaries must observe to retain safe harbour. The connection between the new SGI obligations and the existing safe harbour framework is not implied; it is structural. Rule 3(1)(ca) and Rule 3(1)(cb), also inserted by the 2026 amendment, further provide that intermediaries must inform users of the consequences of contravening the SGI provisions and must take expeditious action upon becoming aware of violations.

The Gaps That Matter

Three significant gaps in these rules will create problems.

The gap at the open-source and on-device layer. Rule 3(3) does apply to AI tool providers that operate as cloud-based services, since a platform that receives a user's prompt, processes it on its servers, and returns a generated image or video is acting as an intermediary under Section 2(1)(w) of the IT Act and is offering a computer resource that enables SGI creation. The labelling, metadata, and content moderation obligations under Rule 3(3) fall squarely on such providers. Where the gap opens up is with open-source AI models that users download and run entirely on their own hardware. The developer of an open-source image generation model that is distributed for local use does not receive, store, or transmit any electronic record on behalf of the user, and is arguably not acting as an intermediary at all. Content generated using such locally-run models would arrive on a hosting platform without any embedded metadata or labelling, and the hosting platform would bear the full compliance burden for content it had no role in creating. This gap is structural and not easily addressed within the intermediary-focused architecture of these rules.

Individual creator liability is unclear. A user who creates a deepfake video of a business rival and uploads it to a social media platform with a false declaration that it is not synthetically generated faces no specific penalty under these rules. The platform might lose safe harbour, the government can order takedowns, but the individual creator's liability sits in a grey zone between existing criminal provisions (Section 66D of the IT Act for cheating by personation using computer resources, and the defamation provisions under the BNS) and the new framework. Prosecution would require mapping the specific SGI creation to an existing offence, such as impersonation or defamation. That mapping is not always straightforward, particularly where the SGI does not neatly fit an existing offence category.

Smaller platforms offering AI tools face disproportionate burden. The labelling and metadata obligations under Rule 3(3) apply to any intermediary that offers SGI creation tools, regardless of size. For a three-person startup that has integrated an AI generation feature into its product, building the labelling, metadata, and content moderation infrastructure is a non-trivial engineering cost. The rules make no accommodation for platform size below the SSMI threshold. There is no exemption, no simplified compliance track, no extended timeline for smaller operators offering these capabilities. This is a design choice that will force some smaller platforms to choose between removing their AI features and accepting the compliance burden.

Practical Compliance Steps

Having worked through these requirements in practice, the compliance path is reasonably clear even if the rules themselves leave interpretive questions open.

The first step for any technology business is to audit your product for SGI exposure: identify every feature that allows users to upload, share, or generate content, and map which of those features could involve SGI. This includes the obvious surfaces like post creation and file uploads, but also less obvious ones such as profile picture uploads, in-app messaging with media attachments, user-submitted reviews with photos, or embedded content from third-party integrations. I find that most companies underestimate how many content touchpoints they actually have.
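One way to start that audit is a simple inventory that forces an explicit yes/no classification for every content surface. The sketch below uses placeholder feature names; the value is in the exercise, not the structure.

```python
# Hypothetical feature inventory: forcing an explicit answer per surface
# is the point; the entries below are placeholders.
content_surfaces = [
    {"feature": "post_composer",     "user_media": True,  "ai_generation": False},
    {"feature": "profile_picture",   "user_media": True,  "ai_generation": False},
    {"feature": "ai_avatar_builder", "user_media": False, "ai_generation": True},
    {"feature": "chat_attachments",  "user_media": True,  "ai_generation": False},
]

rule_3_3_surfaces = [s["feature"] for s in content_surfaces if s["ai_generation"]]
upload_surfaces = [s["feature"] for s in content_surfaces if s["user_media"]]
print("Rule 3(3) exposure:", rule_3_3_surfaces)
print("Declaration/label exposure:", upload_surfaces)
```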

For SSMIs, the highest-impact compliance step is implementing a user declaration workflow as required by Rule 4(1A)(a). This can be as simple as a checkbox or toggle requiring users to state whether uploaded content is synthetically generated, logged with a timestamp and user identifier. Even platforms below the SSMI threshold may find it prudent to implement a voluntary declaration mechanism, since it establishes an audit trail that matters if safe harbour is ever challenged.
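A minimal sketch of what that declaration record might look like, logged server-side with a timestamp and user identifier as described above, appears below. The field names and the append-only JSONL log are assumptions.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SgiDeclaration:
    """One row of the declaration audit trail (field names illustrative)."""
    user_id: str
    content_id: str
    declared_synthetic: bool
    declared_at: str  # ISO 8601 timestamp, recorded server-side

def record_declaration(user_id: str, content_id: str, declared_synthetic: bool) -> SgiDeclaration:
    rec = SgiDeclaration(
        user_id=user_id,
        content_id=content_id,
        declared_synthetic=declared_synthetic,
        declared_at=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only log, so the audit trail survives later edits or deletes.
    with open("sgi_declarations.jsonl", "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec

record_declaration("user-123", "upload-456", declared_synthetic=True)
```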

Labelling requires more design thought. You will need to build visible labels for SGI content, test them for visibility across your product surfaces, and, crucially, document your design rationale. If your approach to labelling is ever questioned, contemporaneous documentation of why you made the choices you did is your strongest defence.

For platforms at or approaching SSMI thresholds, the metadata and detection work cannot wait. Metadata integration and automated detection tooling are not weekend projects, and the engineering scope is substantial. Platforms handling personal data in this context should also review their obligations under the DPDP Act, which imposes separate consent and data handling requirements that intersect with the SGI framework in non-obvious ways.

Even platforms well below the SSMI threshold should be watching the broader regulatory direction. Takedown response windows are shrinking, compliance expectations are rising, and building your operations around the assumption that today's requirements are the ceiling is a mistake.

And document everything. Compliance decisions, implementation timelines, technical architecture choices, testing results. If your safe harbour is ever challenged, this contemporaneous record of good-faith compliance efforts is what will matter most.

The IT Amendment Rules 2026 are not perfect. The automated verification mandate assumes a level of detection reliability that current technology does not consistently deliver, and the allocation of liability between platforms and tool providers needs work. These are legitimate criticisms, but they do not change the fact that the rules are in force and that compliance is the price of continuing to operate with safe harbour protection.


For the full text of the IT Act, IT Amendment Rules, and related regulations, see the Legal Resources page.
