Last Updated: February 10, 2026, 17:55 IST
The IT Rules 2026 expand regulation to cover AI-generated content such as deepfakes. Social media platforms must label, remove, or restrict deceptive material within prescribed timelines or face penalties.

News18
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, issued under the IT Act, 2000, amend the 2021 rules and apply to all online intermediaries, including social media platforms, messaging services, content-hosting websites and AI-based platforms.
The amendments, effective from 20 February 2026, significantly expand the regulatory scope by explicitly bringing artificial-intelligence-generated, synthetic and manipulated content—such as deepfakes, voice cloning and algorithmically altered text, images, audio or video—within the definition of regulated online content.
By doing so, the rules remove any ambiguity about whether AI-generated material is covered and treat such content on par with other forms of user-generated information under Indian law.
Who are the new rules applicable to?
Issued under Section 87 of the IT Act, 2000, these rules amend the 2021 IT Rules. They apply to all intermediaries, including social media platforms, messaging apps, video-sharing platforms, AI-driven content platforms, and any service that hosts, publishes, transmits, or enables access to user-generated content.
When will they come into effect?
They come into force on February 20, 2026.
What are the new definitions?
The rules now add an explicit definition of synthetic and AI-generated content. The definition:
- Covers content generated wholly or partly by AI, algorithms, or automated systems
- Includes text, images, audio, video, and mixed formats
- Extends to deepfakes, altered visuals, voice cloning, and simulated identities
This ends ambiguity about whether AI-generated material is regulated—it clearly is.
Which content is regulated?
Content is treated as regulated if it:
- Alters reality in a deceptive way
- Is capable of misleading users about facts, identity, or events
- Is presented as authentic without disclosure
What does it mean for social media companies and platforms?
Platforms must now take active responsibility, not just reactive steps.
a) Reasonable efforts to prevent violations
Intermediaries must:
- Prevent hosting or circulation of unlawful content
- Use automated tools, human review, or other appropriate measures
- Regularly review their systems to reduce misuse
Failure to act despite knowledge is treated as non-compliance.
b) Prohibited content categories
Platforms must act against content that violates:
- Indian laws (criminal, civil, regulatory)
- National security and public order
- Court orders or government directions
- User safety and dignity (harassment, impersonation, deception)
What are the rules on takedown and access restriction?
Response is mandatory when orders are issued by courts or government authorities under valid legal powers. Platforms must:
- Remove content
- Disable access
- Restrict visibility
Failure or delay is treated as a violation of the rules.
What if there is a delay?
The amendments shorten and clarify response timelines, signalling that delays can attract penalties. Partial compliance is not sufficient.
What is the “3-hour window”? When does it apply?
The 3-hour window is an emergency compliance timeline built into the amended IT Rules framework. It applies in exceptional, high-risk situations, not routine complaints.
The window is triggered when an intermediary receives a lawful direction relating to content that poses an immediate and serious risk, such as:
- Threats to national security or public order
- Risk of violence, riots, or mass harm
- Content linked to terrorism, child sexual abuse material, or severe impersonation
- Time-sensitive misinformation likely to cause real-world harm
In such cases, platforms are expected to remove, block, or disable access within 3 hours of receiving the direction.
This window exists because waiting 24 hours can be too late for rapidly spreading digital harm.
The 3-hour window is not optional and not advisory. Failure to act within this period is treated as prima facie non-compliance, even if action is taken later.
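The arithmetic is simple but unforgiving. As a purely illustrative sketch (the rules do not prescribe any particular tooling, and the function and field names below are assumptions), a platform's compliance system might track the emergency deadline like this:

```python
from datetime import datetime, timedelta, timezone

EMERGENCY_WINDOW = timedelta(hours=3)  # 3-hour window for high-risk lawful directions

IST = timezone(timedelta(hours=5, minutes=30))  # Indian Standard Time

def takedown_deadline(direction_received_at: datetime) -> datetime:
    """Latest time by which the content must be removed, blocked, or disabled."""
    return direction_received_at + EMERGENCY_WINDOW

def is_compliant(direction_received_at: datetime, action_completed_at: datetime) -> bool:
    """True only if action was completed inside the 3-hour window."""
    return action_completed_at <= takedown_deadline(direction_received_at)

# Example: a direction received at 10:00 IST must be acted on by 13:00 IST.
received = datetime(2026, 2, 21, 10, 0, tzinfo=IST)
acted = received + timedelta(hours=2, minutes=45)
print(takedown_deadline(received).isoformat())  # 2026-02-21T13:00:00+05:30
print(is_compliant(received, acted))            # True
```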
What is the labelling clause?
The labelling clause relates mainly to AI-generated, synthetic, or manipulated content.
Intermediaries must ensure that users are not misled into believing synthetic or AI-generated content is real, especially when:
- A real person’s identity, voice, image, or likeness is used
- Content could influence public opinion, trust, or behaviour
- Content is presented as factual or authentic
Platforms may comply by:
- Labelling content as “AI-generated”, “synthetic”, or “manipulated”
- Adding contextual warnings
- Reducing visibility or distribution if labelling is not possible
- Removing the content if it is deceptive or harmful
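In engineering terms, compliance with the labelling clause amounts to attaching a disclosure to the content record before it is served, and falling back to reduced distribution when a label cannot be applied. The sketch below is a minimal illustration assuming a hypothetical content model; the rules do not mandate any particular data format or field names:

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    content_id: str
    is_synthetic: bool             # declared by the uploader or flagged by detection tools
    labels: set[str] = field(default_factory=set)
    visibility: str = "normal"     # "normal", "reduced", or "removed"

def apply_labelling_policy(item: ContentItem, can_label: bool = True) -> ContentItem:
    """Label synthetic content; if labelling is not possible, reduce its distribution."""
    if item.is_synthetic:
        if can_label:
            item.labels.add("AI-generated")   # or "synthetic" / "manipulated"
        else:
            item.visibility = "reduced"
    return item

clip = ContentItem(content_id="clip-001", is_synthetic=True)
print(apply_labelling_policy(clip).labels)  # {'AI-generated'}
```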
Who is responsible?
Responsibility is shared:
- Users must not deliberately misrepresent synthetic content as real
- Platforms must take reasonable steps to detect, flag, or label such content once they are aware of it
What happens if platforms don’t follow these clauses?
For missing the 3-hour window:
- Immediate loss of safe harbour protection
- Potential criminal or civil liability
- Strong grounds for court or government enforcement action
For violating the labelling requirement:
- Content treated as misleading or unlawful
- Mandatory takedown or restriction
Repeated failure can count as systemic non-compliance.
What are the time limits?
24 hours – Rapid response obligations: This is the most frequently triggered deadline.
It applies to:
- Content affecting public order, safety, or sovereignty
- Complaints involving illegal, deceptive, or harmful content
- Initial response to serious user grievances
What must be done?
- Acknowledge the issue
- Take interim action (remove, restrict, downrank, or block access)
- Begin formal review
Platforms cannot wait for full internal evaluation before acting.
36 hours – Certain government directions
Where a lawful government order specifies this window (carried forward and reinforced from the 2021 framework), intermediaries must:
- Remove or disable access to content within 36 hours
- Report compliance if required
Failure counts as non-compliance under due-diligence obligations.
72 hours – Information assistance to authorities
When lawfully required, intermediaries must provide information, data assistance, and user or content details (as permitted by law). This applies mainly to investigations and law-enforcement cooperation.
24 hours – Grievance acknowledgement
User complaints filed through the platform’s grievance system must be acknowledged within 24 hours. Silence or automated non-response is treated as a failure of the grievance mechanism.
15 days – Final grievance resolution
Platforms must:
- Decide and communicate a final outcome within 15 days
- Explain reasons for action or inaction
- Take corrective steps if a violation is found
Unresolved or ignored complaints weaken the platform’s compliance record.
“Without delay” – Self-detected violations
If a platform detects illegal or prohibited content through AI tools or internal review, or becomes aware of violations through any source, it must act without delay.
Immediate / ongoing – Repeat offender action: For accounts repeatedly violating rules, platforms must take timely escalatory action. Continued tolerance can be treated as systemic failure.
No fixed hour count is given, but enforcement must be prompt and proportionate.
As specified in order – Court directions: Courts may set custom timelines depending on urgency. These override general timelines. Even shorter compliance windows can be imposed.
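Collected in one place, the timelines form a small lookup table. The sketch below is indicative only; the keys are shorthand descriptions rather than terms defined in the rules, and court orders can impose shorter windows that override these defaults:

```python
from datetime import datetime, timedelta

# Indicative compliance windows under the amended framework (shorthand keys).
COMPLIANCE_WINDOWS = {
    "emergency_direction": timedelta(hours=3),       # high-risk lawful directions
    "rapid_response": timedelta(hours=24),           # public order, safety, sovereignty
    "grievance_acknowledgement": timedelta(hours=24),
    "government_direction": timedelta(hours=36),     # where the order specifies this window
    "information_assistance": timedelta(hours=72),   # lawful information requests
    "grievance_resolution": timedelta(days=15),      # final outcome with reasons
}

def deadline_for(obligation: str, received_at: datetime) -> datetime:
    """Return the latest compliant response time for a request received at `received_at`."""
    return received_at + COMPLIANCE_WINDOWS[obligation]

# Example: a 36-hour direction received on 21 Feb at 12:00 runs until 23 Feb 00:00.
print(deadline_for("government_direction", datetime(2026, 2, 21, 12, 0)))
```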
What is expected of the companies?
Intermediaries are allowed and expected to:
- Deploy automated moderation tools
- Use AI detection systems for harmful or synthetic content
- Combine automation with human oversight
However, tools must be proportionate; over-removal without review can still be challenged.
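One way to read the "automation plus human oversight" expectation is as a two-stage pipeline: automated tools score content, only very high-confidence hits are actioned automatically, and borderline cases go to a human reviewer. The thresholds and routing below are assumptions for illustration, not anything the rules prescribe:

```python
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95   # auto-action only on very high confidence
REVIEW_THRESHOLD = 0.60   # borderline scores go to a human reviewer

@dataclass
class Flag:
    content_id: str
    score: float  # confidence from a hypothetical automated classifier

def route(flag: Flag) -> str:
    """Route a flagged item: automatic removal, human review, or no action."""
    if flag.score >= REMOVE_THRESHOLD:
        return "remove"         # still logged so a human can audit after the fact
    if flag.score >= REVIEW_THRESHOLD:
        return "human_review"   # proportionality: avoid over-removal without review
    return "no_action"

print(route(Flag("post-42", 0.72)))  # human_review
```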
What information is to be given to users?
Platforms must:
- Clearly inform users about prohibited content
- Explain consequences such as content removal, account suspension, and reduced reach or visibility
Users must be able to understand why action was taken.
What is the action against repeat or serious offenders?
Intermediaries may:
- Suspend or terminate accounts
- Restrict posting or sharing features
- Limit visibility of content
This applies especially in cases of repeated violations, serious harm or deception, or coordinated misuse.
Is there a grievance redressal framework?
Platforms must maintain an effective grievance mechanism, act on complaints within prescribed timelines, and escalate unresolved issues appropriately.
Non-response to grievances counts against compliance.
What is safe harbour under Section 79?
It protects intermediaries from liability for user content only if they follow the rules.
Safe harbour is lost if:
- Due diligence is not followed
- Platforms knowingly allow illegal or harmful content
- Orders are ignored or delayed
Once safe harbour is gone, the platform can be directly sued or prosecuted.
Do they align with other laws?
The rules explicitly align with the Bharatiya Nyaya Sanhita and other criminal laws, cybersecurity laws, consumer protection laws and intellectual property laws.