Govt tightens AI rules: India has taken a major step to curb the growing threat of deepfakes and misleading AI-generated content online, with the Centre notifying stricter rules for social media platforms. From February 20, users scrolling through Instagram, YouTube, Facebook, X or other digital platforms may begin to see clearer labels on content that has been created or altered using artificial intelligence.
The government has tightened the rules for online platforms, asking them to clearly flag AI-generated and fake content and move faster when such material is harmful or illegal.
The push comes as deepfakes and AI-made videos and audio clips are being misused more often - from scams and misinformation to impersonation and fake explicit content.
What changes now is straightforward: if a video, image or audio is created or altered using AI and made to look real, it must carry a clear label. The aim is to make sure people can instantly tell what’s real and what isn’t, instead of being misled.
The rules also mention that platforms must embed permanent technical markers, such as metadata or identifiers, so that such content cannot quietly be passed off as genuine later.
In other words, once something is labelled as synthetic, it cannot be “cleaned up” or reposted without the tag.
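The rules describe these "permanent technical markers" only in broad terms - metadata or identifiers - without prescribing how platforms should build them. As a rough, purely illustrative sketch (the key name and signing scheme here are assumptions, not anything the rules specify), a platform could make a synthetic-content label tamper-evident by cryptographically binding it to the exact bytes of the file, so that stripping the tag or editing the content breaks the marker:

```python
import hashlib
import hmac

# Hypothetical platform signing key - in practice this would be a managed secret.
PLATFORM_KEY = b"example-secret-key"

def make_synthetic_tag(content: bytes) -> str:
    """Return a marker binding the label 'synthetic' to this exact content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PLATFORM_KEY, b"synthetic:" + digest, hashlib.sha256).hexdigest()

def verify_synthetic_tag(content: bytes, tag: str) -> bool:
    """True only if the tag matches this content - any alteration invalidates it."""
    return hmac.compare_digest(make_synthetic_tag(content), tag)

video_bytes = b"...ai-generated video data..."
tag = make_synthetic_tag(video_bytes)
verify_synthetic_tag(video_bytes, tag)            # the original file still carries its label
verify_synthetic_tag(video_bytes + b"edit", tag)  # a "cleaned up" re-upload fails the check
```

Real-world provenance systems embed such markers inside the media file itself (for example, in image or video metadata), but the underlying idea is the same: the label travels with the content and cannot be quietly detached.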
In another significant shift, the government has shortened the window for action against certain kinds of unlawful content.
Social media companies will now be expected to remove or disable access to flagged illegal posts within three hours in specific cases. This is aimed at preventing harmful deepfakes or obscene synthetic material from spreading rapidly before authorities or platforms can respond.
Officials have been increasingly concerned about how quickly manipulated content goes viral, especially during sensitive events such as elections, communal incidents or financial scams.
The government has also made it clear that platforms cannot rely only on user reporting.
Intermediaries will now have to deploy automated tools and filters to detect illegal AI-generated material - particularly content involving child sexual abuse, non-consensual intimate imagery, obscene posts or fraudulent impersonation. The requirement reflects how fast AI misuse is spreading, with fake and harmful content travelling faster than traditional moderation can handle.
Impersonation is now a big focus. Deepfakes aren’t just about false information anymore - they’re being used to copy real people’s faces, voices and identities to fool others. Under the new rules, platforms will have to act quickly on such content and can suspend accounts that keep breaking the rules. Victims who complain are also likely to see tougher action.
The rules don’t stop with platforms. Users are on the hook too. Social media companies will now have to regularly remind users - at least once every three months - that using AI to create illegal or harmful content can lead to penalties or even criminal action.
The government has linked these violations to serious laws, including provisions related to cybercrime, child protection, harassment and fraud.
For significant social media intermediaries - large platforms with wide reach - the rules are even stricter.
Users may be required to declare whether the content they upload is AI-generated. Platforms must also take “reasonable technical steps” to verify such declarations.
If synthetic content is confirmed, it must be labelled before it is shown publicly.
This move comes at a time when fake AI content is becoming harder to spot and easier to spread. Around the world, governments are stepping in as deepfakes fuel political controversies, online scams and high-profile impersonations. India has seen its share too, with repeated warnings about doctored videos and audio being used to mislead people or ruin reputations.
By making labels mandatory and pushing platforms to act faster, the government is trying to make online content more trustworthy again - so users can tell what’s real and what isn’t before the damage is done.