Synthetic Media and Deepfakes - Explained for All
By Vasundara Arunn, Akanksha Singh, Geetha Raju and Prof. B. Ravindran
Recently, Indian celebrities and public figures have started taking decisive action against creators of deepfakes and synthetic media who misuse their image, voice, and videos. From film icons and influencers to figures of authority such as the Reserve Bank of India Governor and the Finance Minister, AI-generated impersonations have crossed into sensitive territory, demanding swift legal and policy responses. For instance, actor Nagarjuna recently won a court order prohibiting the unauthorised use of his persona; Vibhav Mithal, CeRAI’s Associate Research Fellow, was part of the legal team. Along with Amitabh Bachchan’s earlier success in protecting his publicity rights, this reflects how India is shaping a strong framework to defend identity in the digital age.
In this blog, we will explore how deepfakes differ from synthetic media and why that distinction matters. Technically, every deepfake is synthetic media, but not all synthetic media is a deepfake. We’ll unpack this difference through four key lenses:
● Technical: What kinds of data and AI models are used to generate and detect them?
● Legal: How do courts interpret and regulate each in terms of rights and liability?
● Global: How are different countries and social media platforms developing laws, standards, and policies to address deepfakes and synthetic media?
● Human: How do users perceive authenticity, trust, and consent in this new digital ecosystem, and how do they recognize and respond to misuse and harm related to deepfakes?
What is Synthetic Media?
● Any form of media that is created or manipulated using AI and other automated methods is commonly referred to as synthetic media.
● These include AI-generated images, videos, audio, art, and text, as well as deepfakes, lip-syncs, and edits of various facial attributes.
● The advancements in generative AI, including the development of Deep Learning techniques and Generative Adversarial Networks (GANs), have enabled a sharp rise in AI-generated media worldwide, with the market for synthetic media growing year by year.
How is it used?
Synthetic media has been used in a myriad of ways – recreational and otherwise. Used to generate a variety of content from memes to images and videos, they may be created by individuals as well as large companies and corporations (such as film production companies or the marketing wing of various businesses) to create compelling graphics and visuals.
Today, the proliferation of synthetic media in the personal and professional spheres has sparked numerous debates worldwide on the effect it has on spreading misinformation, as well as concerns regarding safety and authenticity, bringing us to one of the most contentious forms of synthetic media – deepfakes.
What are Deepfakes?
● Deepfakes are a form of synthetic media.
● The term combines “deep” (from deep learning) and “fake”, and refers to images, videos, and audio that have been modified or generated by AI.
● Deepfakes typically use neural networks to generate combinations of existing media, though the term may be used to refer to a wider array of media-manipulation techniques.
Why are Deepfakes made?
● Low-risk or zero-risk deepfakes include those created purely for entertainment or as a novel way of conveying a message. Deepfake technology may be used to create humorous or compelling audio, images, videos, and even digital clones. For example, Disney has investigated the use of deepfake technology to assist in visual effects and to recreate younger versions of various actors, as have other studios in films such as The Irishman. There is also an ongoing trend of reimagining songs in the voices of various singers via voice cloning.
● Similarly, deepfakes may be used for marketing purposes by firms and advertisers looking to reduce the time and/or cost of real-time video making.
● Sometimes, deepfakes of the deceased are created to help relatives or friends cope with the loss. While such “DeathTech” is an inventive method of bringing comfort to some, it carries ethical concerns regarding consent and long-term effects on mental health. There has also been discussion and controversy around using similar techniques to resurrect dead actors.
● Deepfakes may also be event-based, i.e., used in reflection of larger world events as with other forms of online media. For example, during the 2024 Indian Elections, the Deepfakes Analysis Unit (DAU) observed that almost a third of the content they received was related to the election.
● Deepfakes may be politically motivated, used to promote certain candidates and defame others; they may be used to spread misinformation for political gain.
● Similarly, they may be used to create panic during times of distress and disaster, such as wars, natural calamities, etc.
● A common misuse of deepfakes is in propagating virtual personal crimes such as cyberbullying, creating non-consensual pornography, sextortion, and other cyber attacks. Children have increasingly become targets of deepfake abuse, prompting 18 U.S. states to pass legislation specifically banning sexual deepfakes involving minors. Disturbingly, women are also disproportionately affected by the deepfake crisis. In South Korea, there has been a surge in non-consensual, explicit deepfakes targeting school- and college-aged women.
● Financial scams are not far behind. Deepfakes enable the easy orchestration of financial scams that may victimise unsuspecting users of financial services.
How are Deepfakes created?
Before answering, it is worth asking a related question: why are so many deepfakes being made today? The short answer is that creating deepfakes does not always require skill or expertise.
That is not to say that all deepfakes are created by amateurs. At the technical level, deepfakes are created using generative techniques such as GANs, autoencoders, and diffusion models, which are trained on existing media to produce realistic-looking output. Developing such machine learning models from scratch requires resources, skill, and time. However, several software programs are readily available today, making the generation of synthetic media, with its benefits and risks alike, easy even for the unskilled.
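To make the autoencoder idea concrete, here is a deliberately tiny sketch: a one-weight encoder and a one-weight decoder trained by gradient descent to reconstruct scalar inputs. Everything here (the random data, the learning rate, the single weights) is a toy assumption for illustration; real face-swap pipelines train deep convolutional encoders and decoders on large face datasets, but the reconstruction objective is the same.

```python
import random

# Toy autoencoder: encode x -> h = w_enc * x, decode h -> x_hat = w_dec * h.
# Training minimises reconstruction error (x_hat - x)^2, the same objective
# that face-swap autoencoders optimise at a vastly larger scale.
random.seed(0)
w_enc, w_dec = 0.5, 0.5    # start far from a perfect reconstruction
lr = 0.01

data = [random.uniform(-1.0, 1.0) for _ in range(200)]  # stand-in "pixels"

for _ in range(500):
    for x in data:
        h = w_enc * x
        x_hat = w_dec * h
        err = x_hat - x
        # Gradients of (x_hat - x)^2 with respect to each weight (chain rule).
        w_dec -= lr * 2 * err * h
        w_enc -= lr * 2 * err * w_dec * x

# After training, decode(encode(x)) ≈ x, i.e. w_enc * w_dec ≈ 1.
print(round(w_enc * w_dec, 2))  # prints a value close to 1.0
```

In a real deepfake pipeline, one shared encoder is trained with two decoders (one per face); swapping faces means encoding face A and decoding with face B’s decoder.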
Common software used to create Synthetic Media:
NOTE: The tools below have been listed purely for informational purposes and must be used responsibly.
Faceswap, Momo Inc’s Zao, InVideo AI, OpenAI’s DALL-E, Midjourney, Google’s Veo, and OpenAI’s Sora are some common AI-based software that can be used to create deepfakes. Plenty of apps and websites these days also offer the option to create AI-generated audiovisual content, including Adobe Firefly, Synthesia, etc.
What are some examples of the risks of deepfakes?
While deepfakes can be used to create relatively innocuous content, their dangers cannot be ignored. A Pi-Labs report observes that while deepfakes were initially used mostly for prank videos and celebrity spoofs, between 2020 and 2023 the technology shifted towards political manipulation and pornography, eventually branching into financial fraud. Listed below are some real-life instances of deepfakes being used for malicious purposes.
Non-consensual Pornography:
Indian Context
Pi Labs noted that India was the 6th most vulnerable country to deepfake adult content and further observed that 98% of online deepfake content was explicit in nature. They also saw a 500% increase in deepfake adult content since 2022.
A 2024 Wired article on Meta’s handling of deepfake-porn cases involving an American and an Indian celebrity highlights the Meta Oversight Board’s role in reviewing the cases and pushing for more stringent platform policies, as well as the difference in how the two similar situations were treated.
International Context
Sensity AI’s October 2020 report found that non-consensual AI-generated explicit images of over 100,000 women had been created and circulated.
A 2024 Humanise AI study on deepfake tools noted that around 61.2% of AI deepfake tools were NSFW (not suitable for work) in nature and that the top 3 categories of NSFW tools experienced significant website traffic (> 7 million visitors).
A March 2024 Twicsy Report found that 86% of social media influencers worldwide were victims of deepfake pornography. 90% of these victims were women.
Political Manipulation:
Indian Context
In November 2023, a video surfaced of LTTE chief Prabhakaran’s daughter, presumed dead for 20 years, appearing as a middle-aged woman speaking about Tamil rights. The content was only later identified as AI-generated.
A 2024 paper by UT Austin examined the use of AI in the Indian elections. Although there wasn’t as much traditional deepfake misinformation as predicted, voice cloning and AI-generated content were prevalent and, in most cases, weren’t regulated or labeled.
During the 2024 Indian Elections, two viral videos showed Bollywood stars Ranveer Singh and Aamir Khan campaigning for the Congress party. Both filed police complaints saying these were deepfakes, made without their consent.
International Context
During the 2024 US Elections, deepfake technology was utilized to make phone calls that mimicked the voice of Joe Biden. These calls attempted to discourage people from voting for him.
In 2022, the Russia-Ukraine war inspired multiple deepfakes - such as those of Russian President Vladimir Putin announcing peace and of Ukrainian President Volodymyr Zelenskyy surrendering.
In 2020-21, fake profiles with GAN-generated photos campaigned against Belgium’s 5G restrictions in support of Chinese companies.
Financial Fraud:
Indian Context
In their second quarter report, the DAU observed that most AI-manipulated media they received attempted to use the identity of famous figures to promote fake gaming apps and financial platforms.
In Jan 2025, financial advisor Akshay Tanna filed a case over the impersonation of his identity. According to the lawsuit, the defendants used deepfake technology to generate a video of him offering dubious financial advice.
In Jan 2024, cricketer Sachin Tendulkar sued a gaming company for using a deepfake of him to endorse the brand.
In July 2024, actor Anil Kapoor took 16 individuals to court over the violation of his personality rights and copyright. The defendants had used his image for monetary gain, and several had created deepfakes of the actor.
International Context
In 2024, deepfakes of Elon Musk promoting fraudulent financial advice were circulated.
In Hong Kong, an MNC lost $25 Million to a deepfake scam in 2024.
In May 2024, Mark Read, CEO of advertising giant WPP, was impersonated via an AI-generated deepfake that used a cloned voice, YouTube footage, and a fake WhatsApp account, in an attempt to trick a colleague in a Microsoft Teams meeting into revealing sensitive details and transferring money.
Other Malicious Information:
Indian Context
Journalist Rajat Sharma, in May 2024, filed a PIL urging the government to take action against deepfakes after an AI-manipulated video of him giving inaccurate medical advice was circulated.
International Context
In 2024, a high-school teacher created deepfake audio of the school principal making racist comments.
How has the world reacted?
Governments across the world have responded to the threat posed by deepfake technology. Some examples include:
● The EU has several policies that can be used to combat deepfakes, including the EU AI Act, one of the first comprehensive pieces of AI legislation worldwide, which classifies deepfakes as “limited risk” AI systems, requires transparency disclosures, and mandates mitigation of AI-generated disinformation. Apart from this, the EU also has the General Data Protection Regulation and various copyright laws.
● South Korea, a country hit particularly hard by the rise of deepfake pornography, introduced reforms in 2024 to combat deepfake incidents in the country.
● The United States is another country impacted heavily by deepfakes, with a 3000% increase in deepfake-related fraud between 2022 and 2023. Several US states have enacted their own legislation on deepfakes and manipulated media.
● China has implemented comprehensive regulations known as the “China Administrative Provisions on Deep Synthesis in Internet-Based Information Services” (Deep Synthesis Provisions). These provisions, effective since Jan 2023, require deep synthesis service providers to ensure: data security, by strengthening data management and personal information protection; transparency, by disclosing management rules, platform conventions, and service agreements; content management, by labelling deepfake content and dispelling false information; and technical security, by conducting security assessments and algorithm reviews.
● In Australia, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 amends the Criminal Code Act 1995 to strengthen offences targeting the creation and non-consensual sharing of sexually explicit material online, including material that has been created or altered using AI technology (commonly referred to as ‘deepfakes’).
● The United Kingdom introduced the Online Safety Act 2023; this Act establishes a regulatory framework to enhance the safety of internet services in the UK. It requires service providers to identify, mitigate, and manage risks of harm, particularly for vulnerable individuals, from illegal and harmful content (especially for children). It grants new powers to OFCOM as the regulator. The Act ensures services are safe by design, prioritises child protection, upholds freedom of expression and privacy, and promotes transparency and accountability.
Various social media platforms also have their own policies towards deepfakes.
● Instagram requires creators sharing certain types of AI-generated content, such as photorealistic media, to label it as such before posting, with the app itself automatically labelling undisclosed AI content. Meta, too, applies similar labels to AI-generated content on its platforms.
● Meta also has community guidelines describing what constitutes permissible content; these guidelines apply across its platforms, including Facebook, Instagram, Messenger, and Threads. These platforms also remove misinformation in certain cases, sometimes partnering with independent organisations to do so. Such policies may be applied to AI-generated content.
● YouTube has a similar approach to AI-generated content, requiring creators to label realistic videos and, in certain cases, automatically labelling the videos themselves. Their privacy policy further allows individuals to request the takedown of AI-generated or altered media of themselves.
● X, meanwhile, has an authenticity policy that addresses fraud and misleading deepfake material. The platform’s community notes feature similarly allows for public fact-checking.
In addition to laws and platform guidelines, technical standards are also emerging to ensure content authenticity.
● The Content Credentials standard proposed by the C2PA provides guidelines for embedding pertinent details in digital media to ensure provenance and enable traceability. The standard specifies the method and the details to be attached as metadata to digital media; these may include the creator, the creation date, and any modifications to the content. The Creator Assertions Working Group (CAWG), part of the Decentralized Identity Foundation (DIF), has proposed the CAWG Metadata standard, which can be used alongside the C2PA technical specification. A related content credentials standard, ISO/CD 22144, is under development at the International Organization for Standardization (ISO).
● The ISO has published one part of the planned three-part standards for building trust around digital media, specifically images. The JPEG Trust Part 1: Core foundation standard provides a framework for embedding metadata into images as trust indicators. These trust indicators can be used to assess the media’s trustworthiness.
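To illustrate the provenance idea behind these standards, the sketch below builds a simplified "manifest" for a media file and verifies it later. This is illustrative only: real C2PA Content Credentials are cryptographically signed structures embedded in the file itself, not a plain JSON record, and all names here (`make_manifest`, `verify`, the sample fields) are hypothetical.

```python
import hashlib
import json

def make_manifest(media_bytes: bytes, creator: str, created: str, edits: list) -> str:
    """Build a simplified provenance manifest for a media file.

    Illustrative only: real Content Credentials are signed and embedded
    in the media container rather than stored as a JSON sidecar.
    """
    manifest = {
        "creator": creator,
        "created": created,
        "edits": edits,  # list of recorded modifications, e.g. "crop"
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, indent=2)

def verify(media_bytes: bytes, manifest_json: str) -> bool:
    """Check that the media still matches the hash recorded in its manifest."""
    manifest = json.loads(manifest_json)
    return hashlib.sha256(media_bytes).hexdigest() == manifest["content_hash"]

image = b"\x89PNG...stand-in image bytes"  # placeholder for real media
m = make_manifest(image, "Alice", "2025-01-01", ["crop", "colour-correct"])
print(verify(image, m))                # True: unmodified media verifies
print(verify(image + b"tampered", m))  # False: any alteration breaks the hash
```

The key design point the standards share is that any post-hoc edit to the media invalidates the recorded hash, so tampering is detectable even without access to the original file.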
What about India?
While India does not have a single, comprehensive regulatory framework for AI, it has taken steps towards regulating AI by developing the National Strategy for AI in 2018 and launching the IndiaAI mission in 2024. It also makes use of existing legislation to regulate the threats posed by deepfakes.
What can we do to seek redressal, should we face any complications because of deepfake technology?
● In Oct 2025, the Government of India released a draft amendment to the IT Rules. The draft explicitly brings “synthetically generated information” under the purview of the IT Rules. The draft amendment includes: a legal definition of synthetic content, mandatory disclosure requirements, and instructing social media intermediaries to take appropriate technical measures to detect, verify, and label synthetically generated media.
● Check the policies of the platforms where the manipulated media is posted or shared, and report it wherever possible. In 2023, the government issued a directive requiring social media platforms to comply with the IT Rules, especially Rule 3(1)(b), which prohibits false, explicit, and defamatory content.
● Go through the report released by the IndiaAI mission outlining the state of AI governance and offering recommendations. The report mentions legal provisions that can be used in the case of deepfake-related misdemeanors. For example, the IT Act and the IPC have provisions that can be extended to deepfake attacks related to nonconsensual pornography, identity theft, defamation, forgery, and so on. Additionally, there are several laws that address intellectual property rights, such as the Copyright Act, the Patents Act, and the Trade Marks Act.
How Can We Detect Deepfakes?
A: Manual Detection:
In many cases, the presence of visual artifacts or clues gives deepfakes away. Such artifacts can be detected on careful observation.
The Deepfake Analysis Unit (DAU) has published real-life case studies identifying the artifacts that give specific AI manipulations away.
If unsure about the validity of a piece of media, taking the time to verify it goes a long way. Scrutinizing the source of the media, looking for trusted alternate sources, and reverse-searching the media in question are useful for engaging with content critically.
When it is difficult to verify manually whether a given piece of media has been tampered with, as with audio tracks, which are tougher to distinguish, one can make use of deepfake-detection tools or services.
B: Software-Based Detection:
● Deep-learning architectures, often employing Convolutional Neural Networks (CNNs), can be used to detect AI manipulation. Some example models include XceptionNet, MesoNet, DSP-FWA, F3Net, InceptionResNetV2, and RawGAT-ST (for audio manipulation).
● Popular software tools used to detect AI manipulation include Sentinel, Sensity, Oz Liveness, Deepware, Contrails.ai, WeVerify Deepfake Detector, and DuckDuckGoose.
● Additionally, Michael Lanham, in his book Generating a New Reality, suggests using OpenCV algorithms to measure factors like head-tilt, eye movement, etc., following which side-by-side comparisons of the real person and the suspected deepfake can be run.
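The head-tilt measurement mentioned above reduces to simple geometry once facial landmarks are available. The sketch below shows only that arithmetic: in a real pipeline (e.g. OpenCV plus a facial-landmark model, as in Lanham's book) the eye coordinates would come from a detector, whereas here they are hard-coded, hypothetical values.

```python
import math

def head_tilt_degrees(left_eye, right_eye):
    """Angle of the line between the eyes relative to horizontal, in degrees.

    left_eye and right_eye are (x, y) pixel coordinates; in practice these
    would be produced by a landmark detector rather than typed in by hand.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical landmark coordinates for the same moment in a genuine video
# and a suspected deepfake; a consistent mismatch across frames is a clue.
real_tilt = head_tilt_degrees((100, 120), (160, 120))  # 0.0 degrees
fake_tilt = head_tilt_degrees((100, 120), (160, 130))  # roughly 9.5 degrees
print(abs(real_tilt - fake_tilt) > 5)  # True: flag the large mismatch
```

Running the same measurement frame by frame on both videos and comparing the resulting angle curves is what makes the side-by-side comparison quantitative rather than impressionistic.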
While software-based methods are useful, they are neither infallible nor perfectly accurate. Detection tools may produce false positives, flagging genuine content as fake, and false negatives, where manipulated media is mislabelled as real. These limitations are compounded by the fact that most models are trained on data from the Global North, making them less effective in regions like the Global South; there are also gaps between research benchmarks and real-world settings, where the ratio of fake to real content varies. To bridge this gap in practice, multiple detection tools are combined with additional verification methods, such as checking the credibility of sources and cross-checking with trusted fact-checkers, before concluding whether content is real or fake. This layered approach is widely practised to navigate the evolving challenges of deepfake detection.
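One way such a layered approach can be wired together is sketched below. The aggregation rules, thresholds, and function name are all hypothetical assumptions for illustration, not any tool's actual logic: detector scores are treated as probabilities of manipulation from independent tools, and source credibility and fact-checker flags act as overriding signals.

```python
def layered_verdict(detector_scores, source_trusted, fact_check_flagged,
                    threshold=0.5):
    """Combine several signals into a conservative verdict.

    Hypothetical aggregation logic: detector_scores are per-tool
    probabilities of manipulation; all thresholds are illustrative.
    """
    avg = sum(detector_scores) / len(detector_scores)
    votes_fake = sum(s > threshold for s in detector_scores)
    majority_fake = votes_fake > len(detector_scores) / 2

    if fact_check_flagged:          # trusted fact-checkers override tools
        return "likely fake"
    if majority_fake and not source_trusted:
        return "likely fake"
    if avg < 0.2 and source_trusted:
        return "likely real"
    return "needs human review"     # conflicting signals: escalate

print(layered_verdict([0.9, 0.8, 0.6], source_trusted=False,
                      fact_check_flagged=False))   # likely fake
print(layered_verdict([0.1, 0.05, 0.2], source_trusted=True,
                      fact_check_flagged=False))   # likely real
```

The design choice worth noting is the explicit "needs human review" outcome: because individual detectors produce both false positives and false negatives, a responsible pipeline escalates disagreement to a human rather than forcing a binary answer.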
Future Research Directions
● The ethical use of personal data remains a concern in the creation of AI-generated content. It is important to investigate techniques and regulations that protect user privacy and copyright when sourcing data, ensuring consent and transparency.
● It is similarly important to look into stringent liabilities for the misuse and abuse of personally identifiable information, established by regulators who ensure the ethical processing of data.
● The development of industry standards, guidelines, regulations, and laws regarding synthetic media must be prioritised. Geographic and cultural factors, such as language and market scenario, must be taken into consideration in framing these. For example, the use of low-resource languages may affect the accuracy and fairness of a system’s output.
● The threat of deepfakes and synthetic media that violate copyright or are otherwise unethical necessitates the development of safe and trustworthy ways of reporting AI incidents that ensure accountability and preserve the anonymity of the reporter.
● Such public reporting of issues related to synthetic media could further be used in a participatory manner to inform and assist policymakers, developers, and regulators in ensuring the safe, responsible, and ethical use of technology.
● In addition to accessible avenues of reporting, it is important to raise public awareness on recognising deepfakes.
● Finally, any future progress in synthetic media must involve constructive collaboration, i.e., concerted efforts by stakeholders at multiple levels – policymakers, tech developers, watchdogs, and users. The employment of people from various backgrounds in the development and deployment of generative AI systems is important.


