By Priyanshi Jain & Ashit Srivastava
A recent video of Prime Minister Narendra Modi doing the garba grabbed eyeballs. Even Modi appreciated how well the video was made. The catch: the video was a deepfake, and it left the PM “worried”. He sought the media’s help in educating people about the capabilities of Artificial Intelligence (AI) and deepfake technology.
Other celebrities targeted by deepfakes include actresses Kajol and Rashmika Mandanna, raising the question of legal safeguards against such deceptive content. In the Mandanna episode, a seemingly innocuous elevator video was manipulated to superimpose the actress’s likeness, highlighting the threats that deepfake technology poses to the reputation and privacy of individuals.
AI deepfake, a portmanteau of “deep learning” and “fake”, represents a sophisticated form of synthetic media where existing images or videos are manipulated to convincingly depict someone else’s likeness. Leveraging powerful techniques from machine learning and AI, deepfakes have become a digital chameleon, seamlessly replacing faces and altering voices to create highly realistic yet entirely fictional content.
This technological marvel, while showcasing the prowess of AI, also poses substantial challenges as it blurs the lines between reality and fabrication, giving rise to concerns about misinformation, privacy invasion and the potential harm it can inflict on individuals and societal trust. As the digital landscape advances, the ethical and legal implications of AI deepfakes remain a subject of intense scrutiny and debate, requiring a delicate balance between technological innovation and safeguards to protect against their misuse.
Defamation law becomes a powerful tool for individuals victimised by deepfake videos. In India, both civil and criminal statutes govern defamation, offering legal recourse to those seeking justice. Section 499 of the Indian Penal Code (IPC) outlines the elements of defamation, providing a basis for individuals to file suits against those responsible for tarnishing their reputation through deceptive synthetic media. Sections 465 (forgery) and 469 (forgery for the purpose of harming reputation) of the IPC also apply.
Privacy laws in India, particularly the right to privacy as enshrined in the Constitution, can be invoked to protect individuals from the unauthorised use of their likeness in deepfake videos. The right to privacy has been recognised as a fundamental right by the Supreme Court of India. Additionally, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules), framed under the Information Technology Act, 2000, prescribe standards for the protection of sensitive personal data, and misuse of one’s image may fall within their ambit. However, the SPDI Rules are no longer relevant, as they have been superseded by the Digital Personal Data Protection Act, 2023.
The creation and dissemination of deepfake videos often involve cybercrimes, prompting the application of cyber security laws. Sections such as 66C (identity theft), 66D (cheating by personation) and 66E (violation of privacy) under the Information Technology Act, 2000, serve as legal avenues for prosecuting individuals engaged in the malicious creation and distribution of synthetic media.
Section 66C deals with the fraudulent or dishonest use of electronic signatures, passwords, or any unique identification feature belonging to another person. In the context of the Mandanna deepfake, if someone manipulated her likeness or identity in a manner that involves the use of electronic signatures or other unique identification features without her consent, they could be charged under Section 66C.
Section 66D addresses offences related to cheating by personation through the use of communication devices or computer resources. If the deepfake video of Mandanna involves a scenario where someone impersonates her, leading to fraudulent activities or the deception of others, Section 66D may apply.
Section 66E focuses on the intentional capture, publication or transmission of images of the private area of any person without their consent, thereby violating their privacy. If Mandanna’s manipulated video involves the capture or transmission of her private areas without her consent, the perpetrator could face charges under Section 66E.
Additionally, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as amended in 2022, provide for curbing misinformation, disinformation and impersonation of another person under Rule 3.
Rule 3(b)(vi) deals with deceiving or misleading the addressee about the origin of a message, or knowingly and intentionally communicating any information which is patently false or misleading in nature but may reasonably be perceived as a fact.
Rule 3(b)(vii) deals with impersonating another person. Read together, Rules 3(b)(vi) and (vii) curb both misinformation and the impersonation of another person, which could, to a large extent, cover deepfake videos. Rule 3 must also be read with Rule 7 of the IT Rules, 2021.
Rule 7 says: Where an intermediary fails to observe these rules, the provisions of sub-section (1) of Section 79 of the Act shall not be applicable to such intermediary and the intermediary shall be liable for punishment under any law for the time being in force including the provisions of the Act and the Indian Penal Code.
Rule 7 obligates digital platforms not to transmit, display or upload any content prohibited under Rule 3. If prohibited content is not removed from the platform within the prescribed time period [Rule 3(2)], liability falls on the platform, as if the content had been posted by the platform itself. Thus, a combined reading of Rules 3 and 7 creates a basis for prohibiting the posting of deepfake videos.
Deepfake videos that involve the unauthorised use of copyrighted material trigger intellectual property laws. The Copyright Act, 1957, guards against infringement on original works, including the misuse of someone’s likeness without permission. Victims and copyright holders can seek legal remedies, including damages and injunctions under the pertinent provisions of the Copyright Act.
However, despite the legal safeguards in place, the identification of deepfake videos remains a significant challenge. The sophisticated techniques used in their creation demand a multi-dimensional approach that combines legal frameworks with technological advancements. As forensic analysis tools and deepfake detection mechanisms continue to evolve, their integration into legal processes becomes imperative for effective identification and prosecution of offenders.
As India grapples with the rising tide of deepfake videos, a comprehensive legal framework is the need of the hour. Existing laws, ranging from defamation to privacy, cyber security and intellectual property, provide a foundation for legal action. However, the effectiveness of these laws hinges on continuous legislative efforts to adapt to technological advancements.
Collaborations between legal experts, technologists and cyber security professionals are pivotal in developing robust mechanisms for the prevention, detection and prosecution of deepfake-related offences. The digital landscape is ever-evolving, and India’s legal system must navigate these uncharted waters with agility and foresight.
—Priyanshi Jain is a student at Dharmashastra National Law University, Jabalpur; Ashit Srivastava is Assistant Professor of Law there
The post Legal Loopholes appeared first on India Legal.