Take a step to regulate Deepfakes: Daily Current Affairs

Date: 16/01/2023

Relevance: GS-3: Threats to Indian Cyber Security; Science and Technology- developments and their applications and effects; indigenization of technology and developing new technology.

Key Phrases: Artificial Intelligence (AI), Deepfakes, Artificial Neural Network (ANN), Digital composite, Issues with deepfakes, Section 500 of the IPC, Digital India Bill.

Context:

  • The lack of proper regulations and oversight may lead to misuse of Artificial Intelligence (AI) by individuals, firms and even non-state actors.
  • This legal ambiguity, coupled with a lack of accountability and regulation, can be disastrous and calls for the regulation of Artificial Intelligence (AI) in India, specifically of ‘deepfakes’, an application of AI.

What are Deepfakes?

  • Deepfakes are digital media (video, audio, and images) edited and manipulated using Artificial Intelligence to create hyper-realistic digital falsifications.
  • Cybercriminals and even mischievous elements use AI software to superimpose a digital composite onto an existing video, photo or audio clip, with a high potential to deceive people.
    • Digital composite: the method of assembling multiple media files into a final file that combines elements of each.
  • Deepfake techniques can be used to synthesize faces, replace facial expressions, synthesize voices, and generate news.
  • Although the technology itself has many legitimate uses, its misuse has increased sharply in recent times.
    e.g. Deepfakes can be used to damage reputation, fabricate evidence, defraud the public, and undermine trust in democratic institutions.
  • Uses of Deepfakes
    • Benefits in certain areas, such as accessibility, education, film production, criminal forensics, and artistic expression.
    • Voice-cloning deepfakes can restore people’s voices when they lose them to disease.
    • Deepfake videos can enliven galleries and museums.

How does Deepfake technology work?

  • Deepfake techniques rely on a deep learning method called an autoencoder, a type of Artificial Neural Network (ANN) consisting of an encoder and a decoder.
  • The encoder first compresses the input image into a compact encoded (latent) representation; the decoder then reconstructs from this representation a new image that closely resembles the input.
  • Deepfake face-swapping software typically combines two such autoencoders, one trained on the original face and one on the target face, often sharing a common encoder; swapping the decoders at generation time transfers one face onto the other (a simplified code sketch follows this section).
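The shared-encoder, per-identity-decoder structure described above can be illustrated with a minimal sketch. This is a deliberately simplified, assumption-laden toy example (it assumes PyTorch is available, uses tiny fully connected networks on 64x64 images, and feeds random stand-in tensors instead of a real face dataset); production deepfake tools use far larger convolutional networks trained on thousands of face images.

```python
import torch
import torch.nn as nn

LATENT_DIM = 256          # size of the encoded (latent) representation
IMG_SHAPE = (3, 64, 64)   # channels, height, width
IMG_PIXELS = 3 * 64 * 64

class Encoder(nn.Module):
    """Compresses a face image into a latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG_PIXELS, 1024), nn.ReLU(),
            nn.Linear(1024, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from a latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, IMG_PIXELS), nn.Sigmoid(),
            nn.Unflatten(1, IMG_SHAPE),
        )
    def forward(self, z):
        return self.net(z)

# One shared encoder, two identity-specific decoders.
encoder = Encoder()
decoder_a = Decoder()   # trained to reconstruct faces of person A
decoder_b = Decoder()   # trained to reconstruct faces of person B

# Training (schematically): minimise reconstruction error for each identity.
loss_fn = nn.MSELoss()
faces_a = torch.rand(8, *IMG_SHAPE)   # stand-in batch of person A's faces
faces_b = torch.rand(8, *IMG_SHAPE)   # stand-in batch of person B's faces
loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
     + loss_fn(decoder_b(encoder(faces_b)), faces_b)

# "Face swap" at generation time: encode A's face, decode with B's decoder,
# which renders person B with person A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Because the shared encoder learns pose and expression while each decoder learns the appearance of one person, exchanging decoders is what produces the swapped face; real tools refine this idea with convolutional layers, alignment, and blending steps.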

Issues with deepfakes

  • Spread misinformation and propaganda
    • Since they are compelling, deepfake videos can be used to spread misinformation and propaganda.
    • They seriously compromise the public’s ability to distinguish between fact and fiction.
  • Defamation using illicit content
    • Deepfakes can depict someone in a compromising and embarrassing situation.
    • For instance, deepfake pornographic material of celebrities not only amounts to an invasion of privacy, but also to harassment.
  • Financial fraud
    • Deepfakes have been used for financial fraud, intimidation or blackmailing people in recent times.
    • Scammers recently used AI-powered software to deceive the CEO of a U.K. energy company into thinking he was speaking with the CEO of the German parent company over the phone.
    • As a result, the CEO transferred a large sum of money (€220,000) to what he thought was a supplier.
  • Threats to National Security of nations
    • Deepfakes can be used to influence elections on a large scale.
      • Recently, Taiwan’s cabinet approved amendments to election laws to punish the sharing of deepfake videos or images.
      • These amendments were driven by Taiwan’s growing concern that China is spreading false information to influence public opinion and manipulate election outcomes.
      • Similar manipulation could occur in India’s elections as well.
    • Use in Espionage
      • Deepfakes can also be used to carry out espionage activities.
      • Doctored videos can be used to blackmail government and defense officials into divulging state secrets.
      • For instance, in March 2022, Ukrainian President Volodymyr Zelensky revealed that a video posted on social media in which he appeared to be instructing Ukrainian soldiers to surrender to Russian forces was actually a deepfake.
    • Production of hateful and inflammatory material
      • In India, deepfakes could be used to produce inflammatory material, such as videos purporting to show the armed forces or the police committing ‘crimes’ in areas with conflict.
      • These deepfakes could be used to radicalize populations, recruit terrorists, or incite violence.
  • Misuse by non-state actors
    • Non-state actors, such as insurgent groups and terrorist organizations, can use deepfakes to depict their adversaries making inflammatory speeches or engaging in provocative actions, stirring anti-state sentiment among the public.
  • Leading to the ‘Liar’s Dividend’
    • Another concern is the ‘liar’s dividend’: an undesirable truth can be dismissed as a deepfake or fake news.
    • The mere existence of deepfakes lends credibility to denials; leaders may weaponize this doubt, invoking fake-news and alternative-facts narratives to dismiss genuine media and inconvenient truths.

Why is there a need for legislation?

  • Currently, only a few provisions of the Indian Penal Code (defamation) and the Information Technology Act, 2000 (sexually explicit material) can potentially be invoked to deal with the malicious use of deepfakes.
    • Section 500 of the IPC provides punishment for defamation. Sections 67 and 67A of the Information Technology Act punish the publication of obscene and sexually explicit material in electronic form.
    • The Representation of the People Act, 1951, includes provisions prohibiting the creation or distribution of false or misleading information about candidates or political parties during an election period.
    • The Election Commission of India has set rules that require registered political parties and candidates to get pre-approval for all political advertisements on electronic media, including TV and social media sites, to help ensure their accuracy and fairness.
  • All of the aforementioned are insufficient to adequately address the various issues that have arisen due to AI algorithms, like the potential threats posed by deepfake content.

Possible solutions to counter the menace of deepfakes

  • Media literacy efforts must be enhanced to cultivate a discerning public; media literacy among consumers is the most effective tool against disinformation and deepfakes.
  • We also need meaningful regulation, developed through collaborative discussion among the technology industry, civil society, and policymakers, with legislative solutions that disincentivize the creation and distribution of malicious deepfakes.
  • Social media platforms are taking cognizance of the deepfake issue, and almost all of them have some policy or acceptable terms of use for deepfakes. We also need easy-to-use, accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources (a simple illustration of one media-authentication approach follows this list).
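One elementary form of “authenticating media” is content fingerprinting: the publisher of an authentic recording publishes a cryptographic hash of the file, and anyone can recompute the hash on a copy they receive to check that it has not been altered. The sketch below is a hypothetical, minimal illustration using Python’s standard hashlib; real provenance systems rely on digital signatures and signed metadata and are considerably more elaborate.

```python
import hashlib

def media_fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for the authentic video and a received copy (in practice
# these would be read from the actual files, e.g. open(path, "rb").read()).
original_video = b"...raw bytes of the authentic video..."
received_copy = b"...raw bytes of the copy being checked..."

# Publisher side: publish the fingerprint of the authentic file alongside it.
published_hash = media_fingerprint(original_video)

# Consumer side: recompute and compare; any alteration changes the hash.
if media_fingerprint(received_copy) == published_hash:
    print("copy matches the published fingerprint")
else:
    print("copy has been altered or is not the original")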

Way ahead

  • Enact a separate legislation for AI
    • The Union government should introduce separate legislation regulating the nefarious use of deepfakes and the broader subject of AI.
    • Legislation should not hamper innovation in AI, but it should recognize that deepfake technology may be used in the commission of criminal acts and should provide provisions to address the use of deepfakes in these cases.
  • The proposed Digital India Bill can also address this issue.

Conclusion

  • To counter the menace of deepfakes, we must all take responsibility to be critical consumers of media on the Internet, pause and think before we share on social media, and be part of the solution to this ‘infodemic’.

Source: The Hindu

Mains Question:

Q. What is ‘deepfake’ technology? The growing menace of deepfakes necessitates robust legislation to deal with their malicious use. Discuss. (250 words)