Outlook Magazine: Deepfakes Will Make India's Social Media Problems Much Worse

(Credit: The National)

As India grapples with how to stop the spread of misinformation online, a new generation of digital forgery technologies threatens to upend whatever progress has been made thus far.

Deepfakes are a new suite of technologies that use powerful machine learning to copy and superimpose the voice, face, and speaking style of one person onto another. Early versions produced choppy and unconvincing results, but the technology has come a long way since then. Across the internet, hobbyists have used it to create entertainment, pornography, and art – sometimes with devastating personal consequences.

The media has focused on how Deepfakes can be used to spread propaganda or spark military conflicts, but the larger threat is that the technologies can be weaponized and targeted against individuals and businesses.

Exaggeration, out-of-context quotes, and outright lies existed long before the internet. What Deepfakes change is that it's now far cheaper and easier for anybody to create synthetic evidence for whatever purpose they want. In India's media ecosystem, this is especially dangerous: inflammatory statements, doctored videos, and conspiracy theories already spread rampantly and cause social turmoil. Deepfakes mean that small groups of people without much technical knowledge will be able to build powerful tools of division.

Take, for example, the June 2018 attacks on migrant workers in Rainpada. A group of people circulated grainy images of individuals allegedly committing wrongdoings, which were used to justify mob violence. With Deepfakes, a person seeking to cause trouble can now insert a specific person's face and body movements into a video of a crime. In a similar vein, a hacker can download a CEO's speeches to create a model of their voice, then use it to damage a brand or give fraudulent orders to an executive. Security firm Symantec found that this has already happened at least three times – causing millions of dollars in losses around the world.

With social media and messaging, we've seen the risks of adopting a technology without also upgrading the security infrastructure around it. The Indian government recently implicated WhatsApp in the wave of communal violence that has rocked the country over the last few years. The company has prioritized growth over user safety, and as a result, society as a whole has suffered.

In the case of Deepfakes, India can’t afford to repeat that same mistake. These technologies are now available for anyone to download, and it’s too late to reverse course – instead we need to rebuild our defenses. The stakes couldn’t be higher.

Countering Deepfakes will necessitate the creation of new tools, systems, and behaviors. Rising to the occasion will require a massive effort by government, media, and software companies to help users become more skeptical about what they see and more thoughtful about what they share. This is no easy task, but there are many places where these groups can start today. They can work together to create mechanisms to report and remove fake content, determine the acceptable limits of speech, and establish verified lines of communication for officials to disseminate news.

It is also necessary to give users a way to verify whether a piece of content is real. To do this, manufacturers and developers can harness cryptography to create a shared basis of truth. That involves having our devices digitally sign each photo or video, so we have the ability to prove when and by whom it was taken.
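To make the idea concrete, here is a minimal sketch of capture-time signing and later verification. All names here are hypothetical, and it uses an HMAC with a shared secret purely as a standard-library stand-in for a real digital signature; an actual camera would hold an asymmetric private key in secure hardware so that anyone could verify with the public key alone.

```python
import hashlib
import hmac
import json
import time

# Hypothetical device key: in a real system this would be an asymmetric
# private key stored in the device's secure hardware, not a shared secret.
DEVICE_KEY = b"secret-key-embedded-in-device"
DEVICE_ID = "camera-001"

def sign_capture(content: bytes, device_id: str = DEVICE_ID) -> dict:
    """Produce a signed record binding the content to a device and a time."""
    record = {
        "device_id": device_id,
        "timestamp": int(time.time()),
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the record is untampered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed["content_hash"]:
        return False  # the video bytes were altered after capture
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

video = b"\x00\x01 raw video bytes"
record = sign_capture(video)
print(verify_capture(video, record))           # True: content is authentic
print(verify_capture(video + b"x", record))    # False: content was altered
```

The design point is that the signature covers both the content hash and the metadata, so neither the footage nor the claimed time and device can be changed without detection.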

These measures alone don’t offer a full solution, but rather a starting point. It will require many rounds of incremental improvements to know what’s effective. While it might be tempting to short-circuit the process by declaring the technology illegal, that won’t stop bad actors from causing harm anyway. At the end of the day, technology or regulation alone can’t solve what’s fundamentally a cultural problem.


Outlook Magazine
Tarun Wadhwa