'Liar's dividend': The more we learn about deepfakes, the more dangerous they become
- Deepfakes are on the rise, and experts say the public needs to know the threat they pose.
- But as people get used to them, it’ll be easier for bad actors to dismiss the truth as AI forgery.
- Experts call that paradox the “liar’s dividend.” Here’s how it works and why it’s so dangerous.
In April 2018, BuzzFeed released a shockingly realistic deepfake video of Barack Obama in which the former president’s digital lookalike appeared to call his successor, Donald Trump, a “dips–t.”
At the time, as visually convincing as the AI creation was, the video’s shock value actually made it easier for viewers to identify it as a fake. So did BuzzFeed’s reveal later in the video that Obama’s avatar was voiced by comedian and Obama impersonator Jordan Peele.
BuzzFeed’s title for the clip — “You Won’t Believe What Obama Says In This Video! 😉” — also hinted at why even the most convincing deepfakes so quickly raise red flags. Because deepfakes are an extremely new invention and still a relatively rare sighting for many people, these digital doppelgängers stick out from the surrounding media landscape, forcing us to do a double-take.
But that won’t be true forever, because deepfakes and other “synthetic” media are becoming increasingly common in our feeds and For You Pages.
Hao Li, a deepfake creator and the CEO and co-founder of Pinscreen, a startup that uses AI to create digital avatars, told Insider that the number of deepfakes online is doubling “pretty much every six months,” and that most of them are currently pornography.
As they spread to the rest of the internet, it’s going to get exponentially harder to separate fact from fiction, according to Li and other experts.
“My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts” as synthetic media, Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley’s School of Information, told Insider.
That paradox is known as the “liar’s dividend,” a term coined by law professors Danielle Citron and Robert Chesney.
Many of the harms that deepfakes can cause — such as deepfake porn, cyberbullying, corporate espionage, and political misinformation — stem from bad actors using deepfakes to “convince people that fictional things really occurred,” Citron and Chesney wrote in a 2018 research paper.
But, they added, “some of the most dangerous lies” could come from bad actors trying to “escape accountability for their actions by denouncing authentic video and audio as deep fakes.”
George Floyd deepfake conspiracy
One such attempt to exploit the liar’s dividend, though ultimately unsuccessful, happened last year after the video of George Floyd’s death went viral.
“That event could not have been dismissed as being unreal or not having happened, or so you would think,” Nina Schick, an expert on deepfakes and former advisor to Joe Biden, told Insider.
Yet only two weeks later, Dr. Winnie Hartstrong, a Republican congressional candidate who hoped to represent Missouri’s 1st District, posted a 23-page “report” pushing a conspiracy theory that Floyd had died years earlier, and that someone had used deepfake technology to superimpose his face onto the body of an ex-NBA player, creating a video meant to stir up racial tensions.
“Even I was surprised at how quickly this happened,” Schick said, adding, “This wasn’t somebody on, like, 4chan or Reddit, or some troll. This is a real person who is standing for public office.”
“In 2020, that didn’t gain that much traction. Only people like me and other deepfake researchers really saw that and were like, ‘wow,’ and kind of marked that as an interesting case study,” Schick said.
But fast-forward a few years, to when the public is more aware of deepfakes and the “corrosion of the information ecosystem” has polarized politics even further, Schick said, “and you can see how very quickly even events like George Floyd’s death no longer are true unless you believe them to be true.”
Locking down deepfakes is impossible — inoculation is the next best bet
Citron and Chesney warned in their paper that the “liar’s dividend” — the payoff for bad actors who leverage the existence of deepfakes as cover for their bad behavior — will only grow as the public gets used to seeing deepfakes.
But banning deepfakes entirely could make the problem worse, according to Schick, who pointed to China, the only country with a national rule outlawing deepfakes.
“Let’s say some very problematic footage were to emerge from Xinjiang province, for instance, showing Uyghurs in the internment camps,” she said. “Now the central authority in China has the power to say, ‘well, this is a deepfake, and this is illegal.'”
Combine that with Beijing’s control over the country’s internet, Schick said, “and you can see why this power to say what’s real and what’s not can be this very effective tool of coercion. You shape the reality.”
With an outright ban out of the question, the experts who spoke to Insider said a variety of technological, legal, regulatory, and educational approaches are needed.
“Ultimately, it’s also a little bit up to us as consumers to be inoculated against these kinds of techniques,” Li said, adding that people should approach social media with the same skepticism they would a tabloid, especially when content hasn’t been confirmed by multiple reliable news outlets or other official sources.
Schick agreed, saying “there has to be kind of some society-wide resilience building” — not only around bad actors’ ability to use real deepfakes to spread fake news, but also around their ability to dismiss real news as the product of nonexistent deepfakes.