Deepfake Technology Enters the Documentary World

https://www.nytimes.com/2020/07/01/movies/deepfakes-documentary-welcome-to-chechnya.html

When the documentarian David France decided to chronicle the anti-gay and lesbian purges that had unleashed a wave of fear and violence in Chechnya, he needed more than just a camera.

He had seen television news interviews from the largely Muslim Russian republic: the subjects were dimmed in shadows, their voices digitally altered, but he found it hard to connect with them. “They were horrible stories and there’s no doubt that they were real,” France, an Oscar-nominated activist filmmaker, said. “But I knew I wanted to tell a much more in-depth story about what something as hideous as this meant to the people who survived it.”

How would he shield the identities of at-risk gay and lesbian Chechens fleeing the region via a network of safe houses, while also preserving their emotions, their affections and expressions? France’s provocative solution is “Welcome to Chechnya,” the unsettling film that debuted Tuesday on HBO and involves extensive postproduction work: It marshals advanced computer technology to superimpose supple, completely fabricated faces onto 23 hunted individuals.

Blurring out a mob informant? So last century. Say hello to the deepfake. Until now, its use has largely been limited to anonymous online mischief of varying degrees of severity, like inserting Nicolas Cage’s grin into movies he never acted in. But politicians have become targets, too. (The “Get Out” director Jordan Peele tried to raise awareness with an eerie deepfake of Barack Obama.) Even Facebook, not known for its strict policing of content, pledged to ban deepfakes in January. Will a documentary that expressly embraces the technique help matters?

France, a former Newsweek editor turned filmmaker (“How to Survive a Plague,” 2012), bristles at the term, a portmanteau of computer-assisted “deep learning” and “fake.”

“Deepfake changes what people say and do,” France said. “And this changes nothing. It allows my subjects to narrate their own stories. And it restores to them their humanity in a way that would not have been possible under other circumstances.”

Still, why should we trust a documentary that uses deepfakes? France doesn’t see it as opening a new can of worms so much as the same old one. “If you’re saying that others might be less diligent or less ethical, I think that’s true about all documentary filmmaking,” he said. “It’s a matter of trusting the process.”

To watch France’s latest is to be confronted by the moral dimensions of documentary filmmaking — and maybe something deeper, concerning the slippery nature of identity itself. From the start, a disclaimer states that certain subjects have been “digitally disguised” for their safety. Then come the faces themselves, slightly soft, as they were in Martin Scorsese’s de-aged “The Irishman.” Smoking cigarettes, strategizing, looking nervous. We come to care about them. The effect is never quite seamless. Maybe it shouldn’t be.

“It’s an essential part of our total cultural history: the face as window, the core of our identity, the most distinguishing feature of the body,” said Bill Nichols, professor emeritus at San Francisco State University and a pioneer in the field of documentary studies. “When you have a lack of the actual face, in most cases it will induce a sense of disturbance.”

But “Welcome to Chechnya” could represent the introduction of a new kind of “soft mask,” one that hides a subject’s identity while still allowing for a complex emotional attachment. Nichols said he was won over by France’s film. “It’s a huge enterprise but I think it works,” he added. “It allows us to feel that whammy of: This guy was beaten up and tortured and electrocuted and I am seeing him — him being his ‘face’ — bare his soul.”

Where do the new faces actually come from? Ryan Laney, France’s visual effects supervisor and a 30-year veteran whose credits include “Harry Potter and the Chamber of Secrets” (2002), persuaded the director that a deep-learning computer process was the easiest answer, even on a documentary’s tight budget. For “Welcome to Chechnya,” he set up a secret editing suite and remained entirely offline while on the job.

“It was a theory, and then it worked better than we ever expected,” Laney said of the painstaking yearlong visual work, which he likened to brush strokes, facial prosthetics and virtual cheek implants. “There were times when we actually backed off on some areas; the machine did better or sharper, and we were concerned that we weren’t telling the audience that something was going on.”

That “tell” is important, Laney said. He called it an “underblur” or “halo,” intentionally added so you’ll be aware of the manipulation. His method essentially involved filming unrelated subjects in a studio with a nine-camera array capturing their faces from all angles. Their visual information was married on a deep-learning level with the Chechen footage.
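Laney has not published his pipeline, but the process he describes echoes the standard face-swap design behind most deepfakes: a single shared encoder learns a common representation of faces, two decoders each learn to reconstruct one specific person, and the swap happens when footage of one person is encoded and then decoded as the other. What follows is a minimal sketch of that general design, assuming small 64-by-64 face crops and an L1 reconstruction loss; the network sizes, training details and names are illustrative assumptions, not details of the film’s actual system.

```python
# Minimal sketch of the classic shared-encoder / dual-decoder face swap.
# This illustrates the general deepfake technique, not Laney's pipeline;
# all shapes and hyperparameters are assumptions for demonstration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a face crop to a shared latent representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_subject = Decoder()    # learns to rebuild the hunted subject's face
decoder_volunteer = Decoder()  # learns to rebuild the volunteer's face

loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_subject.parameters())
    + list(decoder_volunteer.parameters()),
    lr=1e-4,
)

# One training step on dummy tensors standing in for aligned face crops.
faces_subject = torch.rand(8, 3, 64, 64)
faces_volunteer = torch.rand(8, 3, 64, 64)
loss = (
    loss_fn(decoder_subject(encoder(faces_subject)), faces_subject)
    + loss_fn(decoder_volunteer(encoder(faces_volunteer)), faces_volunteer)
)
opt.zero_grad()
loss.backward()
opt.step()

# The swap: encode the subject's frame, then decode with the *volunteer's*
# decoder, producing the volunteer's face wearing the subject's expression.
with torch.no_grad():
    swapped = decoder_volunteer(encoder(faces_subject))
```

In a production pipeline, the swapped face would then be color-matched and composited back over the original frame; the deliberate “underblur” Laney describes would be left in at that stage, as a visible tell.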

The unrelated subjects “were mostly queer activists in New York who I found on Instagram and elsewhere,” France said. “I asked them if they would lend their faces as a human shield to protect the identities of the folks I’d filmed. And they signed on as an act of activism.”

This melding of techniques resulted in an illusion that received high marks from a Dartmouth study group researching the so-called uncanny valley, the unease that almost-but-not-quite-human faces tend to provoke.

In a sense, “Welcome to Chechnya” is an applied special effect that lasts an entire movie. Until it doesn’t: In one of the film’s more breathtaking moments, the effects drop away after a gay refugee, Maksim Lapunov, reclaims his name — and his real face — at a news conference. “I wanted you to feel what he felt at that moment,” France said.

D. Fox Harrell, professor of digital media and artificial intelligence at the Massachusetts Institute of Technology, said there was educational potential in “synthetic media,” the term he prefers to deepfake, redeeming it from mere deception. Training viewers to look for even the obvious signs would help, he said. “To have the savvy to negotiate a political media landscape where a video could potentially be a deepfake, or a legitimate video could be called a deepfake, I think those are cases people need to be aware of,” he said.

One creative project that’s come out of Harrell’s program, the Center for Advanced Virtuality, is a seven-minute video titled “In Event of Moon Disaster.” In it, a notorious 1969 speech written for Richard Nixon — meant to be delivered only in the wake of a catastrophic Apollo 11 mishap — is merged with the actual televised 1974 resignation speech, resulting in a spooky bit of alternative history. The piece will go live on July 20 at moondisaster.org.

“I come from a news background, so I love the idea of making something that really could have happened,” said Francesca Panetta, co-director of the video. Speaking by Zoom from London, she summed up the swirl of emotions — nervousness, liberation, even possible enlightenment — set off by deepfake’s arrival in the documentary world.

Panetta recalled talking with her mother, who had sent her bad information about Covid-19. “She would say, ‘But it was texted to me by my friend.’ And the conversation I had with her is what I want people to have after seeing my film: Where did it come from?”

In a way, she said, that discussion is “more important than specific techniques on how to spot deepfakes because the technology is going to get better so fast.”

“At the moment,” she added, “most deepfakes you can spot with the naked eye.”

France isn’t hiding his. He wants you to see them. “All technology has a dual moral purpose,” he argued. “What our film proves is that this can be done in the right way.”