Where the Truth Lies:
Chloe Lizotte on MoMI’s exhibition Deepfake: Unstable Evidence on Screen
I’m crouching on the floor on the periphery of a midcentury-styled living room, squinting at an image of Richard Nixon on an analog television. Seated at his desk in the Oval Office, he gravely tells us that the Apollo 11 astronauts have perished before landing on the moon. “You can see some of the blur around his neck and chin,” I hear a man standing nearby tell his girlfriend. It’s true, but if your attention were divided, Nixon’s emotional tonalities and solemn pauses would sound like perfectly natural background noise.
This was the gently disorienting experience of In Event of Moon Disaster, the centerpiece installation of the Museum of the Moving Image’s exhibition Deepfake: Unstable Evidence on Screen. Of course, this Nixon speech never happened, but the video is so realistic that it’s not impossible to mistake a few decontextualized excerpts for an actual address, perhaps about fallen Vietnam soldiers rather than the Apollo astronauts. The piece, which feels all the more authentic for incorporating actual footage from inside the Apollo capsule, was conceived by its directors, Francesca Panetta and Halsey Burgund, with the Center for Advanced Virtuality at MIT, for educational as well as artistic purposes. The exhibition also includes a Scientific American featurette that breaks down its production. Panetta and Burgund began with the text of real remarks William Safire wrote for Nixon to deliver if the astronauts didn’t make it home, then decided to sculpt a deepfake from fittingly somber footage of his resignation speech. Next, they funneled a massive audiovisual archive of Nixon’s speeches into machine-learning technology at two companies, Canny AI for the visuals and Respeecher for the sound, which fused Nixon’s voice with the performance of an actor mimicking his inflections. Alexei A. Efros of the Berkeley Artificial Intelligence Research Lab has compared this process to autocomplete: the vaster the sample set of a real-life subject’s behavior, the more accurately AI will be able to render them saying or doing something they haven’t. Even Nixon would be surprised to find such dramatic possibilities in an arsenal of taped recordings.
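The two companies’ division of labor suggests a general shape for this kind of voice conversion: one network encodes the actor’s performance, a learned embedding stands in for everything distilled from the target’s archive, and a decoder marries the two. Here is a minimal sketch of that idea, with hypothetical module names and dimensions throughout; it is a schematic, not Respeecher’s actual system.

```python
# A schematic of archive-driven voice conversion, loosely in the spirit
# of the Moon Disaster pipeline described above. Hypothetical throughout.
import torch
import torch.nn as nn

class VoiceConverter(nn.Module):
    def __init__(self, n_mels=80, hidden=256, speaker_dim=64):
        super().__init__()
        # Encodes the actor's performance: timing, emphasis, inflection.
        self.content_encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # Stands in for a voice model fit to the target speaker's archive.
        self.speaker_embedding = nn.Parameter(torch.randn(speaker_dim))
        # Renders the performance in the target speaker's timbre.
        self.decoder = nn.GRU(hidden + speaker_dim, n_mels, batch_first=True)

    def forward(self, actor_mels):  # (batch, time, n_mels) spectrogram
        content, _ = self.content_encoder(actor_mels)
        speaker = self.speaker_embedding.expand(
            content.size(0), content.size(1), -1)
        converted, _ = self.decoder(torch.cat([content, speaker], dim=-1))
        return converted  # same words and phrasing, different voice

# An actor's recording goes in; "Nixon" comes out, in principle.
fake = VoiceConverter()(torch.randn(1, 300, 80))
```

Efros’s autocomplete analogy lives in that embedding: the larger the archive of the subject’s behavior used to fit it, the more convincing the rendered voice.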
The MoMI exhibition seeks to demystify this technology as it evolves. Although Panetta and Burgund have access to cutting-edge AI, the show’s co-curators—Joshua Glick, an Assistant Professor of English, Film & Media Studies at Hendrix College and a fellow at MIT’s Open Documentary Lab, and Barbara Miller, MoMI’s Deputy Director for Curatorial Affairs—stress deepfakes’ roots in widely accessible technology. The term was coined in 2017, when the Redditor u/deepfakes began posting AI-generated, face-swapped porn videos. Pieces of the technology were already available—a face-swapping tool was popular on Snapchat, Google had released a free machine-learning library called TensorFlow for researchers, and Adobe had debuted its synthetic audio tool VoCo in 2016—but these capabilities had never been unified. Crucially, u/deepfakes also shared a downloadable suite of consumer-level editing tools to encourage others to make their own swaps. These videos circulated on the subreddit r/CelebFakes—mega-celebrities like Gal Gadot and Scarlett Johansson were common face-swap targets—until a designated subreddit, r/deepfakes, was created to handle the traffic. Reddit shut down r/deepfakes in 2018 and added new rules surrounding “involuntary pornography” to its safety guidelines, but the subreddit already had nearly 90,000 subscribers who could easily flock to other platforms like Discord. Meanwhile, a 2019 study conducted by Deeptrace found that 96% of all deepfakes in circulation were pornographic, involving people who had not consented to the use of their images.
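The unification u/deepfakes achieved is easy to caricature in code. The classic hobbyist face-swap trains one shared encoder on both identities and a separate decoder per face; the swap itself is just decoding with the wrong one. Below is a minimal sketch of that architecture, with dense layers standing in for the convolutional networks actual tools use; it illustrates the idea, not the distributed code.

```python
# The shared-encoder, per-identity-decoder design behind early face-swap
# tools, reduced to a toy. Train decoder_a only on A's faces and
# decoder_b only on B's; the swap is a cross-decode.
import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self, latent=512, side=64):
        super().__init__()
        self.side = side
        # One encoder sees both identities, so it is pushed to capture
        # what they share: head pose, expression, lighting.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(side * side * 3, latent), nn.ReLU())
        # Each decoder learns to paint one specific face.
        self.decoder_a = nn.Sequential(
            nn.Linear(latent, side * side * 3), nn.Sigmoid())
        self.decoder_b = nn.Sequential(
            nn.Linear(latent, side * side * 3), nn.Sigmoid())

    def reconstruct(self, imgs, identity):
        # Used during training: each decoder rebuilds its own person.
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(self.encoder(imgs)).view(-1, 3, self.side, self.side)

    def swap(self, imgs_of_a):
        # Inference: A's pose and expression, rendered with B's face.
        return self.decoder_b(self.encoder(imgs_of_a)).view(
            -1, 3, self.side, self.side)
```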
Despite that 96 percent figure, much of the current deepfake hysteria stems not from questions of consent, but from fears about election manipulation. It has never been easier to undercut the “truth” of filmed reality, but verifying the truth of images was never straightforward to begin with. “We wanted to de-escalate some of the alarmist headlines surrounding deepfakes and reflect on the fact that media manipulation is nothing completely new,” Glick told me over Zoom. “It’s taken many forms, and it’s evolved.” The exhibition begins with a full gallery exploring this legacy, starting in 1899 with “combat” footage shot by the Edison company during the Spanish-American War. Despite the impossibility of lugging heavy filming equipment into war zones, these dramatic reenactments of battles appeared real to late-19th-century viewers, who were still getting used to the baseline concept of moving images. Films like these have always been central to fashioning American self-image: during World War II, the government collaborated with filmmakers like Frank Capra to cinematize its military program, a thread the gallery extends by excerpting CNN’s around-the-clock coverage of the Gulf War. Elsewhere, there are sensationalistic clips from Geraldo Rivera’s Satanic Panic special, which may seem quaint in retrospect but, when it aired, fueled a moral panic and became the highest-rated TV documentary up to that point. The exhibition also introduces non-AI “cheapfakes” and “shallowfakes,” which speed up, slow down, or recontextualize videos to fabricate a different reality—as one example, Moon Disaster manufactures tension from shaky images of the astronauts’ cabin and cleverly trimmed verbal snippets, implying an Apollo “crash.” In the gallery, we see clips from the Rodney King trial, in which defense lawyers altered the speed of the footage of King’s beating to sway the jury toward acquittal.
The MoMI show, however, is interested in more than the distortion of recorded evidence. By including the Zapruder film of the JFK assassination and the specter of the moon landing, the show also explores how disillusionment with the state can stoke doubts about unedited footage. In this light, an important precursor of deepfakes is the 9/11 conspiracy documentary Loose Change, directed by Dylan Avery. The brief clip looped at MoMI emphasizes the film’s seductive style over its pop-science truther narrative. Propelled by a slick synth score, collaged archival footage, and first-person narration, Loose Change seizes on the Pavlovian appeal of mystery—as did its circulation over email, which united viewers in a crusade to question the official story. This tone of conspiratorial intimacy is common across a vast ideological array of cinematic counter-narratives, from credible filmmakers like Adam Curtis to amateur crackpots on YouTube; one of Loose Change’s largest backers was Alex Jones, who built a similar atmosphere around InfoWars. All of this fueled backlash against a 2005 Popular Mechanics investigation that debunked many of Loose Change’s claims, as though it had betrayed Avery’s renegade vision.
Loose Change illustrates how emotional impulse dictates the spread of information on the internet. Social media easily blurs the line between conversation and news, and because of implicit bias and political affiliation, users of these apps are psychologically resistant to fact-checking. This poses a challenge to tech companies’ content moderation strategies, which currently depend on AI algorithms. Twitter and YouTube use them to scan for “copyrighted” material, which inadvertently leads to the deletion of fair-use, noncommercial fan videos. Many of these platforms also rely on keyword filtering to flag misinformation, but their algorithms’ ability to parse context is questionable at best, leading to alarmist labels on non-sequitur, satirical tweets. A comparable approach to deepfakes would be disastrous. Data-driven tools require constant maintenance: although Facebook’s detection tool successfully recognized 82 percent of deepfakes from a vast in-house archive, it was only 65 percent accurate when encountering unfamiliar images. Moreover, these tools lack the nuance to separate malicious forms of deepfakery from a video of, say, Queen Elizabeth breakdancing—the MoMI show features this vision in a Channel 4 anti-misinformation PSA, which resembles a very high-budget JibJab video. Finally, these algorithms wouldn’t catch cheapfakes and shallowfakes; Facebook’s 2020 policy to combat deepfakes exclusively covers AI-produced videos. Only a human moderator would be able to sort through these tonal differences—at least, from the technological vantage point of March 2022.
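The gap between those two figures has a simple mechanical intuition: a learned detector keys on the telltale artifacts of the fakes it was trained on, so a generator with different or subtler artifacts slips past it. A toy illustration, with invented features and numbers (this is in no way Facebook’s detector):

```python
# Toy demonstration of why deepfake detectors degrade on unfamiliar
# material. Features and "artifact strengths" are invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def videos(artifact_strength, n=500):
    # Pretend each video reduces to 10 numeric features; the "strength"
    # stands in for how visible a given generator's glitches are.
    return rng.normal(artifact_strength, 1.0, size=(n, 10))

real = videos(0.0)            # real footage: no artifact signal
seen_fakes = videos(1.0)      # the generator the detector trained against
unseen_fakes = videos(0.4)    # a newer generator with subtler artifacts

X = np.vstack([real, seen_fakes])
y = np.array([0] * 500 + [1] * 500)
detector = LogisticRegression(max_iter=1000).fit(X, y)

# Detection rate collapses on fakes the detector never saw in training.
print("flagged (seen):  ", detector.predict(seen_fakes).mean())
print("flagged (unseen):", detector.predict(unseen_fakes).mean())
```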
***
Deepfakes are not exclusively a genre of misinformation, but the product of evolving AI technology. “As curators, it was important for us to not advance an argument that AI is malicious and necessarily leads to bad actors who want to deceive people,” Glick explained. “We encouraged people to think about multiple uses of the technology. That could be amateur artists, journalists, filmmakers, or even everyday citizens interested in how this technology lives in the world.”
While surveying these alternative uses, the exhibition examines how the ethics surrounding them remain in flux. For example, David France’s 2020 film Welcome to Chechnya deployed deepfakes to protect the identities of LGBTQ refugees in the Caucasus. In practice, this was a high-tech substitute for pixelating their faces: the filmmakers cast consenting activists as face doubles for the subjects, but they also maintained a blur around the edges of the swapped faces to keep the effect from looking too seamless. Recent debates have centered on the presence or absence of this kind of sensitivity: last year, Morgan Neville’s Anthony Bourdain documentary Roadrunner attracted controversy for including deepfaked audio of Bourdain reading emails that he wrote but never read aloud. Bourdain’s ex-wife Ottavia Busia insisted on Twitter that Neville had never received permission, and Neville never clarified within the film that this audio was artificial, leaving viewers uneasy about how difficult it was to detect.
This uncanny realism makes celebrity deepfakes a popular choice for politically minded shorts. Stephanie Lepp’s series Deep Reckonings optimistically situates them as a vehicle for change: her videos imagine moments of extraordinary “moral courage” for those in positions of power. The MoMI show includes such a reckoning with a synthetic Mark Zuckerberg, who apologizes to the camera for being “naïve” in his approach to proliferating misinformation on Facebook. The video calls on Zuckerberg to see the moral light, hoping that an appeal to reason will compel him to implement systemic reform. That’s quite a gamble, but Zuckerberg did revise Facebook’s AI policy after seeing a deepfake of himself, made by the artist Bill Posters for an installation called Spectre. Posters’s approach was more ironic, depicting figures like Zuckerberg and Kim Kardashian gloating about their misuse of data or manipulation of their social media followings.
Lepp’s vision of Zuckerberg has surface-level parallels with the ending of Charlie Chaplin’s The Great Dictator: an impassioned monologue for peace and tolerance in which Chaplin seems to simultaneously appear as his dual characters in the film—a Hitler proxy and a Jewish barber—and as himself. Though the scene concludes the film, it refuses neat resolution: Chaplin felt conflicted about the slippery aspects of his acting, telling his son, “He’s the madman, I’m the comic—but it could have been the other way around.” By unifying all these characters in a triple performance, Chaplin makes the strain of building a better world feel communal, though that realization requires a daunting amount of trust and collaboration. So instead of letting his speech absolve him, Chaplin appears vulnerable, even desperate. Although Deep Reckonings tries to use a fantasy version of Zuckerberg to posit a way forward, it doesn’t reflect on the technology it’s using to do so, nor on the work involved to get there. There’s something contradictory about using a deepfake to sincerely appeal to human pathos; after all, this is a technique that exists to deceive. And although it’s important for tech companies to accept responsibility and contribute toward a solution, focusing on a single person or online environment feels like an incomplete diagnosis of the moment. After all, photographic evidence was unnecessary for a politically significant fringe to await the resurrection of John F. Kennedy, Jr.
Some of the best art about media misinformation has been satirical; it’s a less tidy and didactic way of encouraging attentive viewing. Chris Morris’s caustic evening-news caricature Brass Eye skewered the feverish illogic of moral panics, at one point successfully convincing a Tory MP to warn the House of Commons about a fake party drug called “Cake.” More recently, Tim Heidecker and Gregg Turkington’s On Cinema universe spoofed the conspiracy genre with Xposed, a jewel of a web series in which Michael Matthews reads aloud nonsensical Wikipedia pages to prove that the Loch Ness Monster is real or that the Lincoln assassination demands a second look. (Naturally, the theme song is a symphony of X-Files synths.) The satirical deepfake on view at MoMI comes closer to a PSA: Sassy Justice, a web series pilot from Peter Serafinowicz and South Park creators Trey Parker and Matt Stone. Serafinowicz—co-creator of the BBC educational-video parody series Look Around You, and a skilled impressionist—plays a reporter based in Wyoming who warns of the threat of deepfakes…only he is, himself, face-swapped with Trump. The Trump humor and flamboyant mannerisms of Serafinowicz’s character fall back on clichés, but the episode mainly strives to teach viewers to spot deepfakes by using common sense.
Since the form is so new, it makes sense that increased awareness would be an early goal for deepfake practitioners. Interestingly, the most memorable scene in Sassy Justice is not expository, but darkly comedic: Parker’s young daughter is face-swapped with Jared Kushner, fidgeting as a toddler would while denying the Holocaust. The effect pushes the limits of a deepfake’s hyper-realism, and seems to imply that the technology could lead to bizarre, deeply unsettling art. When I interviewed Laurie Anderson last year, she told me she was writing an opera using what she charmingly called the “GAN People”: the convincing synthetic faces created by generative adversarial networks, the fodder for “This Person Does Not Exist.” A PNAS study from February 2022 showed that many people tend to find these fake faces both more realistic and more trustworthy than real ones, likely because they average out facial data. And yet, it’s strange to think of art when each of these uncomfortable revelations induces a new round of handwringing: is it even possible to hear these statistics without worrying about potential harm? In a way, this discomfort is the bedrock of another GAN project, “This Rental Does Not Exist,” which synthesizes generic Airbnb listings. Alongside grainy images of bland, over-lit interiors, the listing text reads like a William S. Burroughs cut-up; a listing for a “High End Condo for SXSW” begins with the Mark E. Smith-ian phrase, “Welcome to Copenhageniaacity.” The site transforms a familiar source of information into an absurd and vaguely threatening experience, a page that can’t be half-read. It’s a reminder that media literacy is holistic, relying on the same contextual skills that sniff out phishing scams or real-world con artists.
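For the technically curious, the adversarial setup behind the “GAN People” is compact enough to sketch: a generator invents faces from random noise while a discriminator tries to distinguish them from photographs, and each network trains against the other until the fakes pass. The toy training step below is generic and assumes flattened images; StyleGAN, the model behind “This Person Does Not Exist,” is vastly more elaborate.

```python
# One generic GAN training step: the discriminator learns to separate
# real photos from generated ones, and the generator learns to fool it.
import torch
import torch.nn as nn

G = nn.Sequential(  # noise in, flattened 64x64 RGB "face" out
    nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64 * 64 * 3), nn.Tanh())
D = nn.Sequential(  # image in, realness logit out
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_faces):  # (batch, 64*64*3), pixel values in [-1, 1]
    b = real_faces.size(0)
    noise = torch.randn(b, 128)

    # Discriminator: score real images as 1, generated images as 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real_faces), torch.ones(b, 1)) +
              bce(D(G(noise).detach()), torch.zeros(b, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator: push its fakes toward a "real" score of 1.
    opt_g.zero_grad()
    loss_g = bce(D(G(noise)), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
```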
In that spirit, the concluding room of the MoMI exhibition, the “Hall of Mirrors,” challenges museumgoers to identify the “giveaways” of deepfake videos. Usually there’s a glossy sheen on the subject’s face, or something slightly off about the content, like John Lennon expressing his devotion to podcasts. But the strangeness of deepfakes goes beyond whether or not we can comfortably identify them—they undo the conceptual underpinnings of sincerity, a highly nostalgic value post-Trump. Sincerity depends on acknowledging a shared social reality, which is an outdated fantasy online; an ascendant irony feels like a more honest shield. Still, as the Bourdain controversy clarified, the anxiety around deepfakes goes deeper than their effect on political earnestness: they threaten what it means to be human. Consider the 2019 news that a computer-generated James Dean would appear in a forthcoming Vietnam-era movie, Finding Jack. Isn’t part of the tragedy of James Dean that he gave only three performances—all proof of his indelible personality, which could never be replicated by a computer? But playing devil’s advocate: what kinds of films could be made experimental, uncanny, or simply weird by the addition of an animated, photorealistic James Dean? Perhaps an episode of CSI? (Or, as Heidecker and Turkington imagined, Dracula?)
The musician Holly Herndon has been among the first to model how a person’s likeness might be fashioned into a tool for artists. Working with the machine-learning software developers Never Before Heard Sounds, she created an “instrument” called Holly+ from samples of her own vocals: upload an audio file and Holly+ will reproduce that piece of music as performed by Herndon’s voice. Herndon argues that by treating the instrument as an open-source tool, she’ll actually increase the value of her original recordings; she also wanted Holly+ to genuinely appeal to artists, so she was committed to preserving the personality of her vocals (this video shows the technique at its best). Herndon’s outlook perhaps too recklessly embraces an emergent strain of techno-utopianism—a democratic committee will mint NFTs of artistically valuable Holly+ pieces—but she has been vocal about the need to envision a future built by artists, for artists, before techno-capitalists solidify the rules of gameplay. If anything has become clear from the streaming era of music, it’s that sustainable structures need to be developed alongside evolving technology, but when it comes to something as easily exploitable as cryptocurrency, they shouldn’t simply rely on a belief in good faith.
MoMI visitors were free to sit down in the In Event of Moon Disaster living room, but the day I was there no one—including me—dared to even step on the edge of its rug. Maybe this spoke to a self-protective impulse, as though we didn’t want to allow ourselves to enter this synthetic world. Obviously, though, we’re already there. We’ll likely have more sophisticated ways of processing deepfakes in the future, but their arrival presents a chance to reject the status quo and build something new—not a purely hopeful or hopeless chance, but a chance all the same.
Photo credit: Thanassi Karageorgiu / Museum of the Moving Image