In the second season of the BBC’s mystery thriller The Capture, deepfakes threaten the future of democracy and Britain’s national security. In a dystopia set in modern-day London, hackers use AI to insert highly realistic fake video of people into live news broadcasts to destroy politicians’ careers.
But my team’s research has shown how difficult it is to create convincing deepfakes in real life. In fact, techies and creative professionals have started collaborating on solutions to help people spot fake videos of politicians and celebrities. We have a decent chance of staying one step ahead of fraudsters.
In my research project, Virtual Maggie, I tried to use deepfakes to digitally resurrect former British Prime Minister Margaret Thatcher for a new drama. After months of work, we were unable to create a virtual Maggie that was acceptable for broadcast.
Producing convincing high-resolution deepfakes requires top-spec hardware, lots of computing time, and human intervention to fix errors in the output. This did not stop me from enjoying The Capture, despite knowing that Ben Chanan’s drama was not a scenario likely to play out in the near future. Like any good dystopia, it had the seed of something that would one day be possible.
The way deepfakes have been used since they first appeared in 2017 is shocking. The majority of deepfakes on the internet abuse women, taking facial images without consent and inserting them into pornographic content. Deepfakes expert Henry Ajder found that 96% of deepfakes online were pornographic, and that all of those depicted women.
The premise of The Capture is grounded in fact. Deepfakes threaten democracy. In the 2019 British general election, artist Bill Posters released a provocative deepfake video in which Boris Johnson appeared to endorse his rival Jeremy Corbyn.
Posters’ deepfake was far more convincing than the crude Russian deepfake showing Ukrainian President Volodymyr Zelenskyy ordering his troops to surrender. Still, unlike the Kremlin, the British artist made it obvious that his AI Boris was unreal by having “Boris” direct viewers to a website about deepfakes. His aim was to highlight our vulnerability to false political propaganda.
Deepfakes may not yet be convincing enough to fool people. But creative work usually involves an unwritten agreement between creator and audience: the willing suspension of disbelief.
The threat from deepfakes has led to an intense search for technical solutions. A coalition of companies has formed the Content Authenticity Initiative (CAI) to provide “a way to evaluate the truth in the media presented to us”.
It is a promising approach. CAI collaborators and technology companies Truepic and Qualcomm have created a system that embeds the history of an image into its metadata so that it can be verified. American photographer Sara Naomi Lewkowicz has completed an experimental project with CAI that embeds source information into her images.
But creative and tech professionals don’t necessarily want to miss out on the emerging technology of deepfakes. Researchers at the Massachusetts Institute of Technology Media Lab have brainstormed ways to put deepfakes to good use. Some of these are in care and treatment.
Research engineers Kate Glazko and Yiwei Zheng are using deepfakes to help people with aphantasia, the inability to form mental images. Their breakup simulator, which is in development, aims to use deepfakes to “relieve the anxiety of difficult conversations through repetition”.
The most profoundly positive uses of deepfakes include campaigns for political change. The parents of Joaquin Oliver, who was killed in a 2018 Florida school shooting, used the technology to bring him back in a powerful video calling for gun control.
There are also cultural applications of deepfakes. At the Dali Museum in Florida, a deepfake Salvador Dalí welcomes visitors and talks about himself and his art. Researcher Mihaela Mihailova says this gives visitors “a sense of immediacy, closeness and personalisation”. Deepfake Dalí even offers you the chance to take a selfie with him.
Deepfakes and AI-generated characters can be instructive. In Shanghai, during the lockdown, associate professor Jiang Fei noticed that his students’ attention spans decreased during online classes. To help them focus better, he used an anime version of himself as a front for his teaching. Jiang Fei said, “The enthusiasm of the students in the class and the improvement of the quality of homework have made obvious progress.”
Channel Four used its alternative Christmas message for 2020 to entertain viewers with a deepfake Queen, while making a serious point about not trusting everything we see on video.
A growing network of filmmakers, researchers and AI technologists in the UK, hosted by the University of Reading and funded by the Alan Turing Institute, is seeking to harness the positive potential of deepfakes in creative film production. Filmmaker Benjamin Field told the group during a workshop how he used deepfakes to “resurrect” the animator who created Thunderbirds for Gerry Anderson: A Life Uncharted, a documentary about the troubled lives of children’s TV heroes.
Field and his co-producer, Anderson’s youngest son Jamie, unearthed old audio tapes and used deepfakes to construct a “filmed” interview with the famous puppeteer. Field is among a small group of creatives determined to find positive ways to use deepfakes in broadcasts.
Deepfakes and AI characters are part of our future and the examples above show how this can be at least partially positive. But we also need laws to protect people whose images are stolen or misused, and ethical guidelines for how deepfakes are used by filmmakers. Responsible producers have already formed a partnership on AI and drafted a code of conduct that could help avert the catastrophic vision of the future we saw in The Capture.