How to Spot a Deep Fake When You Can't Trust Your Eyes

In January 2018, a desktop application called FakeApp was launched, allowing users to create and share videos with swapped faces. The app used an artificial neural network to generate the fake footage. Now, in 2019, "Deep Fake" technology uses machine learning to combine and superimpose existing photos and videos onto source images or footage. Making a person appear to say or do something they never did has the potential to take the war of disinformation to a whole new level. It's been possible to alter video footage for decades, but doing so took time, highly skilled artists, and a lot of money. Deep Fake technology is about to change the game. As it develops and proliferates, anyone will be able to make a convincing fake video, including people who might seek to weaponize it for political or other ill-disposed purposes.

I think it's clear that there's a lot to be concerned about when it comes to Deep Fake videos and their potential for malicious use. The mere possibility that Deep Fakes could render video evidence inadmissible in a court of law suggests we may have opened Pandora's Box. As these neural networks grow more sophisticated, human judgment will no longer be reliable enough to discern truth from illusion or fact from fiction. When this technology reaches the general public, it's going to create chaos. Authoritarian governments might use it to incriminate political dissidents or opponents, and people in the public eye could be easily defamed. We've already reached a point where simple photographic source images are sufficient to create convincing talking-head avatars for use in telepresence applications. Some might argue that Hollywood has been making "fake videos" using CGI and special effects for over a century and the world hasn't exploded yet. (Well, in some cases it has, at least on cinema screens.) But comparing Deep Fakes to Hollywood visual trickery is problematic because it ignores real-life context. Hollywood isn't lying to me when Robert Downey Jr. and Samuel L. Jackson suddenly appear 30 years younger in the Marvel movies. There is no wicked deception there. One can't compare digitally re-created actors in works of fiction to Deep Fake videos that disingenuous individuals or government agencies try to pass off as real.

Ongoing development of tools for fake-video and face-spoof detection, alongside the shift towards privacy and data security at major IT companies, may be a possible antidote. Or maybe not. The claim that breakthroughs in AI tools will be used to detect Deep Fakes and auto-debunk content may be wishful thinking. Let me stick to my point about real-life context for another argument. There's reason for concern about a future filled with incredibly realistic DF videos that the human eye cannot identify as digital fabrications, videos that will convincingly fool most people into believing a politician or public figure has said or done something they haven't. Once these non-events go viral globally in the blink of a tricked eye, fake news will become ubiquitous, and no one will believe anything anymore. Now, people being more skeptical isn't necessarily a bad thing. But when everything and anybody can be called into question, we'll trust no one; we'll even doubt that REAL video footage is authentic.

Consider the broken logic of this situation: a neural network is used to create an entirely convincing DF video, and then we have to rely on another AI to determine whether that footage is real, because we humans can no longer be relied upon to do so. Imagine the irony of AI systems being used to solve problems created by other AI systems. And what about false positives? I doubt that fake-detection accuracy will reach 100% anytime soon, and at the scale of millions of videos, even a small error rate means an enormous number of authentic clips wrongly flagged as fakes. Most civilians aren't going to understand what's actually going on under the hood of the AI software. How would we know that a DF detection AI wasn't manipulated and programmed to lie about which videos are authentic? There's going to be a real need for serious transparency for the public. Are we just supposed to trust the machines because our biological judgment is impaired? Producers of DF videos will likely be locked in an ongoing arms race to stay one step ahead of the detection tools, a little like the battle between the hackers who produce trojans, malware, and viruses and the antivirus software companies. So far, I haven't even touched on the 'digital puppetry' of human body movements or the synthesizing of fake audio that mimics the voices of real people. Some experts worry that truly convincing fakes could undermine public trust and heighten misinformation during (presidential) elections in ways that threaten the foundations of democratic institutions and governance.
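To make that false-positive worry concrete, here's a back-of-the-envelope sketch. Every number in it is an assumption I've made up for illustration (a hypothetical detector, a guessed rate of fakes in the wild), not a measurement of any real system:

```python
# Back-of-the-envelope: false positives from a hypothetical deepfake
# detector screening videos at platform scale. All numbers below are
# illustrative assumptions, not measurements of any real system.

videos_screened = 10_000_000   # videos checked per day (assumed)
fake_rate = 0.001              # assume 1 in 1,000 videos is actually fake
specificity = 0.99             # assumed true-negative rate (99% of real videos pass)
sensitivity = 0.95             # assumed true-positive rate (95% of fakes caught)

real_videos = videos_screened * (1 - fake_rate)
fake_videos = videos_screened * fake_rate

false_positives = real_videos * (1 - specificity)  # real videos flagged as fake
true_positives = fake_videos * sensitivity         # fakes correctly flagged

# Of all flagged videos, what fraction are genuinely fake?
precision = true_positives / (true_positives + false_positives)

print(f"Real videos wrongly flagged: {false_positives:,.0f}")
print(f"Fakes correctly flagged:     {true_positives:,.0f}")
print(f"Chance a flagged video is really fake: {precision:.1%}")
```

Under those made-up but not implausible numbers, roughly 99,900 authentic videos get flagged against about 9,500 actual fakes, so fewer than one in ten flags is a true fake. That's exactly the kind of result that erodes trust in the detector itself.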

One counter-strategy involves building a database of personalized models of celebrities, politicians, and other public figures, which would be used to train video-analysis algorithms to detect Deep Fake anomalies. MIT sees the likeliest solution in a balance between automated detection tools that can screen millions of videos and more sophisticated human-based scrutiny that can focus on trickier cases. Journalists, fact-checkers, and researchers could collect and weigh supporting evidence about the context of what a video supposedly shows in order to corroborate or debunk it. Remember when Facebook started using fact-checkers to determine what news was real and fake on the platform? I don't have any reason to trust human fact-checkers any more than I trust the AI, especially when it comes to politics. None of these proposed solutions to the impending Deep Fake Wars fills me with confidence. If a Deep Fake video is carefully crafted and pixel-perfect, and the faker works hard to remove any telltale signs of forgery, I just don't see how anyone's eyes would be sufficiently reliable. And if we have to bring in 'specialized' human fact-checkers to spot anomalies or digital trickery on our behalf, then we had better concern ourselves with the pitfalls of human error, subjective opinions, and bias.
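For what it's worth, the hybrid approach MIT describes can be sketched in a few lines. This is only a toy illustration of the triage idea; the thresholds, the `score_video` stand-in, and the routing labels are all hypothetical placeholders, not anything from a real detection tool:

```python
# Toy sketch of hybrid triage: an automated model scores each video,
# confident calls are handled automatically, and ambiguous cases are
# routed to human reviewers. Everything here is a made-up placeholder.

import hashlib
from dataclasses import dataclass

@dataclass
class Verdict:
    video_id: str
    score: float   # model's estimated probability the video is fake
    route: str     # "auto_pass", "auto_flag", or "human_review"

AUTO_PASS_BELOW = 0.05   # assumed: model is confident the video is real
AUTO_FLAG_ABOVE = 0.95   # assumed: model is confident the video is fake

def score_video(video_id: str) -> float:
    """Stand-in for a real detector: a deterministic pseudo-score for demo use."""
    digest = hashlib.sha256(video_id.encode()).digest()
    return digest[0] / 255.0

def triage(video_id: str) -> Verdict:
    score = score_video(video_id)
    if score < AUTO_PASS_BELOW:
        route = "auto_pass"
    elif score > AUTO_FLAG_ABOVE:
        route = "auto_flag"
    else:
        route = "human_review"   # the "trickier cases" left to human scrutiny
    return Verdict(video_id, score, route)

for vid in ("clip-001", "clip-002", "clip-003"):
    print(triage(vid))
```

Note what the middle band implies: the ambiguous scores, the "trickier cases," are precisely what lands on human reviewers' desks, which is where the human error, subjectivity, and bias I worry about come right back in.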
