Deepfakes proliferate during election cycles. They're not a new phenomenon, but they've recently become much easier to create.
For example, one viral video shows Hillary Clinton endorsing Ron DeSantis for president. I checked outside: a clear day, with no pigs flying in the sky. So it must be fake.
If you watch to the end, the fake Clinton closes with "Hail Hydra," a reference to the Marvel superhero universe. This kind of humor appeals to internet natives, but videos like these also set off ripples of headlines expressing near-existential concern.
Deepfakes may prove to be a profound, destabilizing consequence of rapidly progressing AI technology. For now, though, they are mostly satirical, and the few viewers they fool tend to be the less internet-savvy.
What about deepfakes that aren’t humorous? How serious is the threat of deepfakes?
What are deepfakes?
I've covered the basics of a deepfake in another article about how image generators can create headshots of nonexistent people that pass for real ones. I wrote, "Deepfake refers to a computer-generated image or video of a person that is nearly indistinguishable from reality."
Dr. Ryan Denison wrote about AI-generated images of former President Trump being arrested by police while comically resisting arrest. Although the post was a harmless joke meant for a few people, it reached millions of views. In that anonymous sea of people, how many were led astray and thought Trump had been dragged into a police van? Hopefully few, especially since the fakes were so widely reported.
At the beginning of the war in Ukraine, Russia spread a deepfake of President Volodymyr Zelensky calling on Ukrainians to lay down their arms and surrender. It didn't get far, but it reveals a frightening application of AI.
Currently, most deepfakes seem relatively harmless and easy for the internet-savvy to spot. But could Russia or other bad actors use AI-generated deepfakes to cause unrest?
Are deepfakes a threat?
Gen Z and, by extension, internet humor rely heavily on sarcasm and irony. Satire drips from most deepfakes, and the commenters who seem to be "fooled" by the videos almost certainly belong to a community that treasures sarcasm above all else.
Nevertheless, viewers who aren't paying careful attention, or who aren't well-versed in internet culture, might genuinely be fooled. If a video goes viral, even a small percentage fooled could translate to thousands of individuals.
Reality might be "up for grabs" during elections as the technology improves. The CIA, for example, is trying to keep pace in an AI arms race against foreign adversaries. Social media companies are scrambling to regulate deepfakes. Consider another frightening prospect: doctored videos used as evidence in courtrooms.
The threat is not fully here, but it looms larger and larger.
For those less fluent in the ways of the internet, how do you spot a deepfake?
How do you spot a deepfake?
In a previous article, I wrote that AI can produce still images of human faces indistinguishable from real ones. Even with training, people could tell them apart only about 60 percent of the time.
Videos are a different matter and much easier to spot (for now).
- Watch people's mouths in videos. The voice often doesn't match the movement of their lips.
- Look for an "airbrushed" effect, where the area around the mouth or face looks smoother than normal; conversely, the person might have too many wrinkles.
- Watch for strange shadows or incorrect glare; the physics of light is difficult for AI.
- Watch for glitches and small distortions, especially around the hair.
- Look for strange, fluid movements when the head is in motion, like the forehead lagging behind the face.
- Read the comments. Twitter has a feature that lets users flag AI-generated content under "readers added context."
- The poorer the quality of the video, the easier it is to deepfake, so pay special attention to low-quality footage. Countless viral, "fun" videos reposted on social media are fake.
For more information, read MIT's resource dedicated to combating deepfakes. It includes a test, part of an ongoing study, that checks your ability to spot fake videos.
Deceit is as old as sin
The Bible was written through a partnership between God and human authors over more than a thousand years, with its last books completed nearly two thousand years ago. Suffice it to say, I didn't find any reference to AI-generated deepfakes.
Deceit, however, is as old as humanity—as old as sin. The snake deceived. Cain lied to God. Abraham lied about his wife. Deceit slithers through nearly every biblical story in one way or another.
Satire aside, deepfakes will be seriously abused and may pose a massive threat to an already frayed public discourse. Many bad actors hope to deceive viewers, sowing strife, unrest, and confusion.
The Lord declares of Israel through Jeremiah that they “[heap] oppression upon oppression, and deceit upon deceit” (Jeremiah 9:6). In the internet-based world, we have indeed heaped deceit upon deceit.
In my estimation, the misuse of deepfakes fits nearly everything the Lord hates from Proverbs 6:17–19: “A lying tongue,” “a heart that devises wicked plans,” “a false witness who breathes out lies,” and “one who sows discord among brothers.”
Grow in savviness and ask God to protect you from false hearts, not only from deepfakes but from the propensity of social media and the internet to deceive and divide.