AI tools are making it even easier to make realistic fake videos. By Alvin Soon




Fake news is going to become more believable, thanks to advancements in AI. While the tools to create fake news are advancing, defenses against fake news are crumbling. We may soon live in a time where nobody can be sure what’s real anymore.

Weapons of mass disinformation are the new reality. The United States is investigating a disinformation campaign surrounding its 2016 presidential election. Fake news likely played a part in persuading UK voters to push for Brexit.

But it turns out that what we’ve seen so far has been small-arms skirmishes. Fake news is getting a weapons upgrade, and the unholy matrimony of AI with porn is giving us a preview.

At the end of 2017, Redditor ‘deepfakes’ released porn videos apparently starring actors like Gal Gadot and Emma Watson. The videos were fake, but startlingly realistic for a single person’s work in his spare time.

Deepfakes revealed that he’d used machine learning to superimpose the actors’ faces onto the videos. Everything he’d used was easily available: celebrity images from the internet, and open-source tools like Google’s TensorFlow. The videos from deepfakes were shocking enough. But then another Redditor, ‘deepfakeapp,’ released an app that allowed anyone to create fake porn videos.
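The core trick behind this kind of face swap is widely understood to be an autoencoder with one shared encoder and a separate decoder per identity: the shared encoder learns features common to both faces, so encoding person A and decoding with person B’s decoder renders B’s face in A’s pose. The sketch below is purely illustrative, not the Redditor’s actual code: real systems use deep convolutional networks trained on thousands of images, while this uses tiny linear layers and NumPy so the structure is visible.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8  # flattened "image" size and bottleneck size

def init(shape):
    return rng.normal(scale=0.1, size=shape)

W_enc = init((DIM, LATENT))          # shared encoder weights
W_dec = {"A": init((LATENT, DIM)),   # one decoder per identity
         "B": init((LATENT, DIM))}

def encode(x):
    return x @ W_enc

def decode(z, who):
    return z @ W_dec[who]

def train_step(x, who, lr=0.005):
    """One gradient step minimizing reconstruction error for one identity.

    The decoder for `who` AND the shared encoder are both updated, which
    is what forces the encoder to learn identity-agnostic features.
    """
    global W_enc
    z = encode(x)
    x_hat = decode(z, who)
    err = x_hat - x                          # gradient of squared error
    grad_dec = np.outer(z, err)              # backprop into this decoder
    grad_enc = np.outer(x, err @ W_dec[who].T)  # backprop into shared encoder
    W_dec[who] -= lr * grad_dec
    W_enc -= lr * grad_enc
    return float((err ** 2).mean())

# Synthetic stand-ins for face images of two people.
face_A = rng.normal(size=DIM)
face_B = rng.normal(size=DIM)

first = last = None
for step in range(1000):
    loss = train_step(face_A, "A") + train_step(face_B, "B")
    if step == 0:
        first = loss
    last = loss

# The "swap": encode A's face, then reconstruct it with B's decoder.
swapped = decode(encode(face_A), "B")
print(f"loss {first:.3f} -> {last:.3f}, swapped shape {swapped.shape}")
```

The unsettling part is how little is special here: swap the linear layers for convolutional ones, feed in scraped celebrity photos, and this same shared-encoder structure is what produced the videos described above.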

The technology to make people seemingly do things on video has been brewing for a while. In recent years, we’ve seen demonstrations like the Face2Face research project, which can realistically manipulate faces on video. Adobe’s VoCo audio editing suite can simulate a person’s voice after listening to it for 20 minutes. Combine the two and you have a recipe for convincing fake videos.

While the tools to create fake news are advancing, the tools to discredit them aren’t keeping pace. Fake news still goes viral on platforms like Facebook, Google, Twitter, and YouTube, especially in the aftermath of tragedies. After the Parkland high school mass shooting in the US, for example, YouTube videos that maligned victims as actors shot to the top of the Trending list.

These platforms have automated systems to flag violations of standards. But Facebook says that even catching text-based violations is difficult to automate. And while platforms can spot copies of images, videos and audio for copyright infringement, detecting altered media is more difficult. This makes it easier for doctored media — i.e. fake news — to spread.

The traditional bulwark against fake news has been journalism. But that bastion is fading just as fake news is rising. Media companies have lost advertising revenue to the very platforms that are enabling the spread of fake news. The consequence is that traditional media is understaffed and outgunned.

The truth is not ready for how good fake news is becoming. We’ve already seen how effective disinformation campaigns are at influencing entire nations, using current technology. So it should be worrying even to imagine what comes next.