(Noted News) — In an effort to spread awareness of the rapid advancement of deepfake technology, Microsoft has released an online quiz called ‘Spot The Deepfake’, challenging users to distinguish real videos and photos from deepfakes. The quiz follows Microsoft’s unveiling of its deepfake detection software just ahead of the U.S. election.
Deepfakes, which have taken the world by storm with their hyper-realistic impersonations of celebrities and politicians, utilize advanced machine learning and artificial intelligence to alter video and audio content.
Microsoft describes a deepfake as a “photo, video, or audio track created using artificial intelligence techniques to realistically simulate or alter people’s faces, movements, and voices, among other simulations.”
The “Spot The Deepfake” quiz tests users’ understanding of what a deepfake is, their ability to detect them, and their impact on the world, particularly in media and politics.
The homepage for the quiz features a video of Richard Nixon announcing to the world that the astronauts on the Apollo 11 mission to the moon had in fact died on their mission and never returned home.
“Good evening, my fellow Americans. Fate has ordained that the men who went to the moon, to explore in peace, will stay on the moon to rest in peace,” Nixon says.
A link next to the video that says “I’m confused, is this real?” leads to an explanation stating: “The video of Nixon is not real. It is actually a deepfake: a form of audiovisual manipulation that allows media to be easily and convincingly altered or fabricated to make it seem as if someone did or said something that they never did.”
The quiz goes on to ask users if a video of Jeremy Corbyn curiously endorsing his political opponent Boris Johnson is real.
“This video of Jeremy Corbyn endorsing his own political rival was actually created as an educational tool about deepfakes. Even if you can’t immediately tell that something is a deepfake just by looking at it, thinking about the motivations behind it can help you think through whether it is real.”
The deepfake detection technology, officially called Microsoft Video Authenticator, can analyze a video or a photo and assign it a percentage score indicating the likelihood that it is fake.
In a blog post titled “New Steps To Defeat Disinformation”, Microsoft’s corporate vice president of security and trust, Tom Burt, and chief scientific officer, Eric Horvitz, outline how and why Microsoft decided to combat deepfake technology and educate the public about it.
“Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it can provide this percentage in real-time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
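Microsoft has not published Video Authenticator’s model or API, so as a loose illustration only, the real-time per-frame scoring the blog post describes might look something like the following sketch, where `frame_confidence` is a toy heuristic standing in for the actual detector:

```python
# Hypothetical sketch of per-frame manipulation scoring, in the spirit of
# the blog post's description. The scoring function below is a placeholder,
# NOT Microsoft's detector, whose internals are not public.

def frame_confidence(frame):
    """Return a manipulation 'confidence score' in [0, 100] for one frame.

    A real detector would look for blending boundaries and subtle greyscale
    artifacts; this toy heuristic just measures the fraction of pixels
    sitting in a mid-grey band, standing in for 'subtle greyscale elements'.
    """
    mid_grey = sum(1 for pixel in frame if 100 <= pixel <= 155)
    return round(100 * mid_grey / len(frame), 1)

def score_video(frames):
    """Score each frame in sequence, mirroring the per-frame percentage
    the tool reports in real time as a video plays."""
    return [frame_confidence(frame) for frame in frames]

# Example: two tiny mock "frames" of 8-bit greyscale pixel values.
frames = [
    [0, 255, 40, 200, 10, 245],      # high-contrast frame
    [120, 130, 128, 140, 125, 150],  # mid-grey, heavily blended frame
]
print(score_video(frames))  # → [0.0, 100.0]: the blended frame scores higher
```

The point of the sketch is the shape of the pipeline, a score per frame rather than a single verdict per video, which is what lets the tool surface a running percentage as playback proceeds.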
Microsoft says its Video Authenticator is an important part of its “Defending Democracy Program”, an initiative to fight disinformation, protect voting, and secure campaigns and other aspects of the democratic process.
“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”
Microsoft also announced partnerships with the University of Washington, Sensity, and USA Today focused on media literacy.
“Improving media literacy will help people sort disinformation from genuine facts and manage risks posed by deepfakes and cheap fakes. Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens while still appreciating satire and parody. Though not all synthetic media is bad, even a short intervention with media literacy resources has been shown to help people identify it and treat it more cautiously.”