'Deepfake' AI Tech Is The Real Fake News You Need To Fear
A doctored video of Facebook's Mark Zuckerberg has exposed a frightening new technology that can bring photos to life and fool even the most sceptical news consumers.
The clip was met with international outrage this week despite the fact it is entirely fake.
The video shows Mark Zuckerberg sitting at a desk, seemingly giving a sinister speech about Facebook's power.
"Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures," a digitally created Zuckerberg said.
"Whoever controls the data, controls the future."
The Zuckerberg video was created for 'Spectre', an art installation in the United Kingdom featuring videos generated by artificial intelligence (AI).
The video is the most recent example of 'deepfake' technology -- an emerging type of artificial intelligence.
More innocent versions of deepfakes were created by Samsung's AI technology, bringing to life celebrities like Marilyn Monroe and Albert Einstein.
But the technology is now being used in the political landscape, particularly in the U.S. in the lead-up to the 2020 Presidential election.
A doctored video of Democratic leader Nancy Pelosi released last month -- slowed down to make her appear impaired, rather than a true deepfake -- has put politicians on all sides on high alert.
What Is Deepfake?
Seeing is no longer believing.
Deepfake is AI technology that allows videos to be doctored, or created entirely from scratch, to make someone appear to say or do anything.
They are created by taking source video, identifying patterns of movement within a subject's face, and using AI to recreate those movements in a target video.
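At its very simplest, that process involves aligning facial landmarks from a source face onto a target face. The sketch below, using only numpy, illustrates that alignment step with a basic least-squares affine fit; it is a deliberately minimal illustration, not the method real deepfakes use (those rely on deep neural networks such as autoencoders or GANs), and the function names and toy landmark points are the author's own for demonstration.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src landmark points onto dst.
    src, dst: (N, 2) arrays of facial landmark coordinates."""
    n = src.shape[0]
    # Homogeneous coordinates [x, y, 1] so the fit can include translation
    A = np.hstack([src, np.ones((n, 1))])
    # Solve A @ M = dst for the 3x2 transform matrix M
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(points, M):
    """Map (N, 2) points through the fitted 3x2 affine transform."""
    n = points.shape[0]
    return np.hstack([points, np.ones((n, 1))]) @ M

# Toy example: three 'landmarks' on a target face that is scaled and shifted
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src * 2.0 + np.array([10.0, 5.0])
M = estimate_affine(src, dst)
mapped = apply_affine(src, M)
```

In a real pipeline this geometric alignment is only a pre-processing step; the face itself is then synthesised frame by frame by a trained neural network.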
Why Is Deepfake Dangerous?
Previously, the most well-known examples of deepfake videos were pornographic clips of celebrities, including Scarlett Johansson, Gal Gadot and Emma Watson.
While the explicit videos are embarrassing, there are wider concerns about how the technology could be weaponised in a political landscape.
In an era of 'fake news', there are concerns deepfakes could play a dangerous role in national security and politics by spreading false information to sway opinion, said Dr Raymond Sheh, from Curtin University, who specialises in artificial intelligence, robotics and cyber security.
"The problem comes as much from people who deliberately generate deep fakes with the expressed purpose of manipulating opinion, as it comes from people generating deep fakes for fun and then others seeing them, not realising they're fakes, and taking them as evidence of their point of view," he told 10 daily.
The biggest danger, he said, comes from people selectively trusting and passing on what they hear and see, without checking their sources.
"This risks having the credibility of a news source being less about how accurate the news is and more about how well it matches what they already believe," he said.
What Is Being Done About Deepfake Technology?
Sheh compared regulating deepfakes to being able to regulate what someone can write with a pencil -- it's just not possible.
"You can tell people not to create them but at the end of the day there isn't actually anything you can do to stop them."
The U.S. Defense Advanced Research Projects Agency (DARPA) has spent millions on research into detecting deepfakes over the past two years, but this is an "arms race", Sheh said.
"The same research that can detect deep fakes can also make them better able to escape detection," he said.
The Australian government is currently discussing its options for AI, because while it can have social, economic and environmental benefits, there are also ethical problems, such as privacy protection and transparency.
March 31 marked the end of submissions to the Department of Industry, Innovation and Science for the government's approach to an Ethics Framework for AI.
The government and industry experts aim to create an ethics plan on how AI can be used and developed within Australia.
'The Conversation', an Australian media outlet funded by universities, government, business and the research sector, is just one of the partners involved in the development of the framework.
"The ethical framework looks at various case studies from around the world to discuss how AI has been used in the past and the impacts that it has had," The Conversation said.
"The case studies help us understand where things went wrong and how to avoid repeating past mistakes."
How Can We Tell If Something Is Deepfake?
In short, it's almost impossible.
The biggest problem for deepfakes was once making them "believable", but that is no longer a concern, with eerily realistic videos surfacing regularly.
Fact-checking and extra research can help people work out if something is legitimate or not.
If something sounds outrageous, it might not be real.
The best way of protecting against deepfakes and 'fake news' is for people to use "critical thinking, critical reading and avoiding jumping to conclusions", said Sheh.
"It's not the easy answer and it is, in and of itself, full of controversy as it cuts to the notion of 'truth'," he said.
Contact the author at firstname.lastname@example.org