May 5, 2020 – A new report from the Australian Strategic Policy Institute and the International Cyber Policy Centre detailed the state of rapidly developing “deep fake” technology and its potential to produce propaganda and misleading imagery more easily than ever.
The report, by Australian National University Senior Adviser for Public Policy Katherine Mansted and researcher Hannah Smith, explained the risks of artificial intelligence technology that allows users to falsify or misrepresent existing media, as well as to generate entirely new media.
While audio-visual "cheap fakes" (media edited with tools other than AI) are not a recent phenomenon, the rapid rise of artificial-intelligence-powered technology has produced several means by which nefarious actors can generate misleading material at a staggering pace. The ASPI report highlighted four of them.
First, the face swapping method maps the face of one person and superimposes it onto the head of another.
The re-enactment method allows a deep fake creator to use facial tracking to manipulate the facial movements of their desired target. Another method, known as lip-syncing, combines re-enactment with phony audio generation to make it appear as though speakers are saying things they never did.
Finally, motion transfer technology allows the body movements of one person to control those of another.
This technology creates disastrous possibilities, the report said. When using various deep fake methods in conjunction, one can make it appear as though critical political figures are performing offensive or criminal acts or announcing forthcoming military action in hostile countries.
If deployed in a high-pressure situation where the prompt authentication of such media is not possible, real-life retaliation could occur.
The technology has already caused harm outside of the political arena.
The vast majority of deep fake technology is used on internet forums like Reddit to superimpose the faces of non-consenting people, such as celebrities, onto the bodies of men and women in pornographic videos, the report said.
Visual deep fakes are not perfect, and those available to the layman are often recognizable. But the technology has developed rapidly since 2017, and so have the programs that work to make deep fakes undetectable.
In a generative adversarial network, one AI model generates fake media while a second attempts to detect it, with the two checking and refining each other over hundreds or thousands of rounds until the resulting audio and visual media are unrecognizable as fake to the detector network, let alone to the human eye. "GAN models are now widely accessible," the report said, "and many are available for free online."
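The adversarial back-and-forth described above can be illustrated with a minimal sketch: a toy generator learns to mimic a one-dimensional "real" data distribution while a toy discriminator learns to tell real samples from fakes, each improving in response to the other. This is a simplified, hypothetical illustration in plain NumPy for intuition only; it is not the report's code or any production deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution centered at 4.0.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: an affine map of noise, fake sample = a*z + b.
a, b = 1.0, 0.0
# Discriminator: a logistic classifier, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, n = 0.01, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of the cross-entropy loss with respect to (w, c).
    w -= lr * (np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator update: adjust (a, b) so the fakes fool the discriminator.
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_x = (d_fake - 1) * w  # gradient of generator loss through D
    a -= lr * np.mean(g_x * z)
    b -= lr * np.mean(g_x)

# After training, the generator's output distribution drifts toward the real one.
z = rng.normal(0.0, 1.0, 500)
print("mean of generated samples:", float(np.mean(a * z + b)))
```

In a real deepfake pipeline, the generator and discriminator are deep neural networks and the data are images or audio rather than scalars, but the competitive loop is the same, which is why detectors and generators improve in lockstep.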
Such forged videos are already widespread and may already have had an impact on public trust in elected officials and others, although such a phenomenon is difficult to quantify.
The report also detailed multiple instances in which a purposely altered video circulated online and potentially misinformed viewers, including a cheap fake video that was slowed and pitch-corrected to make House Speaker Nancy Pelosi appear inebriated.
Another video mentioned in the report, generated by AI think tank Future Advocacy during the 2019 UK general election, used voice generation and lip-syncing to make it appear as though now-Prime Minister Boris Johnson and then-opponent Jeremy Corbyn were endorsing each other for the office.
Such videos can have a devastating effect on public trust, wrote Mansted and Smith. Not only is the production of such videos more accessible than ever, but deep fake creators can also use bots to swarm public internet forums and comment sections with commentary that, lacking a visual element, can be almost impossible to recognize as artificial.
The accelerated production of such materials can make it feel as though the future of media is one where almost no video can be trusted to be authentic, and the report admitted that "On balance, detectors are losing the 'arms race' with creators of sophisticated deep fakes."
However, Mansted and Smith concluded with several suggestions for combating the rise of ill-intentioned deep fakes.
Firstly, the report proposed that governments should "fund research into the further development and deployment of detection technologies" as well as "require digital platforms to deploy detection tools, especially to identify and label content generated through deep fake processes."
Secondly, the report suggested that media and individuals should stop accepting audio-visual media at face value, adding that "Public awareness campaigns... will be needed to encourage users to critically engage with online content."
Such a change of perception will be difficult, however, as the spread of this imagery is largely based on emotion and not critical thinking.
Lastly, the report suggested the implementation of authentication standards such as encryption and blockchain technology.
"An alternative to detecting all false content is to signal the authenticity of all legitimate content," Mansted and Smith wrote. "Over time, it's likely that certification systems for digital content will become more sophisticated, in part mitigating the risk of weaponised deep fakes."