Artificial Intelligence

Australian Group Chronicles the Growing Realism of ‘Deep Fakes,’ and Their Geopolitical Risk

Elijah Labby


Image by Chetraruc used with permission

May 5, 2020 – A new report from the Australian Strategic Policy Institute’s International Cyber Policy Centre detailed the state of rapidly developing “deep fake” technology and its potential to produce propaganda and misleading imagery more easily than ever.

The report, by Australian National University’s senior advisor for public policy Katherine Mansted and researcher Hannah Smith, examined the risks posed by artificial intelligence technology that allows users to falsify or misrepresent existing media, as well as to generate entirely new media.

While audio-visual “cheap fakes” (media edited with tools other than AI) are not a recent phenomenon, the rapid rise of artificial-intelligence-powered technology has given nefarious actors several means of producing misleading material at a staggering pace, four of which the ASPI report highlights.

First, the face swapping method maps the face of one person and superimposes it onto the head of another.

The re-enactment method allows a deep fake creator to use facial tracking to manipulate the facial movements of their desired target. Another method, known as lip-syncing, combines re-enactment with synthetic audio generation to make it appear as though speakers are saying things they never said.

Finally, motion transfer technology allows the body movements of one person to control those of another.

An example of face swapping. Source: “Bill Hader impersonates Arnold Schwarzenegger [DeepFake]” Video

This technology creates disastrous possibilities, the report said. Used in combination, these methods can make it appear as though key political figures are committing offensive or criminal acts, or announcing forthcoming military action against hostile countries.

If deployed in a high-pressure situation where the prompt authentication of such media is not possible, real-life retaliation could occur.

The technology has already caused harm outside of the political arena.

The vast majority of deep fake technology is used on internet forums like Reddit to superimpose the faces of non-consenting people, such as celebrities, onto the bodies of men and women in pornographic videos, the report said.

Visual deep fakes are not perfect, and those available to the layman are often recognizable. But the technology has developed rapidly since 2017, and so have the programs designed to make deep fakes undetectable.

In a generative adversarial network, a generator network competes against a detector network to create and catch deep fakes, checking and refining its output hundreds or thousands of times, until the fabricated audio or video is unrecognizable as fake to the network, let alone to the human eye. “GAN models are now widely accessible,” the report said, “and many are available for free online.”
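To make that contest concrete, the following is a minimal, illustrative GAN training loop in PyTorch on toy one-dimensional data. It is a didactic sketch of the generator-versus-detector dynamic the report describes, not code from the report and not production deep fake software.

```python
# Minimal GAN sketch: a generator learns to imitate a target distribution
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data the generator must learn to imitate: a Gaussian around 3.0
    return torch.randn(n, 1) * 0.5 + 3.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator (the "detector") on real vs. generated samples.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to produce samples the discriminator accepts as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the real mean (~3.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

Real deep fake systems apply the same adversarial loop to images, audio, and video rather than toy numbers, which is why detectors trained today can quickly fall behind generators trained tomorrow.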

Video tweeted from a nameless, faceless account that appears to show House Speaker Nancy Pelosi inebriated, but was merely slowed and pitch-corrected.

Such forged videos are already widespread and may already have had an impact on public trust in elected officials and others, although such a phenomenon is difficult to quantify.

The report also detailed multiple instances in which a purposely altered video circulated online and potentially misinformed viewers, including a cheap fake video that was slowed and pitch-corrected to make House Speaker Nancy Pelosi appear inebriated.

Another video mentioned in the report, generated by AI think tank Future Advocacy during the 2019 UK general election, used voice generation and lip-syncing to make it appear as though now-Prime Minister Boris Johnson and then-opponent Jeremy Corbyn were endorsing each other for the office.

Such videos can have a devastating effect on public trust, wrote Mansted and Smith. And beyond the fact that producing such videos is more accessible than ever, deep fake creators can use bots to swarm public internet forums and comment sections with text commentary that, lacking any visual element, can be almost impossible to recognize as artificial.

Apps like Botnet exemplify the problem of deep fake bots. Users make an account, post to it, and are quickly flooded with artificial comments. The same kind of technology is frequently used on online forums, where its output can be impossible to distinguish from legitimate comments.
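As a rough illustration of how cheaply such text can be produced, the sketch below generates synthetic replies to a post with an off-the-shelf open language model (GPT-2, via the Hugging Face transformers library). The model and library are illustrative assumptions, not the technology behind any particular app named above.

```python
# Illustrative sketch: generating machine-written replies to a post.
# Uses the openly available GPT-2 model; output quality is modest, but it
# shows how text-only "bot" commentary can be produced at scale.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

post = "Just watched the mayor's press conference and honestly"
replies = generator(post, max_length=40, num_return_sequences=3, do_sample=True)

for reply in replies:
    print(reply["generated_text"])
```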

The accelerated production of such materials can make it feel as though the future of media is one where almost no video can be trusted to be authentic, and the report admitted that “On balance, detectors are losing the ‘arms race’ with creators of sophisticated deep fakes.”

However, Mansted and Smith concluded with several suggestions for combating the rise of ill-intentioned deep fakes.

Firstly, the report proposed that international governments and online forums should “fund research into the further development and deployment of detection technologies” as well as “require digital platforms to deploy detection tools, especially to identify and label content generated through deep fake processes.”

Secondly, the report suggested that media and individuals should stop accepting audio-visual media at face value, adding that “Public awareness campaigns… will be needed to encourage users to critically engage with online content.”

Such a change of perception will be difficult, however, as the spread of this imagery is driven largely by emotion rather than critical thinking.

Lastly, the report suggested the implementation of authentication standards such as encryption and blockchain technology.

“An alternative to detecting all false content is to signal the authenticity of all legitimate content,” Mansted and Smith wrote. “Over time, it’s likely that certification systems for digital content will become more sophisticated, in part mitigating the risk of weaponised deep fakes.”
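As a rough sketch of what signalling the authenticity of legitimate content can look like in practice, the snippet below has a publisher sign the hash of a media file and lets anyone with the publisher’s public key verify it. The specific scheme is an illustrative assumption, not an implementation prescribed by the report.

```python
# Minimal content-authentication sketch: sign a media file's hash so that
# any later alteration invalidates the signature.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher's key pair: the private key stays secret, the public key is shared.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_media(data: bytes) -> bytes:
    """Sign the SHA-256 digest of the media bytes."""
    return private_key.sign(hashlib.sha256(data).digest())

def verify_media(data: bytes, signature: bytes) -> bool:
    """Return True only if the media matches the publisher's signature."""
    try:
        public_key.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

video = b"...original video bytes..."
signature = sign_media(video)
print(verify_media(video, signature))              # True: content is authentic
print(verify_media(video + b"edit", signature))    # False: content was altered
```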

 

Artificial Intelligence

Staying Ahead On Artificial Intelligence Requires International Cooperation

Benjamin Kahn


Screenshot from the webinar

March 4, 2021—Artificial intelligence is present in most facets of American digital life, but experts are in a constant race to identify and address potential dangers before they impact consumers.

From making a simple search on Google to listening to music on Spotify to streaming Tiger King on Netflix, AI is everywhere. Predictive algorithms learn from a consumer’s viewing habits and attempt to direct them to other content the algorithm thinks they will be interested in.

While this can be extremely convenient for consumers, it also raises many concerns.

Jaisha Wray, associate administrator for international affairs at the National Telecommunications and Information Administration, was a panelist at a conference hosted Tuesday by the Federal Communications Bar Association.

Wray identified three key areas of interest at the forefront of AI policy: content moderation, algorithm transparency, and the establishment of common-ground policies with foreign governments.

In addition to all the aforementioned uses for AI, it has also proven to be an indispensable tool for websites like Facebook, Alphabet’s YouTube, and myriad other social media platforms in auto-moderating their content. While most social media platforms employ humans to review various decisions made by AI (such as Facebook’s Oversight Board), most content is first handled by AI moderators.

According to Tubefilter, in 2019 more than 500 hours of video content were uploaded to YouTube every minute; at that rate, a year’s worth of video is uploaded in less than 20 minutes.
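A quick back-of-the-envelope check of that figure:

```python
# Sanity check of the YouTube upload-rate figure cited above.
hours_in_a_year = 365 * 24          # 8,760 hours of playtime in one year
upload_rate_per_minute = 500        # hours of video uploaded per minute (2019 figure)
print(hours_in_a_year / upload_rate_per_minute)   # ~17.5 minutes to upload a year's worth
```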

Content moderation, algorithm transparency, foreign alignment

At this scale, AI is necessary to police the website, even if it is not a perfect system. “[AI] is like a thread that’s woven into every issue that we work on and every venue,” Wray explained. She described how both governments and private entities have looked to AI to moderate not only somewhat mundane matters such as copyright issues, but also national security issues like violent extremist content.
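As a rough illustration of automated moderation at this scale, the sketch below trains a tiny text classifier to flag comments for human review. The data, labels, and scikit-learn tooling are illustrative assumptions, not a description of any platform’s actual system.

```python
# Illustrative sketch of AI-assisted content moderation: a small classifier
# flags comments for human review instead of a person reading every post.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "great discussion, thanks for sharing",
    "I disagree, but I appreciate the point you made",
    "you are an idiot and should be banned",
    "get out of this country, nobody wants you here",
]
labels = [0, 0, 1, 1]   # 0 = acceptable, 1 = flag for human review

moderator = make_pipeline(TfidfVectorizer(), LogisticRegression())
moderator.fit(train_comments, labels)

new_comments = ["thanks, this was really helpful", "what an idiot"]
print(moderator.predict(new_comments))   # 0/1 flag for each new comment
```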

Her second point pertained to algorithm transparency. She outlined how entities outside of the U.S. have sought to address this concern by providing consumers with the opportunity to have their content reviewed by humans before a final decision is made. Wray pointed to the European General Data Protection Regulation, “which enshrines the principle that every person has the right not to be subject to a decision solely based on automated processing.”

Her final point raised the issue of coordinating these efforts between different international jurisdictions—namely the U.S. and its allies. “We’re really trying to hone in on where our values align and where we can find common ground.” She added that coordination does not end with allies, however, and that it is key that the U.S. also coordinate with authoritarian regimes, allied or otherwise.

She said that the primary task facing the U.S. right now is simply trying to determine which issues are worth prioritizing when it comes to coordinating with foreign governments—whether that is addressing the spread of AI, how to police AI multilaterally, or how to address the use of AI by adversarial authoritarian regimes.

Technology needs to be built with security in mind

One of Wray’s co-panelists, Evelyn Remaley, who is the associate administrator for the NTIA’s Office of Policy Analysis and Development, said all multilateral cybersecurity efforts related to AI must be approached from a position of what she called a “zero-trust model.” She explained that this model operates from the presupposition that technology should not and cannot be trusted.

“We have to build in controls and standards from the bottom-up to make sure that we are building in the security layer by layer,” Remaley said. “It’s really that premise of ensuring that we realize that we’re always going to have vulnerabilities within this technical development space.”

Remaley said that increasing competition and collaboration can only be safely achieved with a zero-trust mindset.


Artificial Intelligence

Connectivity Will Need To Keep Up With The Advent Of New Tech, Says Expert

Samuel Triginelli


Screenshot from the webinar

February 24, 2021 – It used to be that technology had to keep up with the growing ubiquity of broadband. But the pace of technological advancement in the home is starting a conversation about whether connectivity can keep up.

That’s according to Shawn DuBravac, an economist and author of a book about how big data will transform our everyday lives, who argues that the pandemic has illustrated the need for broader connections in the home to meet the needs of future technologies. He was speaking on Tuesday at the conference of NTCA – The Rural Broadband Association.

Emerging consumer technologies, such as Samsung’s robots, which will perform tasks including loading a dishwasher, serving wine, and setting a dinner table, are redefining the conversation about how connectivity at home will manage them, DuBravac argues.

Health companies are also introducing “companion robots” focused on interacting with seniors. With their artificial intelligence and sensors, these robots develop a personality and adapt to the needs of consumers, so that social distancing does not become a disadvantage for care.

As such, the pandemic has grown the telehealth industry. With more people avoiding hospitals, connected watches, belts, and scales that share information with medical professionals are further driving the need for better broadband connectivity.

But it’s not like the industry isn’t paying attention. Mesh network technologies, which use multiple router-like devices to extend coverage inside the home, have emerged as smart-home technologies exposed the need for broader connectivity and better coverage, since Wi-Fi signals degrade as they pass through walls.


Artificial Intelligence

AI the Most Important Change in Health Care Since Introduction of the MRI, Say Experts

Samuel Triginelli


Screenshot from the webinar

February 7, 2021 – Artificial intelligence is the most important technological change in health care since the introduction of the MRI, experts said at a Thursday panel discussion about European tech sponsored by the Information Technology and Innovation Foundation.

AI will not be replacing doctors and nurses, but empowering decision-makers with new resources, according to those participating in the discussion on “How Can Europe Enhance the Benefits of AI-Enabled Health Care?”

For example, pharmaceutical companies are using AI for the speedy development of vaccines, panelists said. Additionally, AI is helping address the uneven ratio of skilled doctors to patients, assist health-care professionals in complex procedures, and deliver personalized health care to patients.

Yet, for AI technologies to reach their potential, European Union actors need to create regulations governing transparency, they said.

How AI works in health care

AI works through big collections of data that are used to train and validate algorithms. These algorithms can help explain certain clinical findings and detect anomalies in patient data sets.
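As a rough sketch of the kind of anomaly detection the panelists describe, the snippet below flags outlying records in synthetic “patient” measurements using scikit-learn’s IsolationForest. The library, features, and data are illustrative assumptions rather than details from the panel.

```python
# Illustrative anomaly detection on synthetic patient measurements:
# IsolationForest flags records that look unlike the bulk of the data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 500 typical records of (heart rate, systolic blood pressure)
normal = rng.normal(loc=[72, 120], scale=[8, 10], size=(500, 2))
# 5 implausible records mixed in at the end
anomalies = np.array([[150, 200], [30, 60], [140, 190], [25, 55], [160, 210]])
data = np.vstack([normal, anomalies])

model = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = model.predict(data)                    # -1 marks suspected anomalies
print(np.where(flags == -1)[0])                # indices of flagged records
```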

But algorithm creation needs to be held to higher standards than it is currently, as systemic errors can easily creep in at a large scale, said Elmar Kotter, chairperson of the eHealth and Informatics Subcommittee of the European Society of Radiology.

AI should have been used more during the early stages of the COVID-19 pandemic, said Maria Manuel Marques, a member of the European Parliament’s Special Committee on Artificial Intelligence in a Digital Age.

AI helps treat more patients at a faster rate, and with consistency and agility, said Chris Walker, chair of the working group on digital health for the European Federation of Pharmaceutical Industries and Associations. It helps provide new insights and improve treatment by allowing early-stage treatment of diseases.

Europe faces great challenges because of people’s misconceptions about what AI can do, panelists said. It is not meant to replace doctors and nurses, but to empower them with decision-making resources.

More trust would come if companies conducted safe experimentation, testing and showing examples of how AI can improve the lives of health care workers and patients, said Marques.

Regulation of data is crucial for hospitals to trust the products. Moreover, patients must retain privacy over their information. Regulation will help them understand what has been done in the manufacture of an AI system, and to what use their data will be put.

Ander Elustondo Jauregui, policy officer for digital health, added that data quality is an important indicator of the maturity of an AI system, which provides assurance for doctors.

