May 5, 2020 – A new report from the Australian Strategic Policy Institute’s International Cyber Policy Centre detailed the state of rapidly developing “deep fake” technology and its potential to produce propaganda and misleading imagery more easily than ever.
The report, by Australian National University Senior Advisor for Public Policy Katherine Mansted and researcher Hannah Smith, examined the risks of artificial intelligence technology that allows users to falsify or misrepresent existing media, as well as to generate new media entirely.
While audio-visual “cheap fakes” (media edited with tools other than AI) are not a recent phenomenon, the rapid rise of artificial-intelligence-powered technology has produced several means by which nefarious actors can generate misleading material at a staggering pace, four of which the ASPI report highlighted.
First, the face swapping method maps the face of one person and superimposes it onto the head of another.
The re-enactment method allows a deep fake creator to use facial tracking to manipulate the facial movements of their desired target. Another method, known as lip-syncing, combines re-enactment with phony audio generation to make it appear as though speakers are saying things they never did.
Finally, motion transfer technology allows the body movements of one person to control those of another.
This technology creates disastrous possibilities, the report said. Used in combination, these deep fake methods can make it appear as though critical political figures are committing offensive or criminal acts, or announcing forthcoming military action against hostile countries.
If deployed in a high-pressure situation where the prompt authentication of such media is not possible, real-life retaliation could occur.
The technology has already caused harm outside of the political arena.
The vast majority of deep fake technology is used on internet forums like Reddit to superimpose the faces of non-consenting people, such as celebrities, onto the bodies of men and women in pornographic videos, the report said.
Visual deep fakes are not perfect, and those available to the layman are often recognizable. But the technology has developed rapidly since 2017, and so have the programs that work to make deep fakes undetectable.
In a generative adversarial network, a generator model competes against a detector model, each checking and refining the other hundreds or thousands of times, until the resulting deep fake audio or video is unrecognizable as fake to the network, let alone to the human eye. “GAN models are now widely accessible,” the report said, “and many are available for free online.”
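The adversarial loop can be pictured with a deliberately tiny sketch. The toy below is illustrative only, not drawn from the report: a one-parameter “generator” learns to shift noise toward a “real” data distribution while a logistic “detector” tries to tell real from fake. Every name and number here is an assumption for the demo; real GANs are deep neural networks trained on images or audio.

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0  # the "real" data distribution the generator must imitate

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Detector: logistic model D(x) = sigmoid(w*x + b), outputs P(x is real)
w, b = 0.0, 0.0
# Generator: g(z) = theta + z, a single learnable shift applied to noise
theta = 0.0

d_lr, g_lr = 0.05, 0.02

for step in range(5000):
    real = random.gauss(REAL_MEAN, 0.5)
    fake = theta + random.gauss(0.0, 0.5)

    # Detector update: push D(real) toward 1 and D(fake) toward 0
    p_real, p_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += d_lr * ((1 - p_real) * real - p_fake * fake)
    b += d_lr * ((1 - p_real) - p_fake)

    # Generator update: climb the gradient of log D(fake) to fool the detector
    p_fake = sigmoid(w * fake + b)
    theta += g_lr * (1 - p_fake) * w

print(theta)  # typically close to REAL_MEAN after training
```

The same push-and-pull, scaled up to millions of parameters, is what drives detectable fakes toward undetectability.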
Such forged videos are already widespread and may already have had an impact on public trust in elected officials and others, although such a phenomenon is difficult to quantify.
The report also detailed multiple instances in which a purposely altered video circulated online and potentially misinformed viewers, including a cheap fake video that was slowed and pitch-corrected to make House Speaker Nancy Pelosi appear inebriated.
Another video mentioned in the report, produced by the AI-focused think tank Future Advocacy during the 2019 UK general election, used voice generation and lip-syncing to make it appear as though now-Prime Minister Boris Johnson and then-opponent Jeremy Corbyn were endorsing each other for the office.
Such videos can have a devastating effect on public trust, wrote Mansted and Smith. And in addition to the fact that the production of such videos is more accessible than ever, deep fake creators can use bots to swarm public internet forums and comment sections with commentary that, because of the lack of a visual element, can be almost impossible to recognize as artificial.
The accelerated production of such materials can make it feel as though the future of media is one where almost no video can be trusted to be authentic, and the report admitted that “On balance, detectors are losing the ‘arms race’ with creators of sophisticated deep fakes.”
However, Mansted and Smith concluded with several suggestions for combating the rise of ill-intentioned deep fakes.
Firstly, the report proposed that governments should “fund research into the further development and deployment of detection technologies” as well as “require digital platforms to deploy detection tools, especially to identify and label content generated through deep fake processes.”
Secondly, the report suggested that media and individuals should stop accepting audio-visual media at face value, adding that “Public awareness campaigns… will be needed to encourage users to critically engage with online content.”
Such a change of perception will be difficult, however, as the spread of this imagery is largely based on emotion and not critical thinking.
Lastly, the report suggested the implementation of authentication standards such as encryption and blockchain technology.
“An alternative to detecting all false content is to signal the authenticity of all legitimate content,” Mansted and Smith wrote. “Over time, it’s likely that certification systems for digital content will become more sophisticated, in part mitigating the risk of weaponised deep fakes.”
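The certification idea can be sketched in a few lines. This is a generic illustration of signing legitimate content, not any system the report names: a publisher attaches a cryptographic tag to content it releases, and a verifier holding the key can check that the bytes were never altered. Real deployments would use public-key signatures and certificates so anyone can verify without being able to forge; the standard-library HMAC here is a stand-in.

```python
import hashlib
import hmac

# Hypothetical publisher key for the demo; real systems would use an
# asymmetric key pair rather than a shared secret.
PUBLISHER_KEY = b"demo-signing-key"

def certify(media: bytes) -> str:
    """Issue a tamper-evident authenticity tag for a piece of content."""
    return hmac.new(PUBLISHER_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    """Re-derive the tag and compare in constant time."""
    return hmac.compare_digest(certify(media), tag)

video = b"...raw video bytes..."
tag = certify(video)
print(verify(video, tag))          # True: untouched content checks out
print(verify(video + b"!", tag))   # False: any edit breaks certification
```

Labeling what is genuine in this way sidesteps the arms race of detecting every possible fake.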
Deepfakes Pose National Security Threat, Private Sector Tackles Issue
Content manipulation can include misinformation from authoritarian governments.
WASHINGTON, July 20, 2022 – Content manipulation techniques known as deepfakes are concerning policy makers and forcing the public and private sectors to work together to tackle the problem, a Center for Democracy and Technology event heard on Wednesday.
A deepfake is a method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion that they were actually there. Policymakers are concerned that deepfakes could pose a threat to the country’s national security as the technology becomes increasingly available to the general population.
Deepfake concerns that policymakers have identified, said participants at Wednesday’s event, include misinformation from authoritarian governments, faked compromising and abusive images, and illegal profiting from faked celebrity content.
“We should not and cannot have our guard down in the cyberspace,” said Representative John Katko, R-N.Y., ranking member of the House Committee on Homeland Security.
Adobe pitches technology to identify deepfakes
Software company Adobe released an open-source toolkit to counter deepfake concerns earlier this month, said Dana Rao, executive vice president of Adobe. The company’s Content Credentials feature is a technology developed over three years that tracks changes made to images, videos, and audio recordings.
Content Credentials is now an opt-in feature in the company’s photo editing software Photoshop that it says will help establish credibility for creators by adding “robust, tamper-evident provenance data about how a piece of content was produced, edited, and published,” read the announcement.
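One way to picture “tamper-evident provenance data” is a hash chain of edit records. The sketch below is a generic illustration of that idea, not Adobe’s actual format or API: each record’s hash also covers the previous record, so silently rewriting any step invalidates every record after it.

```python
import hashlib
import json

def record_edit(history, action, content_hash):
    """Append an edit record whose hash also covers the previous record."""
    prev = history[-1]["hash"] if history else "genesis"
    body = {"action": action, "content": content_hash, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    history.append({**body, "hash": digest})

def verify_history(history):
    """Walk the chain; any rewritten or reordered record breaks it."""
    prev = "genesis"
    for entry in history:
        body = {k: entry[k] for k in ("action", "content", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = digest
    return True

history = []
record_edit(history, "captured", "hash-of-original-frames")
record_edit(history, "color-corrected", "hash-after-edit")
print(verify_history(history))    # True: the recorded history is intact

history[0]["action"] = "face-swapped"   # silently rewrite step one
print(verify_history(history))    # False: the chain exposes the tampering
```

A viewer’s tools can thus show not just that content was edited, but refuse to vouch for content whose edit history has been doctored.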
Adobe’s Content Authenticity Initiative is dedicated to addressing the problem of establishing trust after the damage caused by deepfakes. “Once we stop believing in true things, I don’t know how we are going to be able to function in society,” said Rao. “We have to believe in something.”
As part of its initiative, Adobe is working with the public sector in supporting the Deepfake Task Force Act, which was introduced in August 2021. If adopted, the bill would establish a National Deepfake and Digital Provenance Task Force composed of members from the private sector, public sector, and academia to address disinformation.
For now, said Cailin Crockett, senior advisor to the White House Gender Policy Council, it is important to educate the public on the threat of disinformation.
Should the Federal Government Regulate Artificial Intelligence?
Two experts were on opposite sides of the debate about how to mitigate the downsides of AI.
WASHINGTON, July 12, 2022 – Representatives from academia and a nonprofit diverged at a Bipartisan Policy Center event Tuesday about whether the government should step in and minimize problems associated with artificial intelligence, including bias and discrimination in algorithms.
“We really do want actors to help us establish national and international guidelines,” said Miriam Vogel, president and CEO of EqualAI, a nonprofit that seeks to reduce bias in AI. “We are driving full speed without lanes, without speed limits to manage the expectations.”
While acknowledging the benefits of AI in society today, Vogel said its algorithms present risks that often lead to bias and discrimination. She shared the example of how voice and facial recognition systems can miss certain voices or skin tones.
AI is used in various sectors and powers algorithms that cater services to individuals. Panelists referenced the use of AI algorithms in suspect identification for criminal justice, in disease diagnosis in health care, and for movie and employment recommendations.
Vogel said regulation will establish clear expectations for AI companies to minimize such risks.
Adam Thierer, a senior research fellow at the Mercatus Center at George Mason University, said he is “a little skeptical that we should create a regulatory AI structure” and instead proposed educating workers on how to set best practices for risk management. He called this an “educational institution approach.”
He said that because federal legislation takes so long to enact, he wants to reach AI workers directly, such as the computer programmers and AI innovators “of tomorrow,” to do a better job of “baking best practices” into AI.
“I think baking best practice principles in by design begins with an educational focus,” said Thierer.
Thierer said he wants to give this job to trusted third parties to suggest pathways forward, including ethical evaluations and consultations with AI companies. He said that when it comes to AI rules across different sectors, “we don’t need one overarching standard to rule them all.”
Thierer added that because of how fast AI is changing, “it can’t go through the same regulatory process.” He argued if regulation is put in place, we will lose AI innovators.
Vogel disagreed with Thierer, saying she doesn’t believe that regulating AI risks losing innovators: “I see regulation is the partner to innovation.”
She said that because there is no government regulation for AI, companies are left to do it themselves if they choose, referencing the Badge Program at EqualAI that seeks to help companies navigate risks.
“We need to have a governance system put in place to make sure continual testing is taking place,” said Vogel.
FTC Commissioner Says Agency Report on AI for Online Harms Did Not Consult Outside Experts
The FTC released a report that warned about the dangers of AI’s use to combat online harms.
WASHINGTON, June 22, 2022 – Federal Trade Commissioner Noah Phillips said last week that a report by the commission about the use of artificial intelligence to tackle online harms did not consult outside experts as Congress asked.
The FTC’s “Combatting Online Harms through Innovation” report – approved by a 4-1 vote to send to Congress and released on June 16 – warns against using AI as a policy solution for online problems, citing inherent design flaws, bias and discrimination, and commercial surveillance concerns. The commission concluded that adopting AI for this purpose could introduce additional harms.
However, given Big Tech platforms’ use of AI to address online harms, the report recommended that “lawmakers should consider focusing on developing legal frameworks that would ensure that AI tools do not cause additional harm.”
The one dissenting vote on the report came from Phillips, who said the FTC did not conduct the study Congress required. As part of the 2021 Consolidated Appropriations Act, Congress asked the FTC to study how artificial intelligence could address online harms such as fake reviews, hate crimes, harassment, and child sexual abuse.
“I do not believe we conducted the requisite study, and I do not think the report on AI issued by the Commission takes sufficient care to answer the questions Congress asked,” Phillips said in his dissenting statement.
Phillips said the report mainly focuses on the technology of AI itself and lacks the outside perspective from individuals and companies who use AI and try to combat the harms of AI online, which he said is “precisely what Congress asked us to evaluate.”
Phillips added that in the 12 months the FTC was given to complete the study, “rather than use this time to solicit input from all relevant stakeholders, the Commission chose to conduct a kind of literature review.”
Phillips said in his statement that he would have liked to see interviews of market participants or surveys conducted, neither of which is included in the report, and added that he is concerned about the “quantity of self-reference” used by the FTC in the report.
“Still, we should at least endeavor to produce a report that reflects the full diversity of experiences and viewpoints on these important issues concerning AI,” he said. Phillips also noted the report doesn’t include a serious cost-benefit analysis of using AI to combat online harms.