
Artificial Intelligence

Deepfakes Could Pose A Threat to National Security, But Experts Are Split On How To Handle It

Experts disagree on the right response to video manipulation: is better technology or a societal shift the answer?


Rep. Anthony Gonzalez, R-Ohio

June 3, 2021 – The growing phenomenon of video manipulation known as deepfakes could pose a threat to the country’s national security, policymakers and technology experts said at an online conference Wednesday, but the panel was divided on how best to address it.

A deepfake is a highly technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion that they were actually there. A well-done deepfake can make a person appear to do things they never did and say things they never said.

“The way the technology has evolved, it is literally impossible for a human to actually detect that something is a deepfake,” said Ashish Jaiman, the director of technology operations at Microsoft, at an online event hosted by the Information Technology and Innovation Foundation.

Experts are wary of the implications of this technology becoming increasingly available to the general public, but they are split on how best to address the brewing dilemma. Some believe better technology for detecting deepfakes is the answer, while others say a shift in social perspective is necessary. Still others argue that such a societal shift would be dangerous, and that the solution instead lies in the hands of journalists.

Deepfakes pose a threat to democracy

Such technology posed no problem when only Hollywood had the means to produce such impressive special effects, said Rep. Anthony Gonzalez, R-Ohio, but it has progressed to the point that almost anybody can get their hands on it. With the spread of disinformation and the challenge that poses to maintaining a well-informed public, he said, deepfakes could be weaponized to spread lies and affect elections.

So far, however, there is no evidence that deepfakes have been used for this purpose, according to Daniel Kimmage, the acting coordinator for the Global Engagement Center of the Department of State. But he, along with the other panelists, agreed that the technology could be used to influence elections and deepen already growing mistrust of the news media. They believe it is best to act preemptively and solve the problem before it becomes a crisis.

“Once people realize they can’t trust the images and videos they’re seeing, not only will they not believe the lies, they aren’t going to believe the truth,” said Dana Rao, executive vice president of software company Adobe.

New technology as a solution

Jaiman said Microsoft has been developing sophisticated technologies aimed at detecting deepfakes for over two years. Deborah Johnson, emeritus technology professor at the University of Virginia School of Engineering, refers to this method as an “arms race,” in which detection technology must be developed faster than deepfake technology itself progresses.

But Jaiman was the first to admit that, despite Microsoft’s efforts, detecting deepfakes remains a grueling challenge; it is much harder to detect a deepfake than to create one, he said. He believes technology alone will be inherently insufficient and that a societal response is necessary.
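Neither Jaiman nor the panel detailed how such detectors work internally, but a common approach in the research literature is a frame-level binary classifier: a network fine-tuned to label each video frame as real or fake. Below is a minimal sketch of that idea, assuming PyTorch and torchvision; the frames/ data layout and hyperparameters are illustrative, not Microsoft’s actual system.

```python
# Minimal sketch of a frame-level deepfake classifier (illustrative only;
# a generic research-style approach, not any vendor's production detector).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Assumes a hypothetical layout: frames/real/*.jpg and frames/fake/*.jpg
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = ImageFolder("frames", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final
# layer with a two-class head: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the labeled frames
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

A classifier like this must be retrained whenever generation techniques improve, which illustrates the “arms race” Johnson describes and why detection tends to lag creation.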

Societal shift as a solution

Jaiman argues that people need to be skeptical consumers of information. He believes that until the technology catches up, allowing deepfakes to be detected and misinformation snuffed out more easily, people need to approach online information with the awareness that they could easily be deceived.

But critics believe this approach of encouraging skepticism could itself be problematic. Gabriela Ivens, the head of open source research at Human Rights Watch, says that “it becomes very problematic if people’s first reactions are not to believe anything.” Ivens’ job revolves around researching and exposing human rights violations, but she says that growing mistrust of media outlets will make it harder for her to gain the necessary public support.

She believes that a “zero-trust society” must be resisted.

Vint Cerf, the vice president and chief internet evangelist at Google, says it is up to journalists to stem the growing spread of distrust. He accused journalists not of deliberately lying but of often misleading the public. He believes the true risk of deepfakes lies in their ability to corrode America’s trust in truth, and that it is up to journalists to restore that eroding trust by being completely transparent and honest in their reporting.

Reporter Tyler Perkins studied rhetoric and English literature, as well as economics and mathematics, at the University of Utah. Although he grew up in the West (Oregon and Utah) and never left it until recently, he intends to study law and build a career on the East Coast. In his free time, he enjoys reading excellent literature and playing poor golf.

Artificial Intelligence

AI Should Complement and Not Replace Humans, Says Stanford Expert

AI that strictly imitates human behavior can make workers superfluous and concentrate power in the hands of employers.


Photo of Erik Brynjolfsson, director of the Stanford Digital Economy Lab, in January 2017 by Sandra Blaser used with permission

WASHINGTON, November 4, 2022 – Artificial intelligence should be developed primarily to augment the performance of, not replace, humans, said Erik Brynjolfsson, director of the Stanford Digital Economy Lab, at a Wednesday web event hosted by the Brookings Institution.

AI that complements human efforts can increase wages by driving up worker productivity, Brynjolfsson argued. AI that strictly imitates human behavior, he said, can make workers superfluous – thereby lowering the demand for workers and concentrating economic and political power in the hands of employers – in this case the owners of the AI.

“Complementarity (AI) implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making,” he wrote in an essay earlier this year.

What’s more, designing AI to mimic existing human behaviors limits innovation, Brynjolfsson argued Wednesday.

“If you are simply taking what’s already being done and using a machine to replace what the human’s doing, that puts an upper bound on how good you can get,” he said. “The bigger value comes from creating an entirely new thing that never existed before.”

Brynjolfsson argued that AI should be crafted to reflect desired societal outcomes. “The tools we have now are more powerful than any we had before, which almost by definition means we have more power to change the world, to shape the world in different ways,” he said.

The AI Bill of Rights

In October, the White House released a blueprint for an “AI Bill of Rights.” The document condemned algorithmic discrimination on the basis of race, sex, religion, or age and emphasized the importance of user privacy. It also endorsed system transparency with users and suggested the use of human alternatives to AI when feasible.

To fully align with the blueprint’s standards, Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, argued at a recent Brookings event that the nation must develop a larger AI workforce.


Artificial Intelligence

Workforce Training Needed to Address Artificial Intelligence Bias, Researchers Suggest

Building on the Blueprint for an AI Bill of Rights by the White House Office of Science and Technology Policy.


Russell Wald. Credit: Rod Searcey, Stanford Law School

WASHINGTON, October 24, 2022 – To align with the newly released White House guide on artificial intelligence, there needs to be more social and technical workforce training to address artificial intelligence biases, Stanford University’s policy director said at a Brookings Institution event last week.

Released on October 4 by the White House’s Office of Science and Technology Policy, the Blueprint for an AI Bill of Rights is a framework laying out five principles for companies to follow to protect consumer rights from automated harm.

AI algorithms rely on learning a user’s behavior and disclosed information to customize services and advertising. Because of this, critics say, algorithms have the potential to deliver targeted information or enforce discriminatory eligibility practices based on race or class.

Risk mitigation, which prevents algorithm-based discrimination in AI technology, is listed as an “expectation of an automated system” under the “safe and effective systems” section of the White House framework.
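The blueprint describes expectations rather than code, but one concrete form this kind of risk mitigation can take is an outcome audit: comparing an automated system’s decisions across demographic groups. Here is a minimal sketch of such an audit, assuming pandas; the data, column names, and threshold are hypothetical, not drawn from the blueprint.

```python
# Minimal sketch of an algorithmic-discrimination audit (illustrative only;
# the columns and threshold below are hypothetical examples).
import pandas as pd

# Hypothetical decision log: one row per applicant, with the system's
# approve/deny decision and a protected attribute.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group, and the gap between the best- and
# worst-treated groups (a simple demographic-parity check).
rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"approval-rate gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold; real audits set this per context
    print("Warning: outcome disparity exceeds threshold -- review the system.")
```

Running and interpreting audits like this one is exactly the kind of investigative work the panelists argued there are too few trained professionals to do.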

Experts at the Brookings virtual event said workforce development is the starting point for professionals to learn how to identify such risks and build the capacity to meet this need.

“We don’t have the talent available to do this type of investigative work,” Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, said at the event.

“We just don’t have a trained workforce ready, and so what we really need to do is invest in the next generation now and start giving people tools and access and the ability to learn how to do this type of work.”

Nicol Turner-Lee, senior fellow at the Brookings Institution, agreed with Wald, recommending that sociologists, philosophers, and technologists get involved in the process of AI programming to align with algorithmic discrimination protections, another core principle of the framework.

Core principles and protections suggested in the framework would require lawmakers to create new policies or fold them into current safety requirements or civil rights laws. Each principle includes three sections: the principle itself, what should be expected of automated systems, and how the principle can be put into practice by government entities.

In July, Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, said he is “a little skeptical that we should create a regulatory AI structure,” and instead proposed educating workers on how to set best practices for risk management, calling it an “educational institution approach.”


Artificial Intelligence

Deepfakes Pose National Security Threat, Private Sector Tackles Issue

Content manipulation can include misinformation from authoritarian governments.


Photo of Dana Rao of Adobe, Paul Lekas of Global Policy (left to right)

WASHINGTON, July 20, 2022 – Content manipulation techniques known as deepfakes are concerning policymakers and forcing the public and private sectors to work together to tackle the problem, panelists said at a Center for Democracy and Technology event on Wednesday.

A deepfake is a technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion that they were actually there. Policymakers are concerned that deepfakes could pose a threat to the country’s national security as the technology becomes increasingly available to the general population.

Deepfake concerns that policymakers have identified, said participants at Wednesday’s event, include misinformation from authoritarian governments, faked compromising and abusive images, and illegal profiting from faked celebrity content.

“We should not and cannot have our guard down in the cyberspace,” said Rep. John Katko, R-N.Y., ranking member of the House Committee on Homeland Security.

Adobe pitches technology to identify deepfakes

Software company Adobe released an open-source toolkit to counter deepfake concerns earlier this month, said Dana Rao, executive vice president of Adobe. The company’s Content Credentials feature is a technology developed over three years that tracks changes made to images, videos, and audio recordings.

Content Credentials is now an opt-in feature in the company’s photo editing software Photoshop that Adobe says will help establish credibility for creators by adding “robust, tamper-evident provenance data about how a piece of content was produced, edited, and published,” according to the announcement.
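The announcement does not spell out the mechanics, but tamper-evident provenance generally works by binding a cryptographic hash of the content to a signed record of how it was edited, so any later change to either one breaks verification. Below is a minimal sketch of that idea in Python; it uses a shared-secret HMAC as a stand-in for the public-key signatures a production system such as Content Credentials (built on the C2PA specification) would use, and all names in it are illustrative.

```python
# Minimal sketch of tamper-evident provenance metadata (illustrative only;
# real provenance systems use public-key signatures, not a shared secret).
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # hypothetical; real systems use private keys

def attach_provenance(content: bytes, edits: list) -> dict:
    """Bundle a content hash with the edit history, then sign the bundle."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "edits": edits,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute hash and signature; any change to content or record fails."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"]
                == hashlib.sha256(content).hexdigest())

image = b"...raw image bytes..."
cred = attach_provenance(image, ["crop", "color-balance"])
print(verify_provenance(image, cred))              # True
print(verify_provenance(image + b"tamper", cred))  # False: hash mismatch
```

The point of this design is that the metadata does not prevent editing; it simply makes any undisclosed edit detectable, which is what “tamper-evident” means in the announcement.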

Adobe’s Content Authenticity Initiative is dedicated to addressing the problem of establishing trust after the damage caused by deepfakes. “Once we stop believing in true things, I don’t know how we are going to be able to function in society,” said Rao. “We have to believe in something.”

As part of the initiative, Adobe is working with the public sector to support the Deepfake Task Force Act, introduced in August 2021. If adopted, the bill would establish a National Deepfake and Digital Task Force composed of members from the private sector, the public sector, and academia to address disinformation.

For now, said Cailin Crockett, senior advisor to the White House Gender Policy Council, it is important to educate the public on the threat of disinformation.

