
Artificial Intelligence

Australian Group Chronicles the Growing Realism of ‘Deep Fakes,’ and Their Geopolitical Risk


Image by Chetraruc used with permission

May 5, 2020 – A new report from the Australian Strategic Policy Institute and the International Cyber Policy Centre detailed the state of rapidly developing “deep fake” technology and its potential to produce propaganda and misleading imagery more easily than ever.

The report, written by Katherine Mansted, senior advisor for public policy at the Australian National University, and researcher Hannah Smith, examined the costs of artificial intelligence technology that allows users to falsify or misrepresent existing media, as well as to generate entirely new media.

While audio-visual “cheap fakes” (media edited with tools other than AI) are not a recent phenomenon, the rapid rise of artificial-intelligence-powered technology has produced several means by which nefarious actors can generate misleading material at a staggering pace, four of which the ASPI report highlighted.

First, the face swapping method maps the face of one person and superimposes it onto the head of another.

The re-enactment method allows a deep fake creator to use facial tracking to manipulate the facial movements of their desired target. Another method, known as lip-syncing, combines re-enactment with phony audio generation to make it appear as though speakers are saying things they never did.

Finally, motion transfer technology allows the body movements of one person to control those of another.

An example of face swapping. Source: “Bill Hader impersonates Arnold Schwarzenegger [DeepFake]” Video

This technology creates disastrous possibilities, the report said. By using various deep fake methods in conjunction, a creator can make it appear as though critical political figures are performing offensive or criminal acts or announcing forthcoming military action in hostile countries.

If deployed in a high-pressure situation where the prompt authentication of such media is not possible, real-life retaliation could occur.

The technology has already caused harm outside of the political arena.

The vast majority of deep fake technology is used on internet forums like Reddit to superimpose the faces of non-consenting people, such as celebrities, onto the bodies of men and women in pornographic videos, the report said.

Visual deep fakes are not perfect, and those available to the layman are often recognizable. But the technology has developed rapidly since 2017, and so have the programs designed to make deep fakes undetectable.

In a generative adversarial network, one AI model generates deep fakes while a competing model tries to detect them, the two checking and refining each other hundreds or thousands of times until the resulting audio and visual media are unrecognizable as fakes to the detector network, let alone to the human eye. “GAN models are now widely accessible,” the report said, “and many are available for free online.”
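The adversarial loop at the heart of a GAN can be sketched in a few lines of code. The example below is purely illustrative and uses PyTorch (the report does not reference any particular framework), with two toy fully connected networks learning a simple one-dimensional distribution rather than faces or voices:

```python
# Minimal sketch of a GAN training loop: a generator learns to fool a
# discriminator that is simultaneously learning to spot its output.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # samples from the "real" distribution
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: learn to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the discriminator call fakes "real".
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Scaled up to images or audio, with far larger networks and datasets, the same generator-versus-detector dynamic is what pushes deep fake quality upward.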

Video tweeted from a nameless, faceless account that appears to show House Speaker Nancy Pelosi inebriated, but was merely slowed and pitch-corrected.

Such forged videos are already widespread and may already have had an impact on public trust in elected officials and others, although such a phenomenon is difficult to quantify.

The report also detailed multiple instances in which a purposely altered video circulated online and potentially misinformed viewers, including a cheap fake video that was slowed and pitch-corrected to make House Speaker Nancy Pelosi appear inebriated.
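The manipulation behind such a clip requires no AI at all. As a rough illustration, the sketch below (which assumes the librosa and soundfile Python packages and a hypothetical audio file; the 75 percent factor is an example, not a figure from the report) slows a recording and then shifts its pitch back up so the slower pace sounds less obviously altered:

```python
# Illustrative "cheap fake" audio edit: slow the clip, then pitch-correct it.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None)  # hypothetical input file

# Naive slowdown: keep the samples but declare a ~75% sample rate, so playback
# becomes both slower and lower-pitched.
slow_sr = int(sr * 0.75)

# Pitch correction: raise the pitch by ~5 semitones (12 * log2(sr / slow_sr))
# so the voice returns to roughly its original register while the sluggish
# pacing remains.
n_steps = 12 * np.log2(sr / slow_sr)
corrected = librosa.effects.pitch_shift(y, sr=slow_sr, n_steps=n_steps)

sf.write("cheap_fake.wav", corrected, slow_sr)
```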

Another video mentioned in the report, generated by AI think tank Future Advocacy during the 2019 UK general election, used voice generation and lip-syncing to make it appear as though now-Prime Minister Boris Johnson and then-opponent Jeremy Corbyn were endorsing each other for the office.

Such videos can have a devastating effect on public trust, wrote Mansted and Smith. Not only is the production of such videos more accessible than ever, but deep fake creators can also use bots to swarm public internet forums and comment sections with commentary that, because it lacks a visual element, can be almost impossible to recognize as artificial.

Apps like Botnet exemplify the problem of deep fake bots. Users create an account, post to it, and are quickly flooded with artificial comments. The same technology is frequently used on online forums, where its output can be impossible to distinguish from legitimate comments.

The accelerated production of such materials can make it feel as though the future of media is one where almost no video can be trusted to be authentic, and the report admitted that “On balance, detectors are losing the ‘arms race’ with creators of sophisticated deep fakes.”

However, Mansted and Smith concluded with several suggestions for combating the rise of ill-intentioned deep fakes.

Firstly, the report proposed that governments and online platforms should “fund research into the further development and deployment of detection technologies” as well as “require digital platforms to deploy detection tools, especially to identify and label content generated through deep fake processes.”

Secondly, the report suggested that media and individuals should stop accepting audio-visual media at face value, adding that “Public awareness campaigns… will be needed to encourage users to critically engage with online content.”

Such a change in perception will be difficult, however, as the spread of this imagery is driven largely by emotion rather than critical thinking.

Lastly, the report suggested the implementation of authentication standards such as encryption and blockchain technology.

“An alternative to detecting all false content is to signal the authenticity of all legitimate content,” Mansted and Smith wrote. “Over time, it’s likely that certification systems for digital content will become more sophisticated, in part mitigating the risk of weaponised deep fakes.”
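One simple form of that certification idea is a digital signature: a publisher hashes a media file and signs the digest, and anyone holding the publisher’s public key can later confirm the file has not been altered. The sketch below is a hypothetical illustration using Python’s hashlib and the third-party cryptography package; it is not a description of any specific system named in the report:

```python
# Minimal sketch of content certification via digital signatures.
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: hash the media file and sign the digest.
video_bytes = b"...raw bytes of the original video file..."   # placeholder content
digest = sha256(video_bytes).digest()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(digest)
public_key = private_key.public_key()   # distributed alongside the content

# Viewer side: recompute the hash and verify the signature.
try:
    public_key.verify(signature, sha256(video_bytes).digest())
    print("content matches the publisher's signature")
except InvalidSignature:
    print("content has been altered or is not from the claimed publisher")
```

Real provenance schemes layer key distribution, timestamping and chain-of-custody records on top of this basic check.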

 

Artificial Intelligence

As ChatGPT’s Popularity Skyrockets, Some Experts Call for AI Regulation

As generative AI models grow more sophisticated, they present increasing risks.


Photo by Tada Images/Adobe Stock used with permission

WASHINGTON, February 3, 2023 — Just two months after its viral launch, ChatGPT reached 100 million monthly users in January, reportedly making it the fastest-growing consumer application in history — and raising concerns, both internal and external, about the lack of regulation for generative artificial intelligence.

Many of the potential problems with generative AI models stem from the datasets used to train them. The models will reflect whatever biases, inaccuracies and otherwise harmful content was present in their training data, but too much dataset filtering can detract from performance.

OpenAI has grappled with these concerns for years while developing powerful, publicly available tools such as DALL·E — an AI system that generates realistic images and original art from text descriptions, said Anna Makanju, OpenAI’s head of public policy, at a Federal Communications Bar Association event on Friday.

“We knew right off the bat that nonconsensual sexual imagery was going to be a problem, so we thought, ‘Why don’t we just try to go through the dataset and remove any sexual imagery so people can’t generate it,’” Makanju said. “And when we did that, the model could no longer generate women, because it turns out most of the visual images that are available to train a dataset on women are sexual in nature.”
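Naive keyword filtering shows how that kind of collateral damage happens: items that merely mention a blocked term get discarded along with the genuinely unwanted ones. A minimal sketch, with an invented blocklist and invented captions:

```python
# Illustrative over-filtering: a blunt blocklist removes benign data too.
BLOCKLIST = {"nude", "explicit"}

captions = [
    "explicit content, do not train on this",
    "portrait of a woman scientist in her lab",
    "nude-tone lipstick product photo",          # benign, but caught by the filter
    "crowd at a street festival",
]

kept = [c for c in captions if not any(term in c.lower() for term in BLOCKLIST)]
print(kept)  # the benign lipstick caption is gone along with the unwanted one
```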

Despite rigorous testing before ChatGPT’s release, early users quickly discovered ways to evade some of the guardrails intended to prevent harmful uses.

The model would not generate offensive content in response to direct requests, but one user found a loophole by asking it to write from the perspective of someone holding racist views — resulting in several paragraphs of explicitly racist text. When some users asked ChatGPT to write code using race and gender to determine whether someone would be a good scientist, the bot replied with a function that only selected white men. Still others were able to use the tool to generate phishing emails and malicious code.

OpenAI quickly responded with adjustments to the model’s filtering algorithms, as well as increased monitoring.

“So far, the approach we’ve taken is we just try to stay away from areas that can be controversial, and we ask the model not to speak to those areas,” Makanju said.

The company has also attempted to limit certain high-impact uses, such as automated hiring. “We don’t feel like at this point we know enough about how our systems function and biases that may impact employment, or if there’s enough accuracy for there to be an automated decision about hiring without a human in the loop,” Makanju explained.

However, Makanju noted that future generative language models will likely reach a point where users can significantly customize them based on personal worldviews. At that point, strong guardrails will need to be in place to prevent the model from behaving in certain harmful ways — for example, encouraging self-harm or giving incorrect medical advice.

Those guardrails should probably be established by external bodies or government agencies, Makanju said. “We recognize that we — a pretty small company in Silicon Valley — are not the best place to make a decision of how this will be used in every single domain, as hard as we try to think about it.”

Little AI regulation currently exists

So far, the U.S. has very little legislation governing the use of AI, although some states regulate automated hiring tools. On Jan. 26, the National Institute of Standards and Technology released the first version of its voluntary AI risk management framework, developed at the direction of Congress.

This regulatory crawl is being rapidly outpaced by the speed of generative AI research. Google reportedly declared a “code red” in response to ChatGPT’s release, speeding the development of multiple AI tools. Chinese tech company Baidu is planning to launch its own AI chatbot in March.

Not every company will respond to harmful uses as quickly as OpenAI, and some may not even attempt to stop them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI. PAI is a nonprofit coalition that develops tools and recommendations for AI governance.

Various private organizations, including PAI, have laid out their own ethical frameworks and policy recommendations. There is ongoing discussion about the extent to which these organizations, government agencies and tech companies should be determining AI regulation, Leibowicz said.

“What I’m interested in is, who’s involved in that risk calculus?” she asked. “How are we making those decisions? What types of actual affected communities are we talking to in order to make that calculus? Or is it a group of engineers sitting in a room trying to forecast for the whole world?”

Leibowicz advocated for transparency measures such as requiring standardized “nutrition labels” that would disclose the training dataset for any given AI model — a proposal similar to the label mandate announced in November for internet service providers.
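As a purely hypothetical illustration of what a machine-readable label of that kind might contain (the field names below are invented for this sketch and do not reflect any existing standard):

```python
# Sketch of a hypothetical "nutrition label" for an AI model.
import json

model_label = {
    "model_name": "example-text-generator",   # hypothetical model
    "developer": "Example Lab",
    "training_data": {
        "sources": ["web crawl (filtered)", "licensed news archive"],
        "cutoff_date": "2022-12-31",
        "known_gaps": ["low-resource languages underrepresented"],
    },
    "intended_uses": ["drafting text", "summarization"],
    "out_of_scope_uses": ["automated hiring decisions", "medical advice"],
    "evaluation": {"toxicity_benchmark": "reported", "bias_audit": "third-party"},
}

print(json.dumps(model_label, indent=2))
```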

A regulatory framework should be implemented while these technologies are still being created, rather than in response to a future crisis, Makanju said. “It’s very clear that this technology is going to be incorporated into every industry in some way in the coming years, and I worry a little bit about where we are right now in getting there.”


Artificial Intelligence

Automated Content Moderation’s Main Problem is Subjectivity, Not Accuracy, Expert Says

With millions of pieces of content generated daily, platforms are increasingly relying on AI for moderation.


Screenshot of American Enterprise Institute event

WASHINGTON, February 2, 2023 — The vast quantity of online content generated daily will likely drive platforms to increasingly rely on artificial intelligence for content moderation, making it critically important to understand the technology’s limitations, according to an industry expert.

Despite the ongoing culture war over content moderation, the practice is largely driven by financial incentives — so even companies with “a speech-maximizing set of values” will likely find some amount of moderation unavoidable, said Alex Feerst, CEO of Murmuration Labs, at a Jan. 25 American Enterprise Institute event. Murmuration Labs works with tech companies to develop online trust and safety products, policies and operations.

If a piece of online content could potentially lead to hundreds of thousands of dollars in legal fees, a company is “highly incentivized to err on the side of taking things down,” Feerst said. And even beyond legal liability, if the presence of certain content will alienate a substantial number of users and advertisers, companies have financial motivation to remove it.

However, a major challenge for content moderation is the sheer quantity of user-generated online content — which, on the average day, includes 500 million new tweets, 700 million Facebook comments and 720,000 hours of video uploaded to YouTube.

“The fully loaded cost of running a platform includes making millions of speech adjudications per day,” Feerst said.

“If you think about the enormity of that cost, very quickly you get to the point of, ‘Even if we’re doing very skillful outsourcing with great accuracy, we’re going to need automation to make the number of daily adjudications that we seem to need in order to process all of the speech that everybody is putting online and all of the disputes that are arising.’”
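A back-of-the-envelope calculation on the volumes cited above shows the scale of that adjudication problem:

```python
# Rough arithmetic on the daily content volumes cited above (illustrative only).
tweets_per_day = 500_000_000
fb_comments_per_day = 700_000_000
youtube_hours_per_day = 720_000

text_items = tweets_per_day + fb_comments_per_day
print(f"{text_items:,} text items/day ≈ {text_items // 86_400:,} per second")
print(f"{youtube_hours_per_day * 60:,} minutes of video uploaded per day")
```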

Automated moderation is not just a theoretical future question. In a March 2021 congressional hearing, Meta CEO Mark Zuckerberg testified that “more than 95 percent of the hate speech that we take down is done by an AI and not by a person… And I think it’s 98 or 99 percent of the terrorist content.”

Dealing with subjective content

But although AI can help manage the volume of user-generated content, it can’t solve one of the key problems of moderation: Beyond a limited amount of clearly illegal material, most decisions are subjective.

Much of the debate surrounding automated content moderation mistakenly presents subjectivity problems as accuracy problems, Feerst said.

For example, much of what is generally considered “hate speech” is not technically illegal, but many platforms’ terms of service prohibit such content. With these extrajudicial rules, there is often room for broad disagreement over whether any particular piece of content is a violation.

“AI cannot solve that human subjective disagreement problem,” Feerst said. “All it can do is more efficiently multiply this problem.”

This multiplication becomes problematic when AI models are replicating and amplifying human biases, which was the basis for the Federal Trade Commission’s June 2022 report warning Congress to avoid overreliance on AI.

“Nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement announcing the report. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands.”

The FTC’s report pointed to multiple studies revealing bias in automated hate speech detection models, often as a result of being trained on unrepresentative and discriminatory data sets.

As moderation processes become increasingly automated, Feerst predicted that the “trend of those problems being amplified and becoming less possible to discern seems very likely.”

Given those dangers, Feerst emphasized the urgency of understanding and then working to resolve AI’s limitations, noting that the demand for content moderation will not go away. To some extent, speech disputes are “just humans being human… you’re never going to get it down to zero,” he said.


Artificial Intelligence

AI Should Complement and Not Replace Humans, Says Stanford Expert

AI that strictly imitates human behavior can make workers superfluous and concentrate power in the hands of employers.


Photo of Erik Brynjolfsson, director of the Stanford Digital Economy Lab, in January 2017 by Sandra Blaser used with permission

WASHINGTON, November 4, 2022 – Artificial intelligence should be developed primarily to augment the performance of, not replace, humans, said Erik Brynjolfsson, director of the Stanford Digital Economy Lab, at a Wednesday web event hosted by the Brookings Institution.

AI that complements human efforts can increase wages by driving up worker productivity, Brynjolfsson argued. AI that strictly imitates human behavior, he said, can make workers superfluous – thereby lowering the demand for workers and concentrating economic and political power in the hands of employers – in this case the owners of the AI.

“Complementarity (AI) implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making,” he wrote in an essay earlier this year.

What’s more, designing AI to mimic existing human behaviors limits innovation, Brynjolfsson argued Wednesday.

“If you are simply taking what’s already being done and using a machine to replace what the human’s doing, that puts an upper bound on how good you can get,” he said. “The bigger value comes from creating an entirely new thing that never existed before.”

Brynjolfsson argued that AI should be crafted to reflect desired societal outcomes. “The tools we have now are more powerful than any we had before, which almost by definition means we have more power to change the world, to shape the world in different ways,” he said.

The AI Bill of Rights

In October, the White House released a blueprint for an “AI Bill of Rights.” The document condemned algorithmic discrimination on the basis of race, sex, religion, or age and emphasized the importance of user privacy. It also endorsed system transparency with users and suggested the use of human alternatives to AI when feasible.

To fully align with the blueprint’s standards, Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, argued at a recent Brookings event that the nation must develop a larger AI workforce.

