Artificial Intelligence

Senate Hearing Created a Clash With Google Over the Definition of ‘Persuasive’ Technology


WASHINGTON, June 27, 2019 — A Tuesday Senate Commerce Subcommittee hearing, on “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms,” became an open invitation for senators to attack the business model of the technology industry.

At the hearing, Google confronted bipartisan skepticism about its claimed neutrality, and about its power as a company. (See our story, “Bipartisan Group of Senators Stoke Fears About Google’s Neutrality and Influence in 2020 Election.”)

Other witnesses and senators piled on, particularly when the Google witness claimed that the search engine giant does not use “persuasive” technologies.

Instead, said Maggie Stanphill, Google’s user experience director, Google’s products are built with “privacy, security, and control for the user” in an effort to build a “lifelong relationship.”

“I don’t know what any of that meant,” replied Ranking Member Brian Schatz, D-Hawaii.

Sen. Richard Blumenthal, D-Conn., also found Stanphill’s assertion “difficult to believe.”

Subcommittee Chairman John Thune, R-S.D., took a darker and more conspiratorial tack: “The powerful mechanisms behind these platforms meant to enhance engagement also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people.”

Thune said that “the use of artificial intelligence and algorithms to optimize engagement can have an unintended and possibly even dangerous downside.”

Using the politically loaded term of ‘persuasive’ technology

Part of the disconnect may be the introduction – in the title of the event – of the politically loaded term “persuasive” technology.

Companies such as Google have a significant business incentive to take as narrow a view as possible of that term, suggested Rashida Richardson, director of policy research at the AI Now Institute.

Center for Humane Technology Executive Director Tristan Harris argued that, in fact, “persuasive technology is everywhere.”

Social media platforms are carefully designed to be addictive because the business model is reliant on maintaining user engagement, he said. Twitter’s “pull to refresh” has the same addictive qualities as a slot machine, while Instagram’s infinitely scrolling feed gives users no signal of when to stop.

Polarization and the so-called “callout culture” are a direct result of the focus on keeping users’ attention, because moral outrage and succinct statements—in place of logic-based, nuanced arguments—lead to the highest levels of engagement.

However, there’s no easy way to address these issues because the fundamental problem is the business model itself, said Harris.

The power and reach of artificial intelligence algorithms is far more extensive than many people realize. Harris highlighted research showing that AI can predict an individual’s personality traits based on mouse movements and click patterns alone with 80 percent accuracy.

Platforms are using artificial intelligence and machine learning to build increasingly detailed and accurate models of behavior; for example, YouTube uses this to promote the autoplay content that is most likely to keep users watching.

Not only do the platforms make their media as addictive as possible, they actively make it difficult for users to leave. When Facebook users attempt to delete their accounts, the platform shows them the profiles of five users who will supposedly miss them, carefully selected based on past engagement, said Harris.

All of these tactics create what Harris called an “asymmetry of power,” meaning that users believe that they have control when they actually don’t.

Artificial intelligence is having a significant impact on society as well as on individuals. Many companies have attempted to use algorithms to determine who should be hired, released on bail, given loans, and more, oftentimes leading to highly biased and flawed outcomes. These algorithms are primarily developed and deployed by just a few powerful companies, giving them dangerously immense power to shape society, said Richardson.

Harris agreed, comparing human use of these immensely powerful technologies to “chimpanzees with nukes.”

Senators raise concerns about algorithms’ impact on children

Multiple senators expressed particular concern over the impact of these algorithms on children. Children can inadvertently stumble on extremist material by being drawn to shocking content or using search terms that carry an unknown subtext, said Sen. Tom Udall, D-N.M. This can spiral into radicalization.

Harris cited various examples of this phenomenon, such as a video explaining a diet being followed by portrayals of anorexia, or a video about the moon landing being followed by flat earth conspiracy theories.

Not only is such content found accidentally, YouTube may actually be “systemically” serving it to children, said Sen. Ed Markey, D-Mass., who is planning to introduce the “Kids Internet Design and Safety Act” to stop autoplay and other forms of commercialization that may be targeting children.

Stanphill was adamant in stating that Google had already taken steps to fix the problems under discussion. Her claims were met with skepticism from both senators and other witnesses.

(Photo of Sen. John Thune at the hearing on Tuesday by Emily McPhie.)

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.


Artificial Intelligence

AI Should Complement and Not Replace Humans, Says Stanford Expert

AI that strictly imitates human behavior can make workers superfluous and concentrate power in the hands of employers.


Photo of Erik Brynjolfsson, director of the Stanford Digital Economy Lab, in January 2017 by Sandra Blaser, used with permission

WASHINGTON, November 4, 2022 – Artificial intelligence should be developed primarily to augment the performance of, not replace, humans, said Erik Brynjolfsson, director of the Stanford Digital Economy Lab, at a Wednesday web event hosted by the Brookings Institution.

AI that complements human efforts can increase wages by driving up worker productivity, Brynjolfsson argued. AI that strictly imitates human behavior, he said, can make workers superfluous – thereby lowering the demand for workers and concentrating economic and political power in the hands of employers – in this case the owners of the AI.

“Complementarity (AI) implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making,” he wrote in an essay earlier this year.

What’s more, designing AI to mimic existing human behaviors limits innovation, Brynjolfsson argued Wednesday.

“If you are simply taking what’s already being done and using a machine to replace what the human’s doing, that puts an upper bound on how good you can get,” he said. “The bigger value comes from creating an entirely new thing that never existed before.”

Brynjolfsson argued that AI should be crafted to reflect desired societal outcomes. “The tools we have now are more powerful than any we had before, which almost by definition means we have more power to change the world, to shape the world in different ways,” he said.

The AI Bill of Rights

In October, the White House released a blueprint for an “AI Bill of Rights.” The document condemned algorithmic discrimination on the basis of race, sex, religion, or age and emphasized the importance of user privacy. It also endorsed system transparency with users and suggested the use of human alternatives to AI when feasible.

To fully align with the blueprint’s standards, Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, argued at a recent Brookings event that the nation must develop a larger AI workforce.


Artificial Intelligence

Workforce Training Needed to Address Artificial Intelligence Bias, Researchers Suggest

Building on the Blueprint for an AI Bill of Rights by the White House Office of Science and Technology Policy.


Russell Wald. Credit: Rod Searcey, Stanford Law School

WASHINGTON, October 24, 2022 – To align with the newly released White House guide on artificial intelligence, Stanford University’s policy director said at a Brookings Institution event last week that more social and technical workforce training is needed to address artificial intelligence biases.

Released on October 4, the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights is a framework that guides companies through five principles designed to protect consumer rights from automated harms.

AI algorithms rely on learning users’ behavior and disclosed information to customize services and advertising. Due to the nature of this process, algorithms carry the potential to send targeted information or enforce discriminatory eligibility practices based on race or class status, according to critics.

Risk mitigation, which prevents algorithm-based discrimination in AI technology, is listed as an “expectation of an automated system” under the “safe and effective systems” section of the White House framework.

Experts at the Brookings virtual event believe that workforce development is the starting point for professionals to learn how to identify risk and obtain the capacity to fulfill this need.

“We don’t have the talent available to do this type of investigative work,” Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, said at the event.

“We just don’t have a trained workforce ready, and so what we really need to do is – I think we should invest in the next generation now and start giving people tools and access and the ability to learn how to do this type of work.”

Nicol Turner-Lee, senior fellow at the Brookings Institution, agreed with Wald, recommending sociologists, philosophers and technologists get involved in the process of AI programming to align with algorithmic discrimination protections – another core principle of the framework.

Core principles and protections suggested in this framework would require lawmakers to create new policies or incorporate them into current safety requirements or civil rights laws. Each principle includes three sections: one on the principle itself, one on expectations for automated systems, and one on moving the principle into practice.

In July, Adam Thierer, senior research fellow at the Mercatus Center of George Mason University, stated that he is “a little skeptical that we should create a regulatory AI structure,” and instead proposed educating workers on how to set best practices for risk management, calling it an “educational institution approach.”


Artificial Intelligence

Deepfakes Pose National Security Threat, Private Sector Tackles Issue

Content manipulation can include misinformation from authoritarian governments.


Photo of Dana Rao of Adobe, Paul Lekas of Global Policy (left to right)

WASHINGTON, July 20, 2022 – Content manipulation techniques known as deepfakes are concerning policy makers and forcing the public and private sectors to work together to tackle the problem, a Center for Democracy and Technology event heard on Wednesday.

A deepfake is a technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in such a way that creates the illusion that they were actually there. Policymakers are concerned that deepfakes could pose a threat to the country’s national security as the technology is being increasingly offered to the general population.

Deepfake concerns that policymakers have identified, said participants at Wednesday’s event, include misinformation from authoritarian governments, faked compromising and abusive images, and illegal profiting from faked celebrity content.

“We should not and cannot have our guard down in the cyberspace,” said Representative John Katko, R-N.Y., ranking member of the House Committee on Homeland Security.

Adobe pitches technology to identify deepfakes

Software company Adobe released an open-source toolkit to counter deepfake concerns earlier this month, said Dana Rao, executive vice president of Adobe. The company’s Content Credentials feature, a technology developed over three years, tracks changes made to images, videos, and audio recordings.

Content Credentials is now an opt-in feature in the company’s photo editing software Photoshop that it says will help establish credibility for creators by adding “robust, tamper-evident provenance data about how a piece of content was produced, edited, and published,” read the announcement.

Adobe’s Content Authenticity Initiative is dedicated to addressing the problems of establishing trust after the damage caused by deepfakes. “Once we stop believing in true things, I don’t know how we are going to be able to function in society,” said Rao. “We have to believe in something.”

As part of its initiative, Adobe is working with the public sector in supporting the Deepfake Task Force Act, which was introduced in August of 2021. If adopted, the bill would establish a National Deepfake and Digital Provenance Task Force composed of members from the private sector, public sector, and academia to address disinformation.

For now, said Cailin Crockett, senior advisor to the White House Gender Policy Council, it is important to educate the public on the threat of disinformation.

