Senate Hearing Created a Clash With Google Over the Definition of ‘Persuasive’ Technology

WASHINGTON, June 27, 2019 – A Tuesday Senate Commerce Subcommittee hearing on “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms” became an open invitation for senators to attack the business model of the technology industry.

At the hearing, Google confronted bipartisan skepticism about its claimed neutrality, and about its power as a company. (See our story, “Bipartisan Group of Senators Stoke Fears About Google’s Neutrality and Influence in 2020 Election.”)

Other witnesses and senators piled on, particularly when the Google witness claimed that the search engine giant does not use “persuasive” technologies.

Instead, said Maggie Stanphill, Google’s user experience director, Google’s products are built with “privacy, security, and control for the user” in an effort to build a “lifelong relationship.”

“I don’t know what any of that meant,” replied Ranking Member Brian Schatz, D-Hawaii.

Sen. Richard Blumenthal, D-Conn., also found Stanphill’s assertion “difficult to believe.”

Subcommittee Chairman John Thune, R-S.D., took a darker and more conspiratorial tack: “The powerful mechanisms behind these platforms meant to enhance engagement also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people.”

Thune said that “the use of artificial intelligence and algorithms to optimize engagement can have an unintended and possibly even dangerous downside.”

Using the politically loaded term ‘persuasive’ technology

Part of the disconnect may be the introduction – in the title of the event – of the politically loaded term “persuasive” technology.

Companies such as Google have a significant business incentive to take as narrow a view as possible of that term, suggested Rashida Richardson, director of policy research at the AI Now Institute.

Center for Humane Technology Executive Director Tristan Harris argued that, in fact, “persuasive technology is everywhere.”

Social media platforms are carefully designed to be addictive because the business model relies on maintaining user engagement, he said. Twitter’s “pull to refresh” has the same addictive qualities as a slot machine, while Instagram’s infinitely scrolling feed gives users no signal of when to stop.

Polarization and the so-called “callout culture” are a direct result of the focus on keeping users’ attention, because moral outrage and succinct statements—in place of logic-based, nuanced arguments—lead to the highest levels of engagement.

However, there’s no easy way to address these issues because the fundamental problem is the business model itself, said Harris.

The power and reach of artificial intelligence algorithms are far more extensive than many people realize. Harris highlighted research showing that AI can predict an individual’s personality traits with 80 percent accuracy based on mouse movements and click patterns alone.

Platforms are using artificial intelligence and machine learning to build increasingly detailed and accurate models of behavior; for example, YouTube uses this to promote the autoplay content that is most likely to keep users watching.

Not only do the platforms make their media as addictive as possible, they actively make it difficult for users to leave. When Facebook users attempt to delete their accounts, the platform shows them the profiles of five users who will supposedly miss them, carefully selected based on past engagement, said Harris.

All of these tactics create what Harris called an “asymmetry of power,” meaning that users believe that they have control when they actually don’t.

Artificial intelligence is having a significant impact on society as well as on individuals. Many companies have attempted to use algorithms to determine who should be hired, released on bail, given loans, and more, oftentimes leading to highly biased and flawed outcomes. These algorithms are primarily developed and deployed by just a few powerful companies, giving them dangerously immense power to shape society, said Richardson.

Harris agreed, comparing human use of these immensely powerful technologies to “chimpanzees with nukes.”

Senators raise concerns about algorithms’ impact on children

Multiple senators expressed particular concern over the impact of these algorithms on children. Children can inadvertently stumble on extremist material by being drawn to shocking content or by using search terms that carry an unknown subtext, said Sen. Tom Udall, D-N.M. This can spiral into radicalization.

Harris cited various examples of this phenomenon, such as a video explaining a diet being followed by portrayals of anorexia, or a video about the moon landing being followed by flat earth conspiracy theories.

Not only is this content found accidentally, but YouTube may actually be “systemically” serving it to children, said Sen. Ed Markey, D-Mass., who is planning to introduce the “Kids Internet Design and Safety Act” to stop autoplay and other forms of commercialization that may be targeting children.

Stanphill was adamant in stating that Google had already taken steps to fix the problems under discussion. Her claims were met with skepticism from both senators and other witnesses.

(Photo of Sen. John Thune at the hearing on Tuesday by Emily McPhie.)

Development Associate Emily McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for campus publication Student Life. She is a founding board member of Code Open Sesame, an organization that teaches computer skills to underprivileged children in six cities across Southern California.

CES 2022: Artificial Intelligence Needs to Resonate with People for Widespread Acceptance

Even though stakeholders may want technologies that yield better results, they may be uncomfortable with artificial intelligence.

Pat Baird speaking at CES 2022

LAS VEGAS, January 6, 2022 – To get artificial intelligence into the mainstream, the industry needs to appease not just regulators, but stakeholders as well.

Pat Baird, regulatory head for software standards at electronics maker Philips, said at the Consumer Electronics Show Thursday that for AI technology to be successfully implemented in a field like medicine, everyone touched by it needs to be comfortable with it.

“A lot of people want to know more information, more information, more information before you dare use that [technology] on me or one of the members of my family,” Baird said. “I totally get that, but it is interesting – some of the myths that we see in Hollywood compared to how the technology [actually functions].” He added that to be successful, you have to win the approval of all stakeholders, not just regulators.

“It is a fine line to take and walk,” Baird said. “I think we need to make sure that the lawmakers really understand the benefits and the risks about this – not all AI is the same. Not all applications are the same.”

As with accidents involving autonomous vehicles, even rare accidents involving AI can set the technology back years, Baird said. “One of the things that I worry about is when something bad happens that’s kind of reflected on the entire industry.”

Baird noted that many people come with preconceived biases against AI that leave them skeptical or hesitant to believe that the technology is safe or will work.

But he did not go so far as to say these biases are putting a “thumb on the scale” against AI, “but [that thumb] is floating near the scale right now.”

“That is one of the things that I’m worried about,” he said. “Because this technology can make a difference. I want to help my patients, damn it, and if this can only improve performance by a couple percent, that is important to that family that you just helped with that [technology].”

Joseph Murphy, vice president of marketing at AI company Sensory Inc., said, “Just like everything in life it’s a tricky balance of innovation, and then putting up the speed bumps to innovation. It’s a process that has to happen.”

On Wednesday, Sally Lange Witkowski, founder of business consulting firm Slang Consulting, said that companies should be educating consumers about the benefits of 5G for widespread adoption.

Henry Kissinger: AI Will Prompt Consideration of What It Means to Be Human

Event with the former Secretary of State discusses our current lack of knowledge on how to responsibly harness AI’s power.

Former Secretary of State Henry Kissinger

WASHINGTON, December 24, 2021 – Former Secretary of State Henry Kissinger says that further use of artificial intelligence will call into question what it means to be human, and that the technology cannot solve all those problems humans fail to address on their own.

Kissinger spoke at a Council on Foreign Relations event highlighting his new book “The Age of AI: And Our Human Future” on Monday along with co-author and former Google CEO Eric Schmidt in a conversation moderated by PBS NewsHour anchor Judy Woodruff.

Schmidt remarked throughout the event on unanswered questions about AI despite common use of the technology.

He emphasized that the computer systems may be able to solve complex problems, such as those in physics dealing with dark matter or dark energy, but that the humans who built the technology may not be able to determine how exactly the computer solved them.

Pointing to this potential for dangerous use of the technology, he said that AI development, though sometimes a force for good, “plays” with human lives.

He pointed out that, to deal with this great technological power, almost every country has now created a governmental body to oversee the ethics of AI development.

Schmidt stated that Western values must be the dominant values in AI platforms that influence everyday life, such as those with key implications for democracy.

Amid all the consideration of how to make AI effective but also useful, Kissinger noted how much human thinking must go into managing the “thinking” these machines do, and said that “a mere technological edge is not in itself decisive” when it comes to AI that can compete with adversaries such as China, with its diplomatic and technological might.

Vaccine Makers Promote Use of Artificial Intelligence for Development

Artificial intelligence assists in vaccine research and trial testing, makers say.

Najat Khan, Janssen’s research and development global head of strategy

WASHINGTON, December 15, 2021 – Artificial intelligence is helping accelerate the development of COVID-19 vaccines.

Leaders in Janssen’s and Moderna’s research and development groups said Tuesday that AI will help drug makers create better, more effective vaccines for patients.

Speaking at Bloomberg’s Technology Summit on Tuesday, Najat Khan, Janssen’s research and development global head of strategy, said AI is speeding up the delivery of new vaccines for populations in need. (Janssen is a subsidiary of Johnson & Johnson.)

“We use AI and machine learning to predict performance of clinical sites for potential [vaccine] trial sites,” Khan said. AI can help researchers target patients for trials to obtain more comprehensive data sets. Vaccine developers spend time, money, and resources finding patients to participate in clinical trials.

Khan said “only four percent” of eligible patients join a clinical trial. AI can help researchers focus their efforts to identify patients to participate, she said.

Outstanding concerns with AI

Despite AI’s usefulness in vaccine development, Khan said there is still a gap between the information available in health care and what’s useful for AI. “There’s lots of data generated in health care, but it’s not connected,” Khan stated. “If it’s not connected, it’s fragmented.”

The problem, Khan said, is the varying systems health clinics use to input and store patients’ information. “Different systems across different clinics needs the same data,” Khan added. “I can go to two different clinics, each one year apart, and my data would be separate.”

On a large scale, mismatched datasets lead to “an over-index of patient information in some areas and an under-index in others,” she said.

For better innovation in treating and curing diseases, health providers need better ways to gather and share data while addressing patient privacy concerns, Khan added.

One of health care providers’ challenges is effective data minimization and ensuring that health entities use patient data only in accordance with the patient’s consent. The industry is starting to see progress with tokenization, Khan said, which anonymizes data and links it with other data sources for a specific patient-focused purpose.

“This allows us to do even more with AI,” Khan said.
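For readers unfamiliar with the technique, the sketch below illustrates one common form of tokenization: replacing a direct patient identifier with a salted hash so that de-identified records from separate sources can still be linked for a specific, patient-focused purpose. The field names, sample data, and salt handling are illustrative assumptions for this example, not a description of Janssen’s actual pipeline.

import hashlib

# Illustrative sketch of tokenization for linking de-identified records.
# Field names, sample data, and salt handling are assumptions for this example.

SALT = "project-specific-secret"  # in practice, held by a trusted tokenization service

def tokenize(patient_id: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()

def detach_identifier(record: dict) -> dict:
    """Drop the raw identifier from a record and attach the token instead."""
    tokenized = {k: v for k, v in record.items() if k != "patient_id"}
    tokenized["token"] = tokenize(record["patient_id"])
    return tokenized

# Two fragmented data sources that cannot exchange raw identifiers directly.
clinic_visits = [{"patient_id": "P-1001", "visit_date": "2021-03-02", "site": "A"}]
lab_results = [{"patient_id": "P-1001", "assay": "antibody", "value": 42}]

visits_tok = [detach_identifier(r) for r in clinic_visits]
labs_tok = [detach_identifier(r) for r in lab_results]

# Records for the same patient can now be linked on the shared token,
# without either source ever seeing the other's raw patient IDs.
linked = [(v, l) for v in visits_tok for l in labs_tok if v["token"] == l["token"]]
print(linked)

In a real deployment, the salt and token mapping would typically be held by a trusted tokenization service rather than hard-coded, so that no single data holder can reverse the tokens and re-identify patients.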
