Artificial Intelligence
Senate Hearing Created a Clash With Google Over the Definition of ‘Persuasive’ Technology

WASHINGTON, June 27, 2019 — A Tuesday Senate Commerce Subcommittee hearing, on “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms,” became an open invitation for senators to attack the business model of the technology industry.
At the hearing, Google confronted bipartisan skepticism about its claimed neutrality, and about its power as a company. (See our story, “Bipartisan Group of Senators Stoke Fears About Google’s Neutrality and Influence in 2020 Election.”)
Other witnesses and senators piled on, particularly when the Google witness claimed that the search engine giant does not use “persuasive” technologies.
Instead, said Maggie Stanphill, Google’s user experience director, Google’s products are built with “privacy, security, and control for the user” in an effort to build a “lifelong relationship.”
“I don’t know what any of that meant,” replied Ranking Member Brian Schatz, D-Hawaii.
Sen. Richard Blumenthal, D-Conn., also found Stanphill’s assertion “difficult to believe.”
Subcommittee Chairman John Thune, R-S.D., took a darker and more conspiratorial tack: “The powerful mechanisms behind these platforms meant to enhance engagement also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people.”
Thune said that “the use of artificial intelligence and algorithms to optimize engagement can have an unintended and possibly even dangerous downside.”
Using the politically loaded term ‘persuasive’ technology
Part of the disconnect may be the introduction – in the title of the event – of the politically loaded term “persuasive” technology.
Companies such as Google have a significant business incentive to take as narrow a view as possible of that term, suggested Rashida Richardson, director of policy research at the AI Now Institute.
Center for Humane Technology Executive Director Tristan Harris argued that, in fact, “persuasive technology is everywhere.”
Social media platforms are carefully designed to be addictive because their business model relies on maintaining user engagement, he said. Twitter’s “pull to refresh” has the same addictive qualities as a slot machine, while Instagram’s infinitely scrolling feed gives users no signal of when to stop.
Polarization and the so-called “callout culture” are a direct result of the focus on keeping users’ attention, because moral outrage and succinct statements—in place of logic-based, nuanced arguments—lead to the highest levels of engagement.
However, there’s no easy way to address these issues because the fundamental problem is the business model itself, said Harris.
The power and reach of artificial intelligence algorithms are far more extensive than many people realize. Harris highlighted research showing that AI can predict an individual’s personality traits with 80 percent accuracy based on mouse movements and click patterns alone.
Platforms are using artificial intelligence and machine learning to build increasingly detailed and accurate models of behavior; for example, YouTube uses this to promote the autoplay content that is most likely to keep users watching.
Not only do the platforms make their media as addictive as possible, they actively make it difficult for users to leave. When Facebook users attempt to delete their accounts, the platform shows them the profiles of five users who will supposedly miss them, carefully selected based on past engagement, said Harris.
All of these tactics create what Harris called an “asymmetry of power,” meaning that users believe that they have control when they actually don’t.
Artificial intelligence is having a significant impact on society as well as on individuals. Many companies have attempted to use algorithms to determine who should be hired, released on bail, given loans, and more, often leading to highly biased and flawed outcomes. These algorithms are primarily developed and deployed by just a few powerful companies, giving them dangerously immense power to shape society, said Richardson.
Harris agreed, comparing human use of these immensely powerful technologies to “chimpanzees with nukes.”
Senators raise concerns about algorithms’ impact on children
Multiple senators expressed particular concern over the impact of these algorithms on children. Children can inadvertently stumble on extremist material by being drawn to shocking content or by using search terms that carry an unknown subtext, said Sen. Tom Udall, D-N.M. This can spiral into radicalization.
Harris cited various examples of this phenomenon, such as a video explaining a diet being followed by portrayals of anorexia, or a video about the moon landing being followed by flat earth conspiracy theories.
Not only is this content found accidentally, YouTube may actually be “systemically” serving it to children, said Sen. Ed Markey, D-Mass., who is planning to introduce the “Kids Internet Design and Safety Act” to stop autoplay and other forms of commercialization that may be targeting children.
Stanphill was adamant in stating that Google had already taken steps to fix the problems under discussion. Her claims were met with skepticism from both senators and other witnesses.
(Photo of Sen. John Thune at the hearing on Tuesday by Emily McPhie.)
Artificial Intelligence
U.S. Must Take Lead on Global AI Regulations: State Department Official
Call for leadership comes during pivotal time in AI development.

WASHINGTON, May 31, 2023 – A State Department official is calling for a United States-led global coalition to set artificial intelligence regulations.
“This is the exact moment where the U.S. needs to show leadership,” Jennifer Bachus, assistant secretary of state for cyberspace and digital policy, said last week on a panel discussing international principles on responsible AI. “This is a shared problem and we need a shared solution.”
She opposed pitting the U.S. and China against one another in the AI race, saying it would “ultimately always lead to a problem.” Instead, Bachus called for an alliance of the United States, the European Union, and Japan to take the lead in creating a legal framework to govern artificial intelligence.
The introduction of OpenAI’s ChatGPT earlier this year sent tech companies rushing to create their own generative AI chatbots. Competition between tech giants has heated up with the recent releases of Google’s Bard and Microsoft’s Bing chatbot. Like ChatGPT, these chatbots are built on large language models and can access data from the internet to answer queries or carry out tasks.
Experts are concerned about the dangers posed by this unprecedented technology. On Tuesday, hundreds of tech experts and industry leaders, including OpenAI CEO Sam Altman, signed a one-sentence statement calling the existential threats presented by AI a “global priority” on par with “pandemics and nuclear conflicts.” In March, Elon Musk joined several AI experts in signing an open letter urging a pause on “giant AI experiments.”
Despite the pressing concerns about generative AI, there is rising criticism that policymakers have been slow to put forth adequate legislation for this nascent technology. Panelists argued this is partly because legislators have difficulty understanding technological innovations. Michelle Giuda, director of the Krach Institute for Tech Diplomacy, argued for a more proactive contribution from the academic community and tech firms.
“There is a risk of relying too much on the government to regulate ahead of where innovation is going and providing the clarity that’s needed,” said Giuda. “We all know that the government isn’t going to stay ahead of the innovation curve, but this is an ongoing dialogue between tech companies, governments and civil society.”
Microsoft’s chief responsible AI officer, Natasha Crampton, agreed that developers and experts in the field must play a central role in crafting and implementing legislation pertaining to artificial intelligence. She added, however, that businesses deploying AI technology should also share part of the responsibility.
“It is our job to make sure that safety and responsibility is baked into these systems from the very beginning,” said Crampton. “Making sure that you are really holding developers to very high standards but also deployers of technology in some aspects as well.”
Earlier in May, Sens. Michael Bennet, D-Colo., and Peter Welch, D-Vt., introduced a bill to establish a government agency to oversee artificial intelligence. The Biden administration also announced $140 million in funding to establish seven new National AI Research Institutes, increasing the nation’s total to 25.
Artificial Intelligence
AI is a Key Component in Effectively Managing the Energy Grid
The ability to balance the grid’s supply and demand in real time will become extremely complex.

WASHINGTON, May 30, 2023 – Artificial intelligence will be required to effectively manage and optimize a more complex energy grid, said experts at a United States Energy Association event Tuesday.
Renewable energy technologies such as solar panels, electric vehicles, and home battery systems add large amounts of energy storage to the grid, said Jeremy Renshaw, senior technical executive at the Electric Power Research Institute. Utility companies must now manage many bidirectional resources that both store and use energy, he said.
“The grid of the future is going to be significantly more complicated,” said Renshaw. Having humans operate the grid will be economically infeasible, he continued, claiming that AI will drastically improve operations.
The ability to balance the grid’s supply and demand in real time will become extremely complex with the adoption of these new technologies, added Marc Spieler, leader for global business development at AI hardware and software supplier Nvidia.
Utility companies will need to redirect electricity in real time to meet incoming demand, he said. AI enables real-time redirection and an understanding of the grid’s capacity at any point, said Spieler.
Moreover, AI can identify what changes need to be made to avoid waste from overgenerating electricity and blackouts from undergenerating, he said. AI can also predict and plan for extreme weather that could damage electrical infrastructure, and identify bottlenecks where infrastructure needs to be upgraded, said Spieler.
Human oversight will still be required to ensure that systems are operated responsibly, said John Savage, professor of computer science at Brown University. Utility companies should avoid allowing AI to make unsupervised decisions, especially in unforeseen scenarios, he said.
The panelists agreed that AI should serve as a decision-support mechanism to help humans make more informed choices. The technology will replace jobs involving mundane and repetitive tasks but will ultimately create more jobs in new positions, said Renshaw.
This comes several weeks after industry experts urged Congress to implement federal AI regulation.
Artificial Intelligence
Experts Debate Artificial Intelligence Licensing Legislation
Licensing requirements would distract from wide-scale testing and limit competition, panelists said at an event.

WASHINGTON, May 23, 2023 – Experts on artificial intelligence disagree on whether licensing is the proper regulatory approach for the technology.
If adopted, licensing rules would require companies to obtain a federal license before developing AI technology. Last week, OpenAI CEO Sam Altman testified that Congress should consider licensing and testing requirements for AI models above a certain threshold of capability.
At a Public Knowledge event Monday, Aalok Mehta, head of U.S. public policy at OpenAI, said licensing is a means of ensuring that AI developers put safety practices in place. By establishing licensing rules, we are developing external validation tools that will improve the consumer experience, he said.
Generative AI — the technology behind chatbots including OpenAI’s widely popular ChatGPT and Google’s Bard — is AI designed to produce content rather than simply process information, which experts say could have widespread effects on copyright disputes and disinformation. Many industry experts have called for more federal AI regulation, warning that widespread AI applications could lead to broad societal risks including an uptick in online disinformation, technological displacement, algorithmic discrimination, and other harms.
Some industry leaders, however, are concerned that calls for licensing are a way of shutting the door to competition and new startups by large companies like OpenAI and Google.
B Cavello, director of emerging technologies at the Aspen Institute, said Monday that licensing requirements place burdens on competition, particularly small start-ups.
Implementing licensing requirements can place a threshold that defines a set of players allowed to play in the AI space and a set that are not, said B. Licensing can make it more difficult for smaller players to gain traction in the competitive space, B said.
The resources required to support these systems already create a barrier that can be tough to break through, B continued. While there should be mandates for greater testing and transparency, such mandates can also present unique challenges that should be avoided, B said.
Austin Carson, founder and president of SeedAI, said a licensing model would not get to the heart of the issue, which is to make sure AI developers are consciously testing and measuring their own models.
The most important thing is to support the development of an ecosystem that revolves around assurance and testing, said Carson. Although no mechanisms currently exist for wide-scale testing, it will be critical to the support of this technology, he said.
Base-level testing at this scale will require that all parties participate, Carson emphasized. We need all parties to feel a sense of accountability for the systems they host, he said.
Christina Montgomery, AI ethics board chair at IBM, urged Congress in her testimony last week to adopt a “precision regulation” approach that would govern AI in specific use cases rather than regulate the technology itself.