Artificial Intelligence

Is or Isn’t Google Politically Neutral? Senators From the Left and the Right Ponder the Question

WASHINGTON, July 22, 2019 — With great power comes great responsibility. And now Google, which insists that it is not slanting search results based upon political leanings, is under attack from both the left and the right.

At a Senate Judiciary Subcommittee hearing last Tuesday — titled “Google and Censorship through Search Engines” — Sen. Ted Cruz, R-Texas, took the opportunity to repeat his oft-made claims about Google’s allegedly anti-conservative bias.

Cruz, chairman of the Subcommittee on the Constitution, highlighted allegations from a letter he sent Monday to the Federal Trade Commission: Google and other major tech platforms unfairly enforce their moderation policies to silence conservative voices.

This supposed censorship is reason for Congress to rethink the legal protections of digital platforms, said Cruz, claiming that Section 230 of the Communications Decency Act was a trade that offered legal immunity in exchange for political neutrality.

If big tech cannot provide “clear, compelling data and evidence” of their neutrality, “there’s no reason on earth why Congress should give them a special subsidy through Section 230,” he said.

In fact, Section 230 includes no requirement of political or any other neutrality. Online platforms are legally permitted to moderate content at their discretion while remaining shielded from liability.

Google’s mission is to be politically neutral, said a company official

Providing a platform for a broad range of information is core to not only Google’s mission but also to its business model, said Google witness Karan Bhatia, a company vice president. Bhatia argued that it simply wouldn’t make business sense for Google to moderate based on political affiliation.

Besides alienating users, it would erode their trust.

“Google is not politically biased—indeed, we go to extraordinary lengths to build our products and enforce our policies in an analytically objective, apolitical way,” Bhatia said. “Our platforms reflect the online world that exists.”

“Claims of anti-conservative bias in the tech industry are baseless,” agreed Ranking Member Mazie Hirono, D-Hawaii. “Study after study has debunked suggestions of political bias on the part of Facebook, Google, and Twitter.”

She cited a number of studies that, she said, proved her point:

  • In June, The Economist released the findings of a year-long analysis of search results in Google’s News tab that found no evidence that Google biases results against conservatives.
  • A 37-week study into alleged conservative censorship on Facebook completed by Media Matters in April showed that left-leaning pages were actually outperformed by right-leaning pages in terms of overall user interaction.
  • In March, data analysts at Twitter performed a five-week analysis of all tweets sent by members of Congress and found no statistically significant difference between the number of times a tweet by a Democratic member was viewed as compared to a tweet by a Republican member.

Different ways of understanding ‘algorithmic bias’

Additionally, perception of algorithmic bias may stem from the complex nature of the algorithms in question, said Francesca Tripodi, a sociology professor at James Madison University. Simple shifts in the phrasing of a Google search can dramatically change the results. For example, whether a user searches for “NFL ratings up” or “NFL ratings down,” they will find content to support their query.

“What we get from Google depends primarily on what we search, and depending on what we search, conservatism thrives online,” Tripodi said.

A simple search for a person or organization will usually return straightforward data about that person or organization. The first three Google search results for “PragerU,” a conservative organization that publishes educational content, are the main PragerU website, Twitter account, and YouTube channel.

Results become more complicated when websites and publications use search engine optimization tools to game the rankings. A search for “AOC,” referring to liberal congresswoman Alexandria Ocasio-Cortez, returns news results primarily from conservative publications, owing to marketing strategies such as Fox News using “AOC” as a search tag 6.7 times more often than MSNBC, Tripodi said.

Likewise, the top YouTube results for terms like “social justice” or “gender identity” are from conservative sources. If left on autoplay, the algorithm will not steer viewers to more liberal sources but rather play a steady stream of conservative views.

Some senators were simply not persuaded by these explanations about tagging and volume of content. Sen. Marsha Blackburn, R-Tenn., for example, suggested that a truly neutral algorithm would simply promote all news results equally “whether the article be from the Huffington Post or Breitbart.”

Factors that get considered — and screened out — by search engines

But the reality is more complicated.

Google’s search engine analyzes more than 200 factors to decide which results to display and in what order. Among these are the number of links that come to a site, how fast the pages download, how recent the content is, how well the pages are linked internally, and so on.
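The weighting of such signals is proprietary, but the general idea of combining many factors into a single ranking can be sketched. The toy Python model below is illustrative only: the signal names, weights, and pages are invented for this example and are not Google’s actual factors.

```python
# Illustrative toy model only. Google's real system weighs 200+ proprietary
# signals; the four below are invented stand-ins for the kinds of factors
# described above (inbound links, page speed, freshness, internal linking).

def ranking_score(page):
    """Combine a few hypothetical relevance signals into one score."""
    weights = {
        "inbound_links": 0.4,   # number of links pointing to the site
        "page_speed": 0.2,      # how fast the pages download
        "freshness": 0.2,       # how recent the content is
        "internal_links": 0.2,  # how well the pages are linked internally
    }
    return sum(weights[signal] * page.get(signal, 0.0) for signal in weights)

# Pages are ordered by score, highest first. Note that nothing in the
# scoring function looks at political ideology.
pages = [
    {"name": "a", "inbound_links": 0.9, "page_speed": 0.5,
     "freshness": 0.2, "internal_links": 0.7},
    {"name": "b", "inbound_links": 0.3, "page_speed": 0.9,
     "freshness": 0.9, "internal_links": 0.4},
]
ranked = sorted(pages, key=ranking_score, reverse=True)
```

In a model like this, a site publishing misleading content could drop in rank through an added quality signal, without any ideological factor ever entering the score.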

Political ideology is not a factor, say Google officials. But publishing material that Google deems to be a conspiracy theory — or simply misleading and factually incorrect information — can lower a website’s Google rankings.

Cruz pointed to the fact that some of PragerU’s videos are unavailable in YouTube’s restricted mode as proof that the platform discriminates against conservative media.

Both Cruz and PragerU co-founder Dennis Prager highlighted one video in particular that has been restricted, entitled “The Ten Commandments: What You Should Know.” This restriction is “so absurd as to be hilarious,” Prager said, adding that the “only possible explanation” was that Google disliked PragerU for being an influential conservative publication.

Another possible explanation is that the video contains depictions of violence and Nazi imagery, which fall under the category of “potentially objectionable content” that YouTube’s restricted mode is designed to screen.

(Screenshots from PragerU’s video.)

Restricted videos are only filtered out for the 1.5 percent of YouTube users that choose to watch in restricted mode, said Bhatia, emphasizing that every single PragerU video is available to the 98.5 percent of viewers who use the default settings.

“Those who want to profit from YouTube must adhere to their terms of service,” said Tripodi.

Moreover, only 23 percent of PragerU’s videos are restricted, said Hirono. By comparison, restrictions apply to 28 percent of the Huffington Post’s videos, 30 percent of the History Channel’s videos, 45 percent of the Daily Show’s videos, and 61 percent of the videos of The Young Turks, a progressive group.

Senators call on Google to fix the ‘real problems’ with the platform

“Brow-beating the tech industry for a problem that does not exist also draws attention away from the real problems with Google and other tech companies,” Hirono said. “As long as we’re busy making Google defend itself from bogus claims of anti-conservative bias, it has no incentive to address these real issues.”

Twitter has avoided using the proactive, algorithmic approach it used to remove ISIS-related content to also rid the platform of white supremacist content because it is afraid that it might also catch content posted by Republican politicians, according to a report by Vice.

Hirono referenced these stories and more, arguing that “fears of being tarred as ‘biased’ have made tech companies hesitant to deal with the real problems of racist and harassing content on their platforms.”

The platform should instead focus on solving the problem of metadata being used to amplify hate speech, pedophilia, conspiracy theories, and disinformation, Tripodi said.

Hirono agreed, citing a recent Wall Street Journal examination that found that videos with potentially lethal content such as anti-vaccination conspiracies or fake claims for cancer cures are often viewed millions of times.

Google should prioritize devoting resources to solving real issues like those uncovered by a June investigation from The New York Times, Hirono continued, which showed that YouTube’s recommendation engine served as a roadmap leading pedophiles to find videos of younger and younger girls.

Bhatia said that the platform is fixing these problems through improving its machine learning tools and that dramatic improvement is occurring as technology progresses. It’s a difficult process because of the enormous volume of content being constantly added to the site.

“You can’t simply unleash the monster and then say it’s too big to control,” said Sen. Richard Blumenthal, D-Conn. “You have a moral responsibility, even if you have that legal protection,” he said, referring to Section 230 immunity.

(Photo of hearing by Emily McPhie.)

Artificial Intelligence

Deepfakes Pose National Security Threat, Private Sector Tackles Issue

Content manipulation can include misinformation from authoritarian governments.

Photo of Dana Rao of Adobe and Paul Lekas of Global Policy (left to right)

WASHINGTON, July 20, 2022 – Content manipulation techniques known as deepfakes are concerning policymakers and forcing the public and private sectors to work together to tackle the problem, panelists said at a Center for Democracy and Technology event on Wednesday.

A deepfake is a technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion that they were actually there. Policymakers are concerned that deepfakes could pose a threat to the country’s national security as the technology becomes increasingly available to the general public.

Deepfake concerns that policymakers have identified, said participants at Wednesday’s event, include misinformation from authoritarian governments, faked compromising and abusive images, and illegal profiting from faked celebrity content.

“We should not and cannot have our guard down in the cyberspace,” said Rep. John Katko, R-N.Y., ranking member of the House Committee on Homeland Security.

Adobe pitches technology to identify deepfakes

Software company Adobe released an open-source toolkit to counter deepfake concerns earlier this month, said Dana Rao, executive vice president of Adobe. The company’s Content Credentials feature is a technology developed over three years that tracks changes made to images, videos, and audio recordings.

Content Credentials is now an opt-in feature in the company’s photo editing software Photoshop that it says will help establish credibility for creators by adding “robust, tamper-evident provenance data about how a piece of content was produced, edited, and published,” read the announcement.
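Adobe’s format is its own, but the core idea of tamper-evident provenance can be sketched in miniature: each edit record includes a hash of the record before it, so any alteration to the history breaks the chain. The record fields below are invented for illustration and are not Content Credentials’ actual schema.

```python
import hashlib
import json

def add_record(history, action, tool):
    """Append an edit record whose hash covers the previous record,
    forming a tamper-evident chain (illustrative; not Adobe's format)."""
    prev_hash = history[-1]["hash"] if history else "0" * 64
    record = {"action": action, "tool": tool, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps({k: record[k] for k in ("action", "tool", "prev")},
                   sort_keys=True).encode()
    ).hexdigest()
    history.append(record)
    return history

def verify(history):
    """Recompute every hash; an edited or reordered record breaks the chain."""
    prev_hash = "0" * 64
    for record in history:
        expected = hashlib.sha256(
            json.dumps({"action": record["action"], "tool": record["tool"],
                        "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True
```

Verification passes only on an untouched history; rewriting even one earlier record changes its hash and invalidates everything after it, which is what makes such provenance data “tamper-evident.”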

Adobe’s Content Authenticity Initiative is dedicated to addressing the problem of establishing trust after the damage caused by deepfakes. “Once we stop believing in true things, I don’t know how we are going to be able to function in society,” said Rao. “We have to believe in something.”

As part of its initiative, Adobe is working with the public sector in supporting the Deepfake Task Force Act, which was introduced in August 2021. If adopted, the bill would establish a national deepfake and digital provenance task force comprising members from the private sector, the public sector, and academia to address disinformation.

For now, said Cailin Crockett, senior advisor to the White House Gender Policy Council, it is important to educate the public on the threat of disinformation.


Artificial Intelligence

Should the Federal Government Regulate Artificial Intelligence?

Two experts were on opposite sides of the debate about how to mitigate the downsides of AI.

Screenshot of the panel at the Bipartisan Policy Center event Tuesday

WASHINGTON, July 12, 2022 – Representatives from academia and a nonprofit diverged at a Bipartisan Policy Center event Tuesday about whether the government should step in and minimize problems associated with artificial intelligence, including bias and discrimination in algorithms.

“We really do want actors to help us establish national and international guidelines,” said Miriam Vogel, president and CEO of EqualAI, a nonprofit that seeks to reduce bias in AI. “We are driving full speed without lanes, without speed limits to manage the expectations.”

While acknowledging the benefits of AI in society today, Vogel said its algorithms carry risks that often lead to bias and discrimination. She cited the example of recognition systems that miss certain voices or skin tones.

AI is used in various sectors and powers algorithms that cater services to individuals. Panelists referenced the use of AI algorithms in suspect identification for criminal justice, in disease diagnosis in health care, and for movie and employment recommendations.

Vogel said regulation will establish clear expectations for AI companies to minimize such risks.

Adam Thierer, a senior research fellow at the Mercatus Center at George Mason University, said he is “a little skeptical that we should create a regulatory AI structure” and instead proposed educating workers on how to set best practices for risk management. He called this an “educational institution approach.”

He said that because of how long federal law takes to enact, he wants to reach AI workers directly, such as the computer programmers and AI innovators “of tomorrow” to do a better job of “baking best practices” into AI.

“I think baking best practice principles in by design begins with an educational focus,” said Thierer.

Thierer said he wants to give this job to trusted third parties to suggest pathways forward, including ethical evaluations and consultations with AI companies. He said that when it comes to AI rules across different sectors, “we don’t need one overarching standard to rule them all.”

Thierer added that because of how fast AI is changing, “it can’t go through the same regulatory process.” He argued if regulation is put in place, we will lose AI innovators.

Vogel disagreed with Thierer, saying she doesn’t believe there is a risk of losing innovators by regulating AI: “I see regulation as the partner to innovation.”

She said that because there is no government regulation for AI, companies are left to do it themselves if they choose, referencing the Badge Program at EqualAI that seeks to help companies navigate risks.

“We need to have a governance system put in place to make sure continual testing is taking place,” said Vogel.


Artificial Intelligence

FTC Commissioner Says Agency Report on AI for Online Harms Did Not Consult Outside Experts

The FTC released a report that warned about the dangers of AI’s use to combat online harms.

Photo of FTC Commissioner Noah Phillips

WASHINGTON, June 22, 2022 – Federal Trade Commissioner Noah Phillips said last week that a report by the commission about the use of artificial intelligence to tackle online harms did not consult outside experts as Congress asked.

The FTC’s “Combatting Online Harms through Innovation” report – approved by a 4-1 vote to be sent to Congress and released on June 16 – warns against using AI as a policy solution for online problems, saying the technology suffers from inherent design flaws, bias and discrimination, and raises commercial surveillance concerns. The commission concluded that adopting AI could introduce additional harms.

Even so, the report recommended that, given Big Tech platforms’ use of AI to address online harms, “lawmakers should consider focusing on developing legal frameworks that would ensure that AI tools do not cause additional harm.”

The one dissenting vote on the report came from Phillips, who said the FTC did not conduct the study Congress required. As part of the 2021 Consolidated Appropriations Act, Congress asked the FTC to study how artificial intelligence could address online harms such as fake reviews, hate crimes, harassment, and child sexual abuse.

“I do not believe we conducted the requisite study, and I do not think the report on AI issued by the Commission takes sufficient care to answer the questions Congress asked,” Phillips said in his dissenting statement.

Phillips said the report mainly focuses on the technology of AI itself and lacks the outside perspective from individuals and companies who use AI and try to combat the harms of AI online, which he said is “precisely what Congress asked us to evaluate.”

Phillips added that in the 12 months the FTC was given to complete the study, “rather than use this time to solicit input from all relevant stakeholders, the Commission chose to conduct a kind of literature review.”

Phillips said in his statement that he would have liked to see interviews of market participants or surveys conducted, neither of which, he said, is included in the report. He added that he is instead concerned about the “quantity of self-reference” used by the FTC in the report.

“Still, we should at least endeavor to produce a report that reflects the full diversity of experiences and viewpoints on these important issues concerning AI,” he said. Phillips also noted that the report does not include a serious cost-benefit analysis of using AI to combat online harms.

