
Is Google Politically Neutral, or Isn’t It? Senators From the Left and the Right Ponder the Question


WASHINGTON, July 22, 2019 — With great power comes great responsibility. And now Google, which insists that it is not slanting search results based upon political leanings, is under attack from both the left and the right.

At a Senate Judiciary Subcommittee hearing last Tuesday — titled “Google and Censorship through Search Engines” — Sen. Ted Cruz, R-Texas, took the opportunity to repeat his oft-made claims about Google’s allegedly anti-conservative bias.

Cruz, chairman of the Subcommittee on the Constitution, highlighted allegations from a letter he sent Monday to the Federal Trade Commission: Google and other major tech platforms unfairly enforce their moderation policies to silence conservative voices.

This supposed censorship is reason for Congress to rethink the legal protections of digital platforms, said Cruz, claiming that Section 230 of the Communications Decency Act was a trade that offered legal immunity in exchange for political neutrality.

If big tech cannot provide “clear, compelling data and evidence” of their neutrality, “there’s no reason on earth why Congress should give them a special subsidy through Section 230,” he said.

In fact, Section 230 includes no requirement of political or any other kind of neutrality. Online platforms are legally permitted to moderate content at their discretion while remaining shielded from liability.

Google’s mission is to be politically neutral, said a company official

Providing a platform for a broad range of information is core to not only Google’s mission but also to its business model, said Google witness Karan Bhatia, a company vice president. Bhatia argued that it simply wouldn’t make business sense for Google to moderate based on political affiliation.

Besides alienating users, such moderation would erode their trust.

“Google is not politically biased—indeed, we go to extraordinary lengths to build our products and enforce our policies in an analytically objective, apolitical way,” Bhatia said. “Our platforms reflect the online world that exists.”

“Claims of anti-conservative bias in the tech industry are baseless,” agreed Ranking Member Mazie Hirono, D-Hawaii. “Study after study has debunked suggestions of political bias on the part of Facebook, Google, and Twitter.”

She cited a number of studies that, she said, proved her point:

  • In June, The Economist released the findings of a year-long analysis of search results in Google’s News tab that found no evidence that Google biases results against conservatives.
  • A 37-week Media Matters study of alleged conservative censorship on Facebook, completed in April, showed that right-leaning pages actually outperformed left-leaning pages in overall user interaction.
  • In March, data analysts at Twitter performed a five-week analysis of all tweets sent by members of Congress and found no statistically significant difference between the number of times a tweet by a Democratic member was viewed as compared to a tweet by a Republican member.

Different ways of understanding ‘algorithmic bias’

Perceptions of algorithmic bias may also stem from the complex nature of the algorithms in question, said Francesca Tripodi, a sociology professor at James Madison University. Simple shifts in the phrasing of a Google search can dramatically change the results: whether a user searches for “NFL ratings up” or “NFL ratings down,” they will find content to support their query.

“What we get from Google depends primarily on what we search, and depending on what we search, conservatism thrives online,” Tripodi said.
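Tripodi’s point can be made concrete with a toy example. The following sketch (in Python, using a hypothetical three-headline corpus and a naive keyword-overlap scorer; it illustrates the dynamic, not Google’s actual code or ranking algorithm) shows how even the simplest ranker hands back whichever document echoes the query’s framing:

```python
# Toy illustration: the searcher's own phrasing decides which document
# ranks first. The corpus and the scorer are invented for this sketch.

TOY_CORPUS = [
    "NFL ratings up as playoff races tighten",   # hypothetical headlines
    "NFL ratings down amid broadcast shakeup",
    "NFL announces new streaming schedule",
]

def overlap_score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (case-insensitive)."""
    doc_words = set(doc.lower().split())
    return sum(word in doc_words for word in query.lower().split())

def toy_search(query: str) -> list[str]:
    """Rank the toy corpus by naive keyword overlap with the query."""
    return sorted(TOY_CORPUS, key=lambda doc: overlap_score(query, doc), reverse=True)

print(toy_search("NFL ratings up")[0])    # the "ratings up" headline ranks first
print(toy_search("NFL ratings down")[0])  # the "ratings down" headline ranks first
```

Either query returns content that confirms its own framing, which is exactly the effect Tripodi described.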

A simple search for a person or organization will usually return straightforward data about that person or organization. The first three Google search results for “PragerU,” a conservative organization that publishes educational content, are the main PragerU website, Twitter account, and YouTube channel.

Results become more complicated when websites and publications use search engine optimization tools to game the rankings. A search for “AOC,” referring to liberal congresswoman Alexandria Ocasio-Cortez, will return news results primarily from conservative publications, owing to marketing strategies: Fox News, for example, uses “AOC” as a search tag 6.7 times more often than MSNBC does, Tripodi said.

Likewise, the top YouTube results for terms like “social justice” or “gender identity” come from conservative sources. With autoplay on, the algorithm will not steer viewers toward more liberal sources but will instead play a steady stream of conservative views.

Some senators were simply not persuaded by these explanations about tagging and volume of content. Sen. Marsha Blackburn, R-Tenn., for example, suggested that a truly neutral algorithm would simply promote all news results equally “whether the article be from the Huffington Post or Breitbart.”

Factors that get considered — and screened out — by search engines

But the reality is more complicated.

Google’s search engine weighs more than 200 factors to decide which results to display and in what order. Among them are the number of inbound links to a site, how quickly its pages load, how recent the content is, how well the pages are linked internally, and so on.
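To make that idea concrete, here is a deliberately simplified sketch of a multi-factor ranking score. The factor names, normalizations, and weights below are invented for illustration; Google does not disclose its actual signals or formula.

```python
# A toy ranking score blending a few of the factor types named above:
# inbound links, page speed, and freshness. All weights are made up.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    inbound_links: int       # links pointing to the page from other sites
    load_time_ms: float      # how quickly the page loads
    days_since_update: int   # how recent the content is

def toy_rank_score(p: Page) -> float:
    """Blend normalized signals into one score; higher ranks first."""
    link_signal = p.inbound_links / (p.inbound_links + 100)   # saturating
    speed_signal = 1000 / (1000 + p.load_time_ms)             # faster is better
    fresh_signal = 1 / (1 + p.days_since_update / 30)         # newer is better
    return 0.5 * link_signal + 0.3 * speed_signal + 0.2 * fresh_signal

pages = [
    Page("example.com/a", inbound_links=500, load_time_ms=300, days_since_update=2),
    Page("example.com/b", inbound_links=50, load_time_ms=1500, days_since_update=400),
]
for page in sorted(pages, key=toy_rank_score, reverse=True):
    print(f"{toy_rank_score(page):.3f}  {page.url}")
```

Note that nothing in such a score takes political ideology as an input; any bias would have to enter through the choice and weighting of the signals themselves.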

Political ideology is not a factor, say Google officials. But publishing material that Google deems a conspiracy theory — or simply misleading and factually incorrect information — could lower a website’s Google rankings.

Cruz pointed to the fact that some of PragerU’s videos are unavailable in YouTube’s restricted mode as proof that the platform discriminates against conservative media.

Both Cruz and PragerU co-founder Dennis Prager highlighted one video in particular that has been restricted, entitled “The Ten Commandments: What You Should Know.” This restriction is “so absurd as to be hilarious,” Prager said, adding that the “only possible explanation” was that Google disliked PragerU for being an influential conservative publication.

Another possible explanation is that the video contains depictions of violence and Nazi imagery, which fall under the category of “potentially objectionable content” that YouTube’s restricted mode is designed to screen.

(Screenshots from PragerU’s video.)

Restricted videos are filtered out only for the 1.5 percent of YouTube users who choose to watch in restricted mode, said Bhatia, emphasizing that every PragerU video is available to the 98.5 percent of viewers who use the default settings.

“Those who want to profit from YouTube must adhere to their terms of service,” said Tripodi.

Moreover, only 23 percent of PragerU’s videos are restricted, said Hirono. By comparison, restrictions apply to 28 percent of the Huffington Post’s videos, 30 percent of the History Channel’s videos, 45 percent of the Daily Show’s videos, and 61 percent of progressive socialist-leaning group The Young Turks’ videos.

Senators call on Google to fix the ‘real problems’ with the platform

“Brow-beating the tech industry for a problem that does not exist also draws attention away from the real problems with Google and other tech companies,” Hirono said. “As long as we’re busy making Google defend itself from bogus claims of anti-conservative bias, it has no incentive to address these real issues.”

According to a report by Vice, Twitter has avoided applying the proactive, algorithmic approach it used to remove ISIS-related content to white supremacist content, fearing that such filters might also catch content posted by Republican politicians.

Hirono referenced these stories and more, arguing that “fears of being tarred as ‘biased’ have made tech companies hesitant to deal with the real problems of racist and harassing content on their platforms.”

The platform should instead focus on solving the problem of metadata being used to amplify hate speech, pedophilia, conspiracy theories, and disinformation, Tripodi said.

Hirono agreed, citing a recent Wall Street Journal examination that found that videos with potentially lethal content such as anti-vaccination conspiracies or fake claims for cancer cures are often viewed millions of times.

Google should prioritize devoting resources to solving real issues like those uncovered by a June investigation from The New York Times, Hirono continued, which showed that YouTube’s recommendation engine served as a roadmap leading pedophiles to find videos of younger and younger girls.

Bhatia said that the platform is addressing these problems by improving its machine learning tools and that dramatic improvement is occurring as the technology progresses. It is a difficult process because of the enormous volume of content constantly being added to the site.

“You can’t simply unleash the monster and then say it’s too big to control,” said Sen. Richard Blumenthal, D-Conn. “You have a moral responsibility, even if you have that legal protection,” he said, referring to Section 230 immunity.

(Photo of hearing by Emily McPhie.)


International Ethical Framework for Autonomous Drones Needed Before Wide-Scale Implementation

Observers say the risks inherent in letting autonomous drones roam require an ethical framework.


(Timothy Clement-Jones was a member of the U.K. Parliament's committee on artificial intelligence.)

July 19, 2021 — Autonomous drones could potentially serve as a replacement for military dogs in future warfare, said GeoTech Center Director David Bray during a panel discussion hosted by the Atlantic Council last month, but ethical concerns have observers clamoring for a framework for their use.

Military dogs, trained to assist soldiers on the battlefield, are currently a great asset to the military. AI-enabled autonomous systems, such as drones, are developing capabilities that would allow them to assist in the same way — for example, inspecting inaccessible areas and detecting fires and leaks early to minimize the chance of on-the-job injuries.

However, concerns have been raised about such systems’ potential to impact human lives, including a recent report of an autonomous drone possibly hunting down humans in asymmetric warfare and anti-terrorist operations.

As artificial intelligence continues to develop at a rapid rate, society must determine what, if any, limitations should be implemented on a global scale. “If nobody starts raising the questions now, then it’s something that will be a missed opportunity,” Bray said.

Sally Grant, vice president at Lucd AI, agreed with Bray’s concerns, pointing out the controversies surrounding the uncharted territory of autonomous drones. Panelists proposed the possibility of an international limitation agreement with regard to AI-enabled autonomous systems that can exercise lethal force.

Timothy Clement-Jones, who was a member of the U.K. Parliament’s committee on artificial intelligence, called for international ethical guidelines, saying, “I want to see a development of an ethical risk-based approach to AI development and application.”

Many panelists emphasized the immense risk involved if this technology falls into the wrong hands, offering examples ranging from terrorist groups to the paparazzi and noting the power such actors could wield with that much access.

Training is vital, Grant said: soldiers need to feel comfortable with this machinery without becoming over-reliant on it. The idea behind deploying AI-enabled autonomous systems on missions, including during natural disasters, is that soldiers can use them as guidance for making the most informed decisions.

“AI needs to be our servant, not our master,” Clement-Jones agreed, emphasizing that soldiers should use it as a tool to help them, not as guidance to follow unquestioningly. He compared AI technology to phone navigation, pointing to the importance of keeping a map in the glove compartment in case the technology fails.

The panelists emphasized the importance of remaining transparent and developing an international agreement with an ethical risk-based approach to AI development and application in these technologies, especially if they might enter the battlefield as a reliable companion someday.



Deepfakes Could Pose a Threat to National Security, but Experts Are Split on How to Handle It

Experts disagree on the right response to video manipulation — is more tech or a societal shift the right solution?


(Rep. Anthony Gonzalez, R-Ohio)

June 3, 2021 — The emerging and growing phenomenon of video manipulation known as deepfakes could pose a threat to the country’s national security, policymakers and technology experts said at an online conference Wednesday, but how best to address the threat divided the panel.

A deepfake is a highly technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion that they were actually there. A well-done deepfake can make a person appear to do things they never did and say things they never said.

“The way the technology has evolved, it is literally impossible for a human to actually detect that something is a deepfake,” said Ashish Jaiman, the director of technology operations at Microsoft, at an online event hosted by the Information Technology and Innovation Foundation.

Experts are wary of the associated implications of this technology being increasingly offered to the general population, but how best to address the brewing dilemma has them split. Some believe better technology aimed at detecting deepfakes is the answer, while others say that a shift in social perspective is necessary. Others argue that such a societal shift would be dangerous, and that the solution actually lies in the hands of journalists.

Deepfakes pose a threat to democracy

Such technology posed no problem when only Hollywood had the means to produce such impressive special effects, says Rep. Anthony Gonzalez, R-Ohio, but it has progressed to the point that almost anybody can get their hands on it. With the spread of disinformation, and the challenges that poses to establishing a well-informed public, he says, deepfakes could be weaponized to spread lies and affect elections.

As yet, however, no evidence exists that deepfakes have been used for this purpose, according to Daniel Kimmage, the acting coordinator for the Global Engagement Center of the Department of State. But he, along with the other panelists, agrees that the technology could be used to influence elections and deepen already growing mistrust of the information media. They believe it is best to act preemptively and solve the problem before it becomes a crisis.

“Once people realize they can’t trust the images and videos they’re seeing, not only will they not believe the lies, they aren’t going to believe the truth,” said Dana Rao, executive vice president of software company Adobe.

New technology as a solution

Jaiman says Microsoft has been developing sophisticated technologies aimed at detecting deepfakes for over two years now. Deborah Johnson, emeritus technology professor at the University of Virginia School of Engineering, refers to this method as an “arms race,” in which we must develop technology that detects deepfakes at a faster rate than the deepfake technology progresses.

But Jaiman was the first to admit that, despite Microsoft’s hard work, detecting deepfakes remains a grueling challenge; it is much harder to detect a deepfake than to create one, he said. He believes that a societal response is necessary and that technology alone will be inherently insufficient to address the problem.

Societal shift as a solution

Jaiman argues that people need to be skeptical consumers of information. Until the technology catches up, so that deepfakes can be detected more easily and misinformation snuffed out, he believes people need to approach online information with the perspective that they could easily be deceived.

But critics believe this approach of encouraging skepticism could be problematic. Gabriela Ivens, the head of open source research at Human Rights Watch, says that “it becomes very problematic if people’s first reactions are not to believe anything.” Ivens’ job revolves around researching and exposing human rights violations, but she says that growing mistrust of media outlets will make it harder for her to gain the necessary public support.

She believes that a “zero-trust society” must be resisted.

Vint Cerf, the vice president and chief internet evangelist at Google, says it is up to journalists to stem the growing spread of distrust. He accused journalists not of deliberately lying but of oftentimes misleading the public. He believes the true risk of deepfakes lies in their ability to corrode America’s trust in truth, and that it is up to journalists to restore that already-eroding trust by being completely transparent and honest in their reporting.



Complexity, Lack of Expertise Could Hamper Economic Benefits of Artificial Intelligence

Artificial intelligence is said to open up a new age of economic development, but its complexity could hamper its rollout.


(Keith Strier of NVIDIA)

May 24, 2021 — One of the great challenges to adopting artificial intelligence is the lack of understanding of it, according to a panel hosted by the Atlantic Council’s new GeoTech Center.

The panel last week discussed the economic benefits of AI and how global policy leaders can leverage it to achieve sustainable economic growth with government buy-in. But getting governments excited and getting them to actually do something about it are two completely different tasks.

That’s because there exists little government understanding of, or planning around, this emerging market, according to Keith Strier, vice president of worldwide AI initiatives at NVIDIA, a tech company that designs graphics processing units.

If the trend continues, the consequences could be felt globally, widening the world’s economic divide and possibly even posing national security threats, he said.

“AI is the new critical infrastructure… It’s about the future of GDP,” said Strier.

Lack of understanding stems from complexity 

The lack of government understanding stems from the complexity of AI research and the lack of consensus among experts, Strier said. He noted that the metrics used to quantify AI performance are “deceptively complex” and technical. Experts struggle even to reach consensus on defining AI, which only adds to its intrinsic complexity.

This divergence in expert opinion makes the research markedly difficult to break down and communicate to policymakers in digestible, useful ways.

“Policy is just not evidence based,” Strier said. “It’s not well informed.”

World economic divide could widen 

Charles Jennings, AI entrepreneur and founder of internet technology company NeuralEye, warned of AI’s potential to widen the economic divide worldwide.

Currently, the world’s 500 fastest computers are divided among just 29 countries, leaving the remaining 170 struggling to produce computing power. As computers become faster, the countries best positioned to reap the economic benefits will do so at a rate that far outpaces less developed countries.

Jennings also believes that there are security issues associated with the lack of AI understanding in government, claiming that the public’s increasing dependence on the technology, coupled with a lack of regulation, could lead to a public safety threat. He is adamant that it is time to bridge the gap between enterprise and policy.

Strier says there are three essential questions governments must answer: How much domestic AI compute capacity do we have? How does this compare to other nations? Do we have enough capacity to support our national AI ambitions?

Answering these questions would help governments address AI in terms of their own national values and interests, and would help create a framework to mitigate the potential negative consequences.

