
Artificial Intelligence

World Broadband Speeds Released, Broadband Competition Questions, Social Media Algorithms


Taiwan has the world’s fastest broadband, while the United States comes in at 15th place, according to the annual Worldwide Broadband Speed League report released by Cable.co.uk on Tuesday.

The report analyzes more than 276 million broadband speed tests collected by M-Lab, an open source project involving civil society organizations, educational institutions, and private sector companies.

While the average global speed has increased, the gap between the fastest and slowest countries has widened as well, causing concern among researchers.

“Faster countries are the ones lifting the average, pulling away at speed and leaving the slowest to stagnate,” said Dan Howdle, consumer telecoms analyst at Cable.co.uk. “Last year, we measured the slowest five countries at 88 times slower than the five fastest. This year they are 125 times slower.”
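The gap Howdle describes is a simple ratio of group averages. A minimal sketch of that calculation, using hypothetical speeds rather than the report's actual data:

```python
# Hypothetical average download speeds in Mbps -- illustrative only,
# not the figures from the Cable.co.uk report.
fastest_five = [85.0, 78.0, 75.0, 72.0, 70.0]
slowest_five = [0.60, 0.58, 0.55, 0.52, 0.50]

def gap_ratio(fast, slow):
    """Ratio of the fast group's mean speed to the slow group's mean speed."""
    return (sum(fast) / len(fast)) / (sum(slow) / len(slow))

print(round(gap_ratio(fastest_five, slowest_five)))  # prints 138 with these hypothetical numbers
```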

According to U.K. telecoms watchdog Ofcom, the minimum speed required to meet the typical needs of a family or small business is 10Mbps. This goal is still unmet by 141 countries.

FCC clashes with San Francisco over city ordinance

The Federal Communications Commission is attempting to overrule a San Francisco ordinance that prevents owners of residential and commercial buildings from blocking tenants from accessing certain internet service providers.

In a blog post on June 18, FCC Chairman Ajit Pai called the ordinance an “outlier” and “a policy which deters broadband deployment.”

Rep. Katie Porter, D-Calif., introduced a budget amendment barring the FCC from finalizing the ruling. The amendment passed in the House but has yet to pass the Senate.

“The communications industry is in dire need of more competition,” Porter said in a statement. “San Francisco’s Article 52 has been incredibly effective in promoting broadband competition — giving residents the benefit of competition and choice in the market, increasing their service quality while decreasing their monthly bills.”

Sen. Johnson requests more information on social media algorithms

On Monday, Sen. Ron Johnson, R-Wisconsin, wrote a letter to Facebook and Instagram executives requesting information on their use of algorithms and artificial intelligence by July 10.

“The lack of transparency regarding human bias and the use of algorithms and artificial intelligence is troubling,” Johnson wrote. “As a result, policymakers and the American public deserve to understand the facts behind the content and suggestions they are served on these internet platforms.”

The letter follows a Senate Committee on Commerce, Science, and Transportation hearing about the use of persuasive technology on internet platforms, at which Johnson raised concerns about the presence of political bias in the algorithms.

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.


As ChatGPT’s Popularity Skyrockets, Some Experts Call for AI Regulation

As generative AI models grow more sophisticated, they present increasing risks.


Photo by Tada Images/Adobe Stock used with permission

WASHINGTON, February 3, 2023 — Just two months after its viral launch, ChatGPT reached 100 million monthly users in January, reportedly making it the fastest-growing consumer application in history — and raising concerns, both internal and external, about the lack of regulation for generative artificial intelligence.

Many of the potential problems with generative AI models stem from the datasets used to train them. The models will reflect whatever biases, inaccuracies and otherwise harmful content was present in their training data, but too much dataset filtering can detract from performance.

OpenAI has grappled with these concerns for years while developing powerful, publicly available tools such as DALL·E — an AI system that generates realistic images and original art from text descriptions, said Anna Makanju, OpenAI’s head of public policy, at a Federal Communications Bar Association event on Friday.

“We knew right off the bat that nonconsensual sexual imagery was going to be a problem, so we thought, ‘Why don’t we just try to go through the dataset and remove any sexual imagery so people can’t generate it,’” Makanju said. “And when we did that, the model could no longer generate women, because it turns out most of the visual images that are available to train a dataset on women are sexual in nature.”

Despite rigorous testing before ChatGPT’s release, early users quickly discovered ways to evade some of the guardrails intended to prevent harmful uses.

The model would not generate offensive content in response to direct requests, but one user found a loophole by asking it to write from the perspective of someone holding racist views — resulting in several paragraphs of explicitly racist text. When some users asked ChatGPT to write code using race and gender to determine whether someone would be a good scientist, the bot replied with a function that only selected white men. Still others were able to use the tool to generate phishing emails and malicious code.

OpenAI quickly responded with adjustments to the model’s filtering algorithms, as well as increased monitoring.

“So far, the approach we’ve taken is we just try to stay away from areas that can be controversial, and we ask the model not to speak to those areas,” Makanju said.

The company has also attempted to limit certain high-impact uses, such as automated hiring. “We don’t feel like at this point we know enough about how our systems function and biases that may impact employment, or if there’s enough accuracy for there to be an automated decision about hiring without a human in the loop,” Makanju explained.

However, Makanju noted that future generative language models will likely reach a point where users can significantly customize them based on personal worldviews. At that point, strong guardrails will need to be in place to prevent the model from behaving in certain harmful ways — for example, encouraging self-harm or giving incorrect medical advice.

Those guardrails should probably be established by external bodies or government agencies, Makanju said. “We recognize that we — a pretty small company in Silicon Valley — are not the best place to make a decision of how this will be used in every single domain, as hard as we try to think about it.”

Little AI regulation currently exists

So far, the U.S. has very little legislation governing the use of AI, although some states regulate automated hiring tools. On Jan. 26, the National Institute of Standards and Technology released the first version of its voluntary AI risk management framework, developed at the direction of Congress.

This regulatory crawl is being rapidly outpaced by the speed of generative AI research. Google reportedly declared a “code red” in response to ChatGPT’s release, speeding the development of multiple AI tools. Chinese tech company Baidu is planning to launch its own AI chatbot in March.

Not every company will respond to harmful uses as quickly as OpenAI, and some may not even attempt to stop them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI. PAI is a nonprofit coalition that develops tools and recommendations for AI governance.

Various private organizations, including PAI, have laid out their own ethical frameworks and policy recommendations. There is ongoing discussion about the extent to which these organizations, government agencies and tech companies should be determining AI regulation, Leibowicz said.

“What I’m interested in is, who’s involved in that risk calculus?” she asked. “How are we making those decisions? What types of actual affected communities are we talking to in order to make that calculus? Or is it a group of engineers sitting in a room trying to forecast for the whole world?”

Leibowicz advocated for transparency measures such as requiring standardized “nutrition labels” that would disclose the training dataset for any given AI model — a proposal similar to the label mandate announced in November for internet service providers.
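As one illustration of what Leibowicz's "nutrition label" might contain, the sketch below defines a small structured disclosure. The field names are assumptions for the example; no standardized schema for such labels exists.

```python
from dataclasses import dataclass, asdict, field

# Hypothetical fields for an AI model "nutrition label" -- illustrative only.
@dataclass
class ModelLabel:
    model_name: str
    training_datasets: list[str]   # what data the model was trained on
    data_cutoff: str               # most recent training data
    known_limitations: list[str] = field(default_factory=list)

label = ModelLabel(
    model_name="example-chat-model",
    training_datasets=["public web crawl", "licensed news archive"],
    data_cutoff="2021-09",
    known_limitations=["may reproduce biases present in web text"],
)
print(asdict(label))
```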

A regulatory framework should be implemented while these technologies are still being created, rather than in response to a future crisis, Makanju said. “It’s very clear that this technology is going to be incorporated into every industry in some way in the coming years, and I worry a little bit about where we are right now in getting there.”



Automated Content Moderation’s Main Problem is Subjectivity, Not Accuracy, Expert Says

With millions of pieces of content generated daily, platforms are increasingly relying on AI for moderation.


Screenshot of American Enterprise Institute event

WASHINGTON, February 2, 2023 — The vast quantity of online content generated daily will likely drive platforms to increasingly rely on artificial intelligence for content moderation, making it critically important to understand the technology’s limitations, according to an industry expert.

Despite the ongoing culture war over content moderation, the practice is largely driven by financial incentives — so even companies with “a speech-maximizing set of values” will likely find some amount of moderation unavoidable, said Alex Feerst, CEO of Murmuration Labs, at a Jan. 25 American Enterprise Institute event. Murmuration Labs works with tech companies to develop online trust and safety products, policies and operations.

If a piece of online content could potentially lead to hundreds of thousands of dollars in legal fees, a company is “highly incentivized to err on the side of taking things down,” Feerst said. And even beyond legal liability, if the presence of certain content will alienate a substantial number of users and advertisers, companies have financial motivation to remove it.

However, a major challenge for content moderation is the sheer quantity of user-generated online content — which, on the average day, includes 500 million new tweets, 700 million Facebook comments and 720,000 hours of video uploaded to YouTube.

“The fully loaded cost of running a platform includes making millions of speech adjudications per day,” Feerst said.

“If you think about the enormity of that cost, very quickly you get to the point of, ‘Even if we’re doing very skillful outsourcing with great accuracy, we’re going to need automation to make the number of daily adjudications that we seem to need in order to process all of the speech that everybody is putting online and all of the disputes that are arising.’”
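The scale Feerst describes can be gauged with back-of-envelope arithmetic. The volume figures are those cited above; the per-item review time is a hypothetical assumption for the sketch:

```python
# Daily content volumes cited above.
tweets = 500_000_000
fb_comments = 700_000_000

# Hypothetical assumption: a human reviewer averages 10 seconds per item
# over an 8-hour shift.
SECONDS_PER_ITEM = 10
items_per_reviewer_per_day = 8 * 3600 // SECONDS_PER_ITEM  # 2,880 items per shift

text_items = tweets + fb_comments
reviewers_needed = text_items / items_per_reviewer_per_day
print(f"{reviewers_needed:,.0f} full-time reviewers for text alone")
```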

Automated moderation is not just a theoretical future question. In a March 2021 congressional hearing, Meta CEO Mark Zuckerberg testified that “more than 95 percent of the hate speech that we take down is done by an AI and not by a person… And I think it’s 98 or 99 percent of the terrorist content.”

Dealing with subjective content

But although AI can help manage the volume of user-generated content, it can’t solve one of the key problems of moderation: Beyond a limited amount of clearly illegal material, most decisions are subjective.

Much of the debate surrounding automated content moderation mistakenly presents subjectivity problems as accuracy problems, Feerst said.

For example, much of what is generally considered “hate speech” is not technically illegal, but many platforms’ terms of service prohibit such content. With these extrajudicial rules, there is often room for broad disagreement over whether any particular piece of content is a violation.

“AI cannot solve that human subjective disagreement problem,” Feerst said. “All it can do is more efficiently multiply this problem.”

This multiplication becomes problematic when AI models are replicating and amplifying human biases, which was the basis for the Federal Trade Commission’s June 2022 report warning Congress to avoid overreliance on AI.

“Nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement announcing the report. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands.”

The FTC’s report pointed to multiple studies revealing bias in automated hate speech detection models, often as a result of being trained on unrepresentative and discriminatory data sets.

As moderation processes become increasingly automated, Feerst predicted that the “trend of those problems being amplified and becoming less possible to discern seems very likely.”

Given those dangers, Feerst emphasized the urgency of understanding and then working to resolve AI’s limitations, noting that the demand for content moderation will not go away. To some extent, speech disputes are “just humans being human… you’re never going to get it down to zero,” he said.



AI Should Complement and Not Replace Humans, Says Stanford Expert

AI that strictly imitates human behavior can make workers superfluous and concentrate power in the hands of employers.


Photo of Erik Brynjolfsson, director of the Stanford Digital Economy Lab, in January 2017 by Sandra Blaser used with permission

WASHINGTON, November 4, 2022 – Artificial intelligence should be developed primarily to augment the performance of, not replace, humans, said Erik Brynjolfsson, director of the Stanford Digital Economy Lab, at a Wednesday web event hosted by the Brookings Institution.

AI that complements human efforts can increase wages by driving up worker productivity, Brynjolfsson argued. AI that strictly imitates human behavior, he said, can make workers superfluous – thereby lowering the demand for workers and concentrating economic and political power in the hands of employers – in this case the owners of the AI.

“Complementarity (AI) implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making,” he wrote in an essay earlier this year.

What’s more, designing AI to mimic existing human behaviors limits innovation, Brynjolfsson argued Wednesday.

“If you are simply taking what’s already being done and using a machine to replace what the human’s doing, that puts an upper bound on how good you can get,” he said. “The bigger value comes from creating an entirely new thing that never existed before.”

Brynjolfsson argued that AI should be crafted to reflect desired societal outcomes. “The tools we have now are more powerful than any we had before, which almost by definition means we have more power to change the world, to shape the world in different ways,” he said.

The AI Bill of Rights

In October, the White House released a blueprint for an “AI Bill of Rights.” The document condemned algorithmic discrimination on the basis of race, sex, religion, or age and emphasized the importance of user privacy. It also endorsed system transparency with users and suggested the use of human alternatives to AI when feasible.

To fully align with the blueprint’s standards, Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, argued at a recent Brookings event that the nation must develop a larger AI workforce.

