Innovation

Governments and Central Banks Continue to Be Necessary with ‘Stable Coins’ and Cryptocurrencies

Photo of Andrew Bailey courtesy the Bank of England

September 8, 2020—Could so-called “stable coins” be the currency of the future?

That’s what Andrew Bailey, governor of the Bank of England, projected in a Thursday webinar on cryptocurrencies like Bitcoin.

Such a shift might speed up payments and lower their cost, especially for cross-border payments, where fees are far higher than for domestic transfers, Bailey argued.

Christopher Brummer, professor at Georgetown University, and Blythe Masters, industry partner at Motive Partners, agreed with Bailey that stable coins would need more regulation before becoming a viable payment method.

Bailey specifically mentioned that stable coins would need to meet domestic standards first, and then meet global standards.

Such standards should provide a level of security comparable to that of central banks, and may even require more regulation than current payment methods such as cash or checks.

“The public is not likely to understand that stable coins provide less robust protection than other methods, therefore there must be greater regulation,” said Bailey.

Not everyone agrees that stable coins would be equipped to replace current payment methods.

Fennie Wang, founder of Dionysus Labs, said that stable coins are primarily used for cryptotrading and investments—not payments.

“The velocity of stable coin is high,” she said. “It changes hands fast.”

Many question what role central banks and governments need to play in a world of cryptocurrency.

“If central banks and government took over a role that the private sector could manage, we might risk strangling other payment methods,” said Eswar Prasad, senior fellow in Global Economy and Development at the Brookings Institution.

Reporter Liana Sowa grew up in Simsbury, Connecticut. She studied editing and publishing as a writing fellow at Brigham Young University, where she mentored upperclassmen on neuroscience research papers. She enjoys reading, journaling, marathon running and stilt walking.

Artificial Intelligence

As ChatGPT’s Popularity Skyrockets, Some Experts Call for AI Regulation

As generative AI models grow more sophisticated, they present increasing risks.

Photo by Tada Images/Adobe Stock used with permission

WASHINGTON, February 3, 2023 — Just two months after its viral launch, ChatGPT reached 100 million monthly users in January, reportedly making it the fastest-growing consumer application in history — and raising concerns, both internal and external, about the lack of regulation for generative artificial intelligence.

Many of the potential problems with generative AI models stem from the datasets used to train them. The models will reflect whatever biases, inaccuracies and otherwise harmful content was present in their training data, but too much dataset filtering can detract from performance.

OpenAI has grappled with these concerns for years while developing powerful, publicly available tools such as DALL·E — an AI system that generates realistic images and original art from text descriptions, said Anna Makanju, OpenAI’s head of public policy, at a Federal Communications Bar Association event on Friday.

“We knew right off the bat that nonconsensual sexual imagery was going to be a problem, so we thought, ‘Why don’t we just try to go through the dataset and remove any sexual imagery so people can’t generate it,’” Makanju said. “And when we did that, the model could no longer generate women, because it turns out most of the visual images that are available to train a dataset on women are sexual in nature.”
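
Makanju’s example illustrates a general tradeoff in dataset curation: a blunt filter that drops everything matching a blocklist can also strip out most of the legitimate examples for some categories. The Python sketch below is a hypothetical illustration of that tradeoff, not OpenAI’s actual pipeline; the blocklist, tags and records are invented for the example.

```python
# Hypothetical illustration of blunt dataset filtering (not OpenAI's pipeline).
# Dropping every record that matches a blocklist also measures how much of
# each category survives, which is where the "model can no longer generate
# women" failure mode shows up.

BLOCKLIST = {"explicit", "nsfw"}  # invented tags for the example

def keep(record: dict) -> bool:
    """Keep a record only if none of its tags are on the blocklist."""
    return not (set(record["tags"]) & BLOCKLIST)

def coverage_after_filtering(dataset: list[dict], category: str) -> float:
    """Fraction of records in `category` that survive the filter."""
    in_category = [r for r in dataset if category in r["tags"]]
    if not in_category:
        return 0.0
    return sum(keep(r) for r in in_category) / len(in_category)

# Toy data: if most "woman"-tagged records also carry a blocked tag,
# filtering leaves almost nothing for the model to learn from.
dataset = [
    {"tags": ["woman", "explicit"]},
    {"tags": ["woman", "explicit"]},
    {"tags": ["woman"]},
    {"tags": ["man"]},
]
print(coverage_after_filtering(dataset, "woman"))  # 0.33...
```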

Despite rigorous testing before ChatGPT’s release, early users quickly discovered ways to evade some of the guardrails intended to prevent harmful uses.

The model would not generate offensive content in response to direct requests, but one user found a loophole by asking it to write from the perspective of someone holding racist views — resulting in several paragraphs of explicitly racist text. When some users asked ChatGPT to write code using race and gender to determine whether someone would be a good scientist, the bot replied with a function that only selected white men. Still others were able to use the tool to generate phishing emails and malicious code.

OpenAI quickly responded with adjustments to the model’s filtering algorithms, as well as increased monitoring.

“So far, the approach we’ve taken is we just try to stay away from areas that can be controversial, and we ask the model not to speak to those areas,” Makanju said.
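
A minimal sketch of the topic-avoidance approach Makanju describes might look like the following. The topic list and the generate() stub are invented for illustration; production guardrails rely on trained classifiers and model-level adjustments rather than keyword matching.

```python
# Minimal sketch of a topic-avoidance guardrail of the kind Makanju describes.
# The topic list and the generate() stub are invented for illustration; real
# systems use trained classifiers and model training, not keyword matching.

AVOIDED_TOPICS = {"self-harm", "medical dosage advice"}  # invented examples

REFUSAL = "I can't help with that topic."

def generate(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return f"(model output for: {prompt})"

def guarded_generate(prompt: str) -> str:
    lowered = prompt.lower()
    if any(topic in lowered for topic in AVOIDED_TOPICS):
        return REFUSAL            # refuse before calling the model
    return generate(prompt)       # otherwise pass the prompt through

print(guarded_generate("write a poem about autumn"))
print(guarded_generate("give me medical dosage advice"))
```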

The company has also attempted to limit certain high-impact uses, such as automated hiring. “We don’t feel like at this point we know enough about how our systems function and biases that may impact employment, or if there’s enough accuracy for there to be an automated decision about hiring without a human in the loop,” Makanju explained.

However, Makanju noted that future generative language models will likely reach a point where users can significantly customize them based on personal worldviews. At that point, strong guardrails will need to be in place to prevent the model from behaving in certain harmful ways — for example, encouraging self-harm or giving incorrect medical advice.

Those guardrails should probably be established by external bodies or government agencies, Makanju said. “We recognize that we — a pretty small company in Silicon Valley — are not the best place to make a decision of how this will be used in every single domain, as hard as we try to think about it.”

Little AI regulation currently exists

So far, the U.S. has very little legislation governing the use of AI, although some states regulate automated hiring tools. On Jan. 26, the National Institute of Standards and Technology released the first version of its voluntary AI risk management framework, developed at the direction of Congress.

This regulatory crawl is being rapidly outpaced by the speed of generative AI research. Google reportedly declared a “code red” in response to ChatGPT’s release, speeding the development of multiple AI tools. Chinese tech company Baidu is planning to launch its own AI chatbot in March.

Not every company will respond to harmful uses as quickly as OpenAI, and some may not even attempt to stop them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI. PAI is a nonprofit coalition that develops tools and recommendations for AI governance.

Various private organizations, including PAI, have laid out their own ethical frameworks and policy recommendations. There is ongoing discussion about the extent to which these organizations, government agencies and tech companies should be determining AI regulation, Leibowicz said.

“What I’m interested in is, who’s involved in that risk calculus?” she asked. “How are we making those decisions? What types of actual affected communities are we talking to in order to make that calculus? Or is it a group of engineers sitting in a room trying to forecast for the whole world?”

Leibowicz advocated for transparency measures such as requiring standardized “nutrition labels” that would disclose the training dataset for any given AI model — a proposal similar to the label mandate announced in November for internet service providers.
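
Leibowicz did not propose a specific format, but a dataset “nutrition label” could be as simple as a structured disclosure published alongside a model. The sketch below shows one hypothetical shape for such a label; the field names are assumptions, since no standard format exists yet.

```python
# Hypothetical shape for a dataset "nutrition label" of the kind Leibowicz
# describes. Field names are assumptions; no standard format exists yet.

from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetLabel:
    model_name: str
    data_sources: list[str]        # where the training data came from
    collection_period: str         # when it was collected
    languages: list[str]
    known_gaps: list[str]          # populations or topics underrepresented
    filtering_applied: list[str]   # e.g. deduplication, toxicity filtering

label = DatasetLabel(
    model_name="example-model-v1",
    data_sources=["web crawl", "licensed books"],
    collection_period="2019-2022",
    languages=["en"],
    known_gaps=["low-resource languages"],
    filtering_applied=["deduplication", "adult-content filter"],
)

print(json.dumps(asdict(label), indent=2))
```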

A regulatory framework should be implemented while these technologies are still being created, rather than in response to a future crisis, Makanju said. “It’s very clear that this technology is going to be incorporated into every industry in some way in the coming years, and I worry a little bit about where we are right now in getting there.”

Artificial Intelligence

Automated Content Moderation’s Main Problem is Subjectivity, Not Accuracy, Expert Says

With millions of pieces of content generated daily, platforms are increasingly relying on AI for moderation.

Screenshot of American Enterprise Institute event

WASHINGTON, February 2, 2023 — The vast quantity of online content generated daily will likely drive platforms to increasingly rely on artificial intelligence for content moderation, making it critically important to understand the technology’s limitations, according to an industry expert.

Despite the ongoing culture war over content moderation, the practice is largely driven by financial incentives — so even companies with “a speech-maximizing set of values” will likely find some amount of moderation unavoidable, said Alex Feerst, CEO of Murmuration Labs, at a Jan. 25 American Enterprise Institute event. Murmuration Labs works with tech companies to develop online trust and safety products, policies and operations.

If a piece of online content could potentially lead to hundreds of thousands of dollars in legal fees, a company is “highly incentivized to err on the side of taking things down,” Feerst said. And even beyond legal liability, if the presence of certain content will alienate a substantial number of users and advertisers, companies have financial motivation to remove it.

However, a major challenge for content moderation is the sheer quantity of user-generated online content — which, on the average day, includes 500 million new tweets, 700 million Facebook comments and 720,000 hours of video uploaded to YouTube.

“The fully loaded cost of running a platform includes making millions of speech adjudications per day,” Feerst said.

“If you think about the enormity of that cost, very quickly you get to the point of, ‘Even if we’re doing very skillful outsourcing with great accuracy, we’re going to need automation to make the number of daily adjudications that we seem to need in order to process all of the speech that everybody is putting online and all of the disputes that are arising.’”
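
At that scale, platforms generally pair automated scoring with human review: a classifier scores each item, clear-cut cases are actioned automatically, and uncertain ones are routed to moderators. The sketch below outlines that generic triage loop; it is not any particular platform’s system, and the classifier and thresholds are placeholders.

```python
# Generic sketch of automated moderation triage (not any platform's system).
# A classifier scores each item; confident scores are actioned automatically,
# uncertain ones are queued for human review. Thresholds are placeholders.

REMOVE_THRESHOLD = 0.95   # score above which content is removed automatically
REVIEW_THRESHOLD = 0.60   # score above which a human takes a look

def classify(text: str) -> float:
    """Stand-in for a trained policy-violation classifier (returns 0..1)."""
    return 0.0  # placeholder

def triage(text: str) -> str:
    score = classify(text)
    if score >= REMOVE_THRESHOLD:
        return "remove"
    if score >= REVIEW_THRESHOLD:
        return "human_review"
    return "keep"

queue = ["example post 1", "example post 2"]
decisions = {post: triage(post) for post in queue}
print(decisions)
```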

Automated moderation is not just a theoretical future question. In a March 2021 congressional hearing, Meta CEO Mark Zuckerberg testified that “more than 95 percent of the hate speech that we take down is done by an AI and not by a person… And I think it’s 98 or 99 percent of the terrorist content.”

Dealing with subjective content

But although AI can help manage the volume of user-generated content, it can’t solve one of the key problems of moderation: Beyond a limited amount of clearly illegal material, most decisions are subjective.

Much of the debate surrounding automated content moderation mistakenly presents subjectivity problems as accuracy problems, Feerst said.

For example, much of what is generally considered “hate speech” is not technically illegal, but many platforms’ terms of service prohibit such content. With these extrajudicial rules, there is often room for broad disagreement over whether any particular piece of content is a violation.

“AI cannot solve that human subjective disagreement problem,” Feerst said. “All it can do is more efficiently multiply this problem.”
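
One way to see the distinction Feerst draws is to measure how often trained reviewers disagree with one another on the same content: if agreement is low, a model that matches the majority label is scaling a contested judgment rather than eliminating it. The sketch below computes simple pairwise agreement over invented labels to illustrate the point.

```python
# Illustration of subjectivity as inter-annotator disagreement.
# The labels below are invented; in practice they would come from trained
# reviewers applying the same written policy to the same posts.

from itertools import combinations

annotations = {
    "post_1": ["violation", "violation", "ok"],
    "post_2": ["ok", "violation", "ok"],
    "post_3": ["violation", "violation", "violation"],
}

def pairwise_agreement(labels: list[str]) -> float:
    """Fraction of annotator pairs that gave the same label."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

for post, labels in annotations.items():
    print(post, round(pairwise_agreement(labels), 2))
# Low agreement means the "ground truth" a model is trained to predict is
# itself contested -- automation scales the disagreement, it doesn't resolve it.
```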

This multiplication becomes problematic when AI models are replicating and amplifying human biases, which was the basis for the Federal Trade Commission’s June 2022 report warning Congress to avoid overreliance on AI.

“Nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement announcing the report. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands.”

The FTC’s report pointed to multiple studies revealing bias in automated hate speech detection models, often as a result of being trained on unrepresentative and discriminatory data sets.
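
Studies of this kind typically quantify bias by comparing error rates across groups, for example the false positive rate on benign posts written in different dialects. The sketch below shows that comparison on invented predictions; it is an illustration of the measurement, not a result from any cited study.

```python
# Hypothetical comparison of false positive rates across groups, the kind of
# measurement used in studies of bias in hate speech detectors.
# All data below is invented for illustration.

def false_positive_rate(rows: list[tuple[bool, bool]]) -> float:
    """rows = (model_flagged, actually_violating); FPR over benign rows."""
    benign = [flagged for flagged, violating in rows if not violating]
    return sum(benign) / len(benign) if benign else 0.0

predictions = {
    "dialect_a": [(True, False), (False, False), (False, False), (False, False)],
    "dialect_b": [(True, False), (True, False), (False, False), (False, False)],
}

for group, rows in predictions.items():
    print(group, false_positive_rate(rows))
# A higher FPR for one group means benign posts from that group are removed
# more often -- the amplification of training-data bias described above.
```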

As moderation processes become increasingly automated, Feerst predicted that the “trend of those problems being amplified and becoming less possible to discern seems very likely.”

Given those dangers, Feerst emphasized the urgency of understanding and then working to resolve AI’s limitations, noting that the demand for content moderation will not go away. To some extent, speech disputes are “just humans being human… you’re never going to get it down to zero,” he said.

Crypto

Cryptocurrency Has Promise But ‘Screams for Regulation,’ Says Miami Mayor Francis Suarez

The mayor has been an enthusiastic proponent of MiamiCoin, a privately owned cryptocurrency.

Screenshot of Francis Suarez, mayor of the City of Miami, at the Wilson Center event

WASHINGTON, January 19, 2023 — Embracing emerging technologies such as cryptocurrency will have long-term benefits for the general public, but the industry needs much stronger regulation, City of Miami Mayor Francis Suarez said at an event hosted Tuesday by the Wilson Center.

Suarez, who is president of the U.S. Conference of Mayors, spoke in advance of the mayors’ 91st annual meeting, which runs from Tuesday through Friday.

Suarez has long been an advocate for cryptocurrency adoption; after winning reelection in 2021, he announced that his own salary would be paid in bitcoin. He has also been an enthusiastic proponent of MiamiCoin, a privately owned cryptocurrency meant to benefit the city — even after the currency’s value dropped by more than 95 percent.

However, when discussing the recent collapse of crypto exchange FTX, Suarez acknowledged that the technology “screams for regulation.” U.S. legislation tends to be reactive instead of proactive, but the latter approach might have been able to stop the FTX crash, he added.

“I think there should have been regulation on what some of these custodial entities could do with custody assets,” he said. “They’re like banks — the kind of assets that they had were enormous — and what they were doing when you peel back the layers of the onion is frightening… there’s a reason why some level of regulation exists already in the banking industry.”

Suarez said that the first step for lawmakers taking on cryptocurrency regulation should be to recognize the significance of the technology. Issues such as the national debt ceiling and rate of inflation demonstrate the value of having currency “outside of the mainstream fiat system,” he said.

In addition to cryptocurrency, Suarez expressed his opinion on a variety of other timely technology issues.

“I think AI is going to be our generation’s arms race,” he said, noting the growing potential for cyberwarfare as weapons systems come to rely on encrypted technology.

Suarez also discussed the impacts that an increasingly digital world may have on childhood development. “My daughter once shocked me when she was two years old — she’s four now — by taking a pretend selfie with her pacifier of me,” he said. “And I was like, wow, this is really crazy.”

Despite having initial concerns about technology’s impact on children, Suarez said that watching his own children’s online interactions had assuaged his fears.

“I’m actually going to take it a step further — I’m starting to see socialization opportunities… they’re actually virtually online with a friend, and they’re playing and talking and socializing,” he said.
