FCC RDOF Penalties, KOSA Reintroduced, Lawmakers Explore AI Regulation

RDOF defaults prevented an estimated 293,128 locations in 31 states from getting new investments, the FCC said.

Photo of Sen. Marsha Blackburn in 2021 by Gage Skidmore used with permission

May 2, 2023 — The Federal Communications Commission on Monday proposed more than $8 million in fines against 22 applicants for the Rural Digital Opportunity Fund Phase I auction, alleging that they violated FCC requirements by defaulting on their bids.

The defaults prevented an estimated 293,128 locations in 31 states from receiving new investments in broadband infrastructure, according to a press release from the FCC.

“When applicants fail to live up to their obligations in a broadband deployment program, it is a setback for all of us,” Commissioner Geoffrey Starks said in a statement. “Defaulting applicants pay a fine, but rural communities that have already waited too long for broadband pay a larger toll.”

The FCC has previously put forward penalties against several other RDOF applicants for defaulting, including a proposed $4.3 million in fines against 73 applicants in July.

These enforcement actions are intended to show that the agency “takes seriously its commitment to hold applicants accountable and ensure the integrity of our universal service funding,” said FCC Chairwoman Jessica Rosenworcel.

Kids Online Safety Act reintroduced

The Kids Online Safety Act was reintroduced on Tuesday by Sens. Marsha Blackburn, R-Tenn., and Richard Blumenthal, D-Conn., sparking a mix of praise and criticism from a broad range of youth health, civil liberties and technology organizations.

Although KOSA ultimately failed to pass in 2022, it won rare bipartisan support, and energetic promotion in both House and Senate hearings kept it gaining momentum even before its official reintroduction in the current session of Congress.

“We need to hold these platforms accountable for their role in exposing our kids to harmful content, which is leading to declining mental health, higher rates of suicide, and eating disorders… these new laws would go a long way in safeguarding the experiences our children have online,” said Johanna Kandel, CEO of the National Alliance for Eating Disorders, in a Tuesday press release applauding the legislation.

However, KOSA’s opponents expressed disappointment that the reintroduced bill appeared largely similar to the original version, failing to substantially address several previous criticisms.

“KOSA’s sponsors seem determined to ignore repeated warnings that KOSA violates the First Amendment and will in fact harm minors,” said Ari Cohn, free speech counsel at TechFreedom, in a press release. “Their unwillingness to engage with these concerns in good faith is borne out by their superficial revisions that change nothing about the ultimate effects of the bill.”

Cohn also claimed that the bill did not clearly establish what constitutes reason for a platform to know that a user is underage.

“In the face of that uncertainty, platforms will clearly have to age-verify all users to avoid liability — or worse, avoid obtaining any knowledge whatsoever and leave minors without any protections at all,” he said. “The most ‘reasonable’ and risk-averse course remains to block minors from accessing any content related to disfavored subjects, ultimately to the detriment of our nation’s youth.”

In addition, the compliance obligations imposed by KOSA could actually undermine teens’ online privacy, argued Matt Schruers, president of the Computer & Communications Industry Association.

“Governments should avoid compliance requirements that would compel digital services to collect more personal information about their users — such as geolocation information and a government-issued identification — particularly when responsible companies are instituting measures to collect and store less data on customers,” Schruers said in a statement.

Lawmakers introduce series of bills targeting AI

Amid growing calls for federal regulation of artificial intelligence, Rep. Yvette Clarke, D-N.Y., on Tuesday introduced a bill that would require disclosure of AI-generated content in political ads.

“Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” Clarke said in a press release. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security.”

Other lawmakers have taken a broader approach to regulating the rapidly evolving technology. Legislation introduced Friday by Sen. Michael Bennet, D-Colo., would create a cabinet-level AI task force to recommend specific legislative and regulatory reforms for AI-related privacy protections, biometric identification standards and risk assessment frameworks.

“As the deployment of AI accelerates, the federal government should lead by example to ensure it uses the technology responsibly,” Bennet said in a press release. “Americans deserve confidence that our government’s use of AI won’t violate their rights or undermine their privacy.”

Earlier in April, Sen. Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability.