
Artificial Intelligence

Sam Altman to Join Microsoft, New FCC Broadband Map, Providers Form 4.9 GHz Coalition

After being fired Friday by the board of OpenAI, former CEO Altman will join Microsoft to lead a new AI research team.


Photo of Sam Altman, taken 2017, used with permission.

November 20, 2023 – Microsoft CEO Satya Nadella announced in an X post Monday that former OpenAI CEO Sam Altman will be joining Microsoft after being fired from the machine learning company. 

Over the course of the last four days, OpenAI has undergone several shifts in leadership, including OpenAI investor Microsoft hiring OpenAI president and chairman Greg Brockman to lead an AI research team alongside Altman.

Brockman, who had concurrently been relieved of his role as chairman of the OpenAI board, announced his resignation Friday via X upon learning that the board had decided to fire Altman.

OpenAI said in a blog post Friday that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

OpenAI then told The Information Saturday that Emmett Shear, co-founder of streaming site Twitch, would serve as CEO, after CTO Mira Murati had filled the role in the interim.

Following Nadella’s announcement Monday morning, nearly 500 of OpenAI’s roughly 700 employees signed a letter threatening to leave their roles and work under Altman and Brockman at Microsoft unless all of the current board members resigned.

On Monday, OpenAI board member Ilya Sutskever posted a message of regret on X regarding the board’s decision to remove Altman and Brockman. The phrase “OpenAI is nothing without its people” has since been spreading across employees’ X accounts.

FCC announces new national broadband map

The head of the Federal Communications Commission announced Friday the third iteration of its national broadband map, showing just over 7.2 million locations lack access to high-speed internet.

That is down from the 8.3 million identified in May.

FCC Chairwoman Jessica Rosenworcel noted that the map data are fluctuating less between iterations, a sign of improving accuracy.

Previous iterations of the national broadband map had been criticized for not accurately depicting areas with and without service, with widespread concern that the inaccuracies would affect the allocation of Broadband Equity, Access and Deployment funding.

The map outlines where adequate broadband service is and is not available throughout the nation and provides viewers with information on the providers who service those areas and the technology used to do so. 

Providers form spectrum advocacy coalition 

A group of telecom industry players, including Verizon and T-Mobile, announced Thursday the formation of the Coalition for Emergency Response and Critical Infrastructure to advocate on use of the 4.9 gigahertz (GHz) spectrum band.

The coalition supports prioritizing state and local public safety agencies as the main users of the 4.9 GHz band, while ensuring that any non-public safety licensees operating on the band avoid causing interference.

“Public Safety agencies have vastly different needs from jurisdiction to jurisdiction, and they should decide what compatible non-public-safety use means within their jurisdictions,” read the coalition’s letter.  

In January of this year, the FCC adopted a report to manage the use of the 4.9 GHz band, while seeking comment on the role a band manager would play in facilitating license allocation between public safety and non-public safety entities. 

It had proposed two methods of operation for the band manager: either lease access rights from public safety entities and sublease them to non-public safety entities, or facilitate direct subleasing between public safety operators and external parties.

In its letter to the FCC, the coalition announced support for the second of those methods, stressing that it would allow public safety license holders to retain authority over whom they sublease their spectrum to.

Reporter Hanna Agro studied journalism at Columbia University, focusing on news reporting and video production. For Broadband Breakfast, she has covered broadband deployment, rural area investment and artificial intelligence. She has also done culture reporting and documentary production.

Artificial Intelligence

Sam Altman to Rejoin OpenAI, Tech CEOs Subpoenaed, EFF Warns About Malware

Altman was brought back to OpenAI only days after being fired.


Photo of Snap CEO Evan Spiegel, taken 2019, used with permission.

November 22, 2023 – OpenAI announced in an X post early Wednesday morning that Sam Altman will be re-joining the company that built ChatGPT as CEO after he was fired on Friday. 

Altman confirmed his intention to rejoin OpenAI in an X post Wednesday morning, saying that he was looking forward to returning to OpenAI with support from the new board.

Former company president Greg Brockman also said Wednesday he will return to the AI company.

Altman and Brockman will return alongside a newly formed board, which includes former Salesforce co-CEO Bret Taylor as chair, former US Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo, who previously held a seat on the OpenAI board.

Satya Nadella, CEO of OpenAI backer Microsoft, voiced support for both Brockman and Altman rejoining OpenAI, adding that he looks forward to continuing to build a relationship with the OpenAI team in order to best deliver AI services to customers.

OpenAI received backlash from several hundred employees who threatened to leave and join Microsoft under Altman and Brockman unless the current board of directors agreed to resign.  

Tech CEOs subpoenaed to attend hearing

Sens. Dick Durbin, D-Illinois, and Lindsey Graham, R-South Carolina, announced Monday that tech giants Snap, Discord and X have been issued subpoenas to appear before the Senate Judiciary Committee on December 6 over concerns about child sexual exploitation online.

Snap CEO Evan Spiegel, X CEO Linda Yaccarino and Discord CEO Jason Citron have been asked to address how, or whether, they have worked to confront the issue.

Durbin said in a press release that the committee “promised Big Tech that they’d have their chance to explain their failures to protect kids. Now’s that chance. Hearing from the CEOs of some of the world’s largest social media companies will help inform the Committee’s efforts to address the crisis of online child sexual exploitation.” 

Durbin noted in a press release that both X and Discord initially refused to accept the subpoenas, requiring the US Marshals Service to deliver the documents personally.

The committee is also looking to have Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew testify, but has not received confirmation regarding their attendance.

Several bipartisan bills have been brought forth to address that kind of exploitation, including the EARN IT Act, proposed by Sens. Richard Blumenthal, D-Connecticut, and Graham, which would hold platforms liable under child sexual abuse material laws.

EFF urging FTC to sanction sellers of malware-containing devices

The Electronic Frontier Foundation, a non-profit digital rights group, asked the Federal Trade Commission in a November 14 letter to sanction resellers such as Amazon and AliExpress following allegations that mobile devices and Android TV boxes purchased from their stores contain malware.

The letter explained that once the devices were turned on and connected to the internet,  they would begin “communicating with botnet command and control (C2) servers. From there, these devices connect to a vast click-fraud network which a report by HUMAN Security recently dubbed BADBOX.”

The EFF added that the malware often operates without the consumer’s knowledge, and that without advanced technical knowledge there is nothing consumers can do to remedy it themselves.

“These devices put buyers at risk not only by the click-fraud they routinely take part in, but also the fact that they facilitate using the buyers’ internet connections as proxies for the malware manufacturers or those they sell access to,” explained the letter. 

EFF said the devices containing malware included ones manufactured by Chinese companies AllWinner and RockChip, both of which it has previously reported shipping products with malware.


Artificial Intelligence

FCC Cybersecurity Pilot Program, YouTube AI Regulations, Infrastructure Act Anniversary

The FCC has proposed a pilot program to help schools and libraries protect against cyberattacks.


Photo of a fourth-grade computer lab, taken 2009, used with permission.

November 15, 2023 – The Federal Communications Commission proposed Monday a cybersecurity pilot program for schools and libraries, which would require a three-year, $200 million investment to determine how best to protect K-12 students from cyberattacks.

In addition to assessing which cybersecurity services are best suited to students’ and schools’ needs, the program would subsidize the cost of those services in schools.

The program would operate as a standalone Universal Service Fund program, separate from the existing school internet subsidy program known as E-Rate.

“This pilot program is an important pathway for hardening our defenses against sophisticated cyberattacks on schools and ransomware attacks that harm our students and get in the way of their learning,” said FCC Chairwoman Jessica Rosenworcel.

The proposal would be part of the larger Learn Without Limits initiative, which supports internet connectivity in schools to help reduce the homework gap by expanding kids’ access to digital learning.

YouTube rolling out AI content regulations 

Alphabet’s video sharing platform YouTube announced in a blog post Tuesday that it will be rolling out AI guidelines over the next few months that will inform viewers when they are interacting with “synthetic” or AI-generated content.

The rules will require creators to disclose whether a video contains AI-generated content. Creators who don’t disclose that information could see their work flagged and removed, and they may be suspended from the platform or subject to other penalties.

For viewers, tags will appear in the description panel of videos indicating whether the video is synthetic or AI-generated. YouTube noted that for videos dealing with more sensitive topics, it may use more prominent labels.

YouTube’s AI guidelines come at a time when members of Congress and industry leaders are calling for increased efforts toward AI regulation, and after President Joe Biden signed an executive order on AI in October.

Two-year anniversary of the Infrastructure Investment and Jobs Act

Thursday marked the second anniversary of the Infrastructure Investment and Jobs Act, which prompted a $400-billion investment in the US economy.

The IIJA funded a variety of programs and initiatives, with over 40,000 sector-specific projects having received funding – several of them aimed at improving the broadband sector.

The IIJA invested $65 billion in improving connectivity, including establishing the $14-billion Affordable Connectivity Program, which has so far helped more than 20 million US households get affordable internet through a monthly subsidy of $30, or $75 on Tribal lands.

Outside of the ACP, the IIJA called on the National Telecommunications and Information Administration to develop the Broadband Equity, Access and Deployment program, a $42.5-billion investment in high-speed broadband deployment across all 50 states.

States are currently submitting their BEAD draft proposals, which outline how they will administer the funding they receive, account for funding they already have, and make use of broadband mapping data.


Artificial Intelligence

Will Rinehart: Unpacking the Executive Order on Artificial Intelligence

Most are underweighting the legal challenges and problems to rule of law.


The author of this Expert Opinion is Will Rinehart, senior research fellow at the Center for Growth and Opportunity.

If police are working on an investigation and want to tap your phone lines, they’ll effectively need to get a warrant. They will also need to get a warrant to search your home, your business, and your mail.

But if they want to access your email, all they need to do is wait 180 days.

Because of a 1986 law called the Electronic Communications Privacy Act, people using third-party email providers, like Gmail, only get 180 days of warrant protection. It’s an odd quirk of the law that only exists because no one in 1986 could imagine holding onto emails longer than 180 days. There simply wasn’t space for it back then!¹

ECPA is a stark illustration of a consistent phenomenon in government: policy choices, especially technical requirements, have durable and long-lasting effects. There are more mundane examples as well. GPS could be dramatically more accurate, but when the optical system was recently upgraded, it was held back by a technical requirement in the Federal Enterprise Architecture Framework (FEAF) of 1999. More accurate headlights have been shown to reduce night crashes, yet adaptive headlights were only approved last year, nearly 16 years after Europe, because of technical requirements in FMVSS 108. All it takes is one law or regulation to crystallize an idea into an enduring framework that fails to keep up with developments.

I fear the approach pushed by the White House in its recent executive order on AI might represent another crystallization moment. ChatGPT has been public for a year, the models on which it is based are only five years old, and yet the administration is already working to set the terms for regulation.

The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” is sprawling. It spans 13 sections, extends over 100 pages, and lays out nearly 100 deliverables for every major agency. While there are praiseworthy elements to the document, there is also a lot of cause for concern.

Among the biggest changes is the new authority the White House has claimed over newly designated “dual use foundation models.” As the EO defines it, a dual-use foundation model is

  • an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.

While the designation seems to be common sense, it is new and without provenance. Until last week, no one had talked about dual use foundation models. Rather, the designation does comport with the power the president has over the export of military tech.

As the EO explains it, the administration is especially interested in those models with the potential to

  • lower the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear weapons;
  • enable powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
  • permit the evasion of human control or oversight through means of deception or obfuscation

The White House is justifying its regulation of these models under the Defense Production Act, a federal law first enacted in 1950 to respond to the Korean War. Modeled after World War II’s War Powers Acts, the DPA was part of a broad civil defense and war mobilization effort that gave the President the power to requisition materials and property, expand government and private defense production capacity, ration consumer goods, and fix wage and price ceilings, among other powers.

The DPA is reauthorized every five years, which has allowed Congress to expand the set of presidential powers in the DPA. Today, the allowable use of DPA extends far beyond U.S. military preparedness and includes domestic preparedness, response, and recovery from hazards, terrorist attacks, and other national emergencies. The DPA has long been intended to address market failures and slow procurement processes in times of crisis. Now the Biden Administration is using DPA to force companies to open up their AI models.

The administration’s invocation of the Defense Production Act is clearly a strategic maneuver to utilize the maximum extent of its DPA power in service of Biden’s AI policy agenda. The difficult part of this process now sits with the Department of Commerce, which has 90 days to issue regulations.

In turn, the Department will likely use the DPA’s industrial base assessment power to force companies to disclose various aspects of their AI models. Soon enough, companies with dual use foundation models will have to report to the government the results of tests based on guidance developed by the National Institute of Standards and Technology (NIST). But that guidance won’t be available for another 270 days. In other words, Commerce will regulate companies without knowing what they will be beholden to.

Recent news from the United Kingdom suggests that all of the major players in AI are going to be included in the new regulation. In closing out a two-day summit on AI, British Prime Minister Rishi Sunak announced that eight companies were going to give deeper access to their models under an agreement signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K. Those eight companies included Amazon Web Services, Anthropic, Google, as well as its subsidiary DeepMind, Inflection AI, Meta, Microsoft, Mistral AI, and OpenAI.

Thankfully, the administration isn’t pushing for a pause on AI development, denouncing more advanced models, or suggesting that AI needs to be licensed. But this is probably because doing so would face a tough legal challenge. Indeed, it seems little appreciated by the AI community that the demand to report on models is a kind of compelled speech, which has typically triggered First Amendment scrutiny. But the courts have occasionally recognized that compelled commercial speech may actually advance First Amendment interests more than undermine them.

The EO clearly marks a shift in AI regulation because of what will come next. In addition to the countless deliverables, the EO encourages agencies to use their full power to advance rulemaking.

For example, the EO explains that,

  • the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.

Innocuous as it may seem, the Federal Trade Commission, as well as all of the other agencies that have been encouraged to use their power by the administration, could come under court scrutiny. In West Virginia v. EPA, the Supreme Court made it more difficult for agencies to expand their power when the court established the major questions doctrine. This new line of legal reasoning takes an ax to agency delegation. Unless there’s explicit, clear-cut authority granted by Congress, an agency cannot regulate a major economic or political issue. Agency efforts to push rules on AI could get caught up by the courts.

To be fair, there are a lot of positive actions that this EO advances.² But details matter, and it will take time for the critical details to emerge.

Meanwhile, we need to be attentive to the creep of power. As Adam Thierer described this catch-22,

  • While there is nothing wrong with federal agencies being encouraged through the EO to use NIST’s AI Risk Management Framework to help guide sensible AI governance standards, it is crucial to recall that the framework is voluntary and meant to be highly flexible and iterative—not an open-ended mandate for widespread algorithmic regulation. The Biden EO appears to empower agencies to gradually convert that voluntary guidance and other amorphous guidelines into a sort of back-door regulatory regime (a process made easier by the lack of congressional action on AI issues).

In all, the EO is a mixed bag that will take time to shake out. On this, my colleague Neil Chilson is right: some of it is good, some is bad, and some is downright ugly.

Still, the path we are currently navigating with the Executive Order on AI parallels similar paths in ECPA, GPS, and adaptive lights. It underscores a fundamental truth about legal decisions: even the technical rules we set today will shape the landscape for years, perhaps decades, to come. As we move forward, we must tread carefully, ensuring that our legal frameworks are adaptable and resilient, capable of evolving alongside the very technologies they seek to regulate.

Will Rinehart is a senior research fellow at the Center for Growth and Opportunity, where he specializes in telecommunication, internet and data policy, with a focus on emerging technologies and innovation. He was formerly the Director of Technology and Innovation Policy at the American Action Forum and before that a research fellow at TechFreedom and the director of operations at the International Center for Law & Economics. This piece originally appeared in the Exformation Newsletter on November 9, 2023, and is reprinted with permission.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

