House and Senate Tackle Semiconductor Chip Shortages and Seek to Boost U.S. Production

How COVID and supply chain woes are driving $52 billion in incentives for domestic production of semiconductor chips.



Photo of Intel CEO Pat Gelsinger courtesy Web Summit

WASHINGTON, April 21, 2022 – Lawmakers on both sides of the aisle in Washington have made explicit the need to tackle the supply chain crisis and semiconductor shortage by introducing and pushing forward separate pieces of legislation that emphasize more domestic autonomy. But the path of these pieces of legislation isn’t exactly cut-and-dried.

Despite a key bill that would make available $52 billion in incentives for the domestic production of semiconductor chips – critical for computers, cars and networking equipment – passing both chambers, one key Republican said his party has been largely shut out of contributing to the language of the legislation.

Representative John Curtis, R-Utah, a member of the House Energy and Commerce Committee, told Broadband Breakfast this month that Republicans have been shut out of legislative discussions for the past year as the Democratic party has enjoyed a majority in the House and the Senate.

Curtis, who is also head of the Conservative Climate Caucus, said he has yet to be approached by the White House on what seems to be a larger piece of the Biden policy plan for semiconductor and supply chain independence.

This sentiment among House Republicans has created friction as the bill heads to conference committee, which is designed to bring both parties to the table to hammer out language they can agree on before it goes to the president for signing.

Conferees were appointed by House Speaker Nancy Pelosi, D-Calif., on April 7, just before the House began its Easter break. Sorting out differences between the House-passed and Senate-passed versions is likely to be one of the first items of business when representatives return to Washington next week.

Broadband Breakfast has been following the intertwining issue of the supply chain crisis and the U.S.’s ambitions to be more independent when it comes to producing the chips needed to power the future.

As the Covid-19 pandemic has held up key supplies abroad, and as domestic law and policy have moved to shore up national security by banning Chinese-made products from the country's networks and more, this publication outlines the key bills that both chambers are mulling over.

The U.S. Innovation and Competition Act, and the COMPETES Act

The first of these bills is the United States Innovation and Competition Act, S.1260. Passed by the Senate in June 2021 with a margin of 68-32, this bill provides funding through fiscal year 2026 to support domestic semiconductor manufacturing, research and development, and supply chain security. It also targets funding for wireless supply chain innovation.

The initial goal was for this bill to go to the House for votes, but the House instead presented a bill with similar goals – the America Creating Opportunities for Manufacturing, Pre-Eminence in Technology, and Economic Strength Act of 2022 (America COMPETES Act of 2022, H.R. 4521) – which was introduced in the summer of 2021.

It passed the House on February 4 by a margin of 222-210, demonstrating the tension between the parties, and then passed the Senate in late March by a 68-28 spread.

The COMPETES Act must now go to the aforementioned conference committee to iron out the parties' differences. After that is done, the bill will make another voting trip through both chambers and, if successful, head to the president for signing.

“Over the past year, the House and Senate have acted independently to pass their own versions of competitiveness legislation,” Senate Majority Leader Chuck Schumer, D-N.Y., said on March 17, as he addressed the Senate. “To reconcile the differences between these bills, both chambers must enter a conference before we send the final product to the president’s desk.”

The CHIPS funding proposal

The separate bills have common ground in that they both call for injecting $52 billion over five years into the Creating Helpful Incentives to Produce Semiconductors (CHIPS) For America Fund, a Treasury Department coffer that was created through the 2021 National Defense Authorization Act.

That money will be divided into a $39-billion financial incentives program and an $11-billion research and development program.

Mike Molnar, the founding director of the Advanced Manufacturing National Program Office, which is responsible for the Manufacturing USA program, said during a question-and-answer session about the CHIPS program Tuesday that the money is “not too much” and that both the incentives program and the R&D program must be paired for their goal to be achieved.

The federal government has been fielding comments since January about the makeup of the program, which will go toward producing chips and semiconductors that may otherwise be imported from countries like Taiwan, China, and South Korea.

Pressure mounts for some form of legislation

In March, the Senate Committee on Commerce, Science, and Transportation heard that only 12 percent of chip manufacturing occurs in America, with 6 percent of that coming from Intel. However, while Intel – which appeared alongside three other technology companies at the hearing – has remained primarily U.S.-based, Intel CEO Pat Gelsinger said that other countries can make the same chips 30 to 80 percent more cheaply.

Intel is currently scheduled to break ground this year on a $20 billion semiconductor manufacturing “mega-site” in rural Ohio.

Gelsinger also shared that many overseas companies have government subsidies and incentives, which makes it easier and cheaper for them to make the chips. He said this is a key reason why Congress needs to pass the competitive legislative bill with the CHIPS funding included in the final product.

In February 2021, President Joe Biden signed an executive order on America's supply chains to begin efforts to restore them, and this past February, exactly one year later, the administration released a comprehensive plan based on the results of the work ordered by that directive.

The report evaluated the current state of the supply chain, including as it affects technology, how the U.S. can develop its own manufacturing assets, where those assets would fit in the current supply chain, and how it would affect competition.

Reports such as those lend credence to lawmakers pushing for funding measures like those under the America COMPETES Act, which may still have some way to go before the president can approve it.

Reporter Ashlan Gruwell studied political science at Brigham Young University. She has immersed herself in principles of American politics and voter behavior. She also enjoys traveling internationally and hopes to visit the Nordic Region of Europe next.

Artificial Intelligence

Companies Must Be Transparent About Their Use of Artificial Intelligence

Making the use of AI known is key to addressing any pitfalls, researchers said.



WASHINGTON, September 20, 2023 – Researchers at an artificial intelligence workshop Tuesday said companies should be transparent about their use of algorithmic AI in things like hiring processes and content writing. 

Andrew Bell, a fellow at the New York University Center for Responsible AI, said that making the use of AI known is key to addressing any pitfalls AI might have. 

Algorithmic AI is behind systems like chatbots, which can generate text and answers to questions. It is used in hiring processes to quickly screen resumes, or in journalism to write articles. 

According to Bell, ‘algorithmic transparency’ is the idea that “information about decisions made by algorithms should be visible to those who use, regulate, and are affected by the systems that employ those algorithms.”

The need for this kind of transparency comes after events like Amazon's old AI recruiting tool showing bias against women in the hiring process, or OpenAI, the company that created ChatGPT, being probed by the FTC for generating misinformation. 

Incidents like these have brought the topic of regulating AI and making sure it is transparent to the forefront of Senate conversations.

Senate committee hears need for AI regulation

The Senate's subcommittee on consumer protection heard on September 12 about proposals to make AI use more transparent, including disclosing when AI is being used and developing tools to predict and understand the risks associated with different AI models.

Similar transparency methods were mentioned by Bell and his supervisor Julia Stoyanovich, the Director of the Center for Responsible AI at New York University, a research center that explores how AI can be made safe and accessible as the technology evolves. 

According to Bell, a transparency label on algorithmic AI would “[provide] insight into ingredients of an algorithm.” Similar to a nutrition label, a transparency label would identify all the factors that go into algorithmic decision making.  

Data visualization was another option suggested by Bell, which would require a company to put up a public-facing document that explains the way their AI works, and how it generates the decisions it spits out. 

Adding in those disclaimers creates a better ecosystem between AI and AI users, increasing levels of trust between all stakeholders involved, explained Bell.

Bell and his supervisor built their workshop around an Algorithm Transparency Playbook, a document they published that has straightforward guidelines on why transparency is important and ways companies can go about it. 

Tech lobbying groups like the Computer and Communications Industry Association, which represents Big Tech companies, have nevertheless spoken out in the past against the Senate regulating AI, claiming that it could stifle innovation. 


Artificial Intelligence

Congress Should Mandate AI Guidelines for Transparency and Labeling, Say Witnesses

Transparency around data collection and risk assessments should be mandated by law, especially in high-risk applications of AI.



Screenshot of the Business Software Alliance's Victoria Espinel at the Commerce subcommittee hearing

WASHINGTON, September 12, 2023 – The United States should enact legislation mandating transparency from companies making and using artificial intelligence models, experts told the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security on Tuesday.

It was one of two AI policy hearings on the Hill Tuesday, the other held by the Senate Judiciary Committee, alongside a meeting of the executive branch's National AI Advisory Committee.

The Senate Commerce subcommittee asked witnesses how AI-specific regulations should be implemented and what lawmakers should keep in mind when drafting potential legislation. 

“The unwillingness of leading vendors to disclose the attributes and provenance of the data they’ve used to train models needs to be urgently addressed,” said Ramayya Krishnan, dean of Carnegie Mellon University’s college of information systems and public policy.

Addressing problems with transparency of AI systems

Addressing the lack of transparency might look like standardized documentation outlining data sources and bias assessments, Krishnan said. That documentation could be verified by auditors and function “like a nutrition label” for users.

Witnesses from both private industry and human rights advocacy agreed legally binding guidelines – both for transparency and risk management – will be necessary. 

Victoria Espinel, CEO of the Business Software Alliance, a trade group representing software companies, said the AI risk management framework developed in March by the National Institute of Standards and Technology was important, “but we do not think it is sufficient.”

“We think it would be best if legislation required companies in high-risk situations to be doing impact assessments and have internal risk management programs,” she said.

Those mandates – along with other transparency requirements discussed by the panel – should look different for companies that develop AI models and those that use them, and should only apply in the most high-risk applications, panelists said.

That last suggestion is in line with legislation being discussed in the European Union, which would apply differently depending on the assessed risk of a model’s use.

“High-risk” uses of AI, according to the witnesses, are situations in which an AI model is making consequential decisions, like in healthcare, hiring processes, and driving. Less consequential machine-learning models like those powering voice assistants and autocorrect would be subject to less government scrutiny under this framework.

Labeling AI-generated content

The panel also discussed the need to label AI-generated content.

“It is unreasonable to expect consumers to spot deceptive yet realistic imagery and voices,” said Sam Gregory, director of human rights advocacy group WITNESS. “Guidance to look for a six-fingered hand or spot virtual errors in a puffer jacket do not help in the long run.”

With elections in the U.S. approaching, panelists agreed mandating labels on AI-generated images and videos will be essential. They said those labels will have to be more comprehensive than visual watermarks, which can be easily removed, and might take the form of cryptographically bound metadata.

Labeling content as being AI-generated will also be important for developers, Krishnan noted, as generative AI models become much less effective when trained on writing or images made by other AIs.

Privacy around these content labels was a concern for panelists. Some protocols for verifying the origins of a piece of content with metadata require the personal information of human creators.

“This is absolutely critical,” said Gregory. “We have to start from the principle that these approaches do not oblige personal information or identity to be a part of them.”

Separately, the executive branch committee that met Tuesday was established under the National AI Initiative Act of 2020 and is tasked with advising the president on AI-related matters. The NAIAC gathers representatives from the Departments of State, Defense, Energy and Commerce, together with the Attorney General, Director of National Intelligence, and Director of Science and Technology Policy.


Artificial Intelligence

Tech Policy Group CCIA Speaks Out Against AI Regulation

The trade group represents major tech companies like Amazon and Google.



WASHINGTON, September 12, 2023 – A policy director at the Computer and Communications Industry Association spoke out on Tuesday against impending artificial intelligence regulations in the European Union and United States.

The CCIA represents some of the biggest tech companies in the world, with members including Amazon, Google, Meta, and Apple.

“The E.U. approach will focus very much on the technology itself, rather than the use of it, which is highly problematic,” said Boniface de Champris, CCIA’s Europe policy manager, at a panel hosted by the Cato Institute. “The requirements would basically inhibit the development and use of cutting edge technology in the E.U.”

This echoes de Champris’s American counterparts, who have argued in front of Congress that AI-specific laws would stifle innovation.

The European Parliament is aiming to reach an agreement by the end of the year on the AI Act, which would put regulations on all AI systems based on their assessed risk level. 

The E.U.'s Digital Services Act, legislation that tightens privacy rules and expands transparency requirements, also began applying to the largest online platforms in August. Under the law, users can opt to turn off artificial intelligence-enabled content recommendations.

U.S. President Joe Biden announced in July that seven major AI and tech companies – including CCIA members Amazon, Meta, and Google – made voluntary commitments to various AI safeguards, including information sharing and security testing.

Multiple U.S. agencies are exploring more binding AI regulation. Both the Senate Judiciary Committee and the Senate consumer protection subcommittee were set to hold hearings on potential AI policy later on Tuesday. The Judiciary hearing was to include testimony from Microsoft President Brad Smith and William Dally, chief scientist at AI and graphics company NVIDIA.

The House Energy and Commerce Committee passed in July the Artificial Intelligence Accountability Act, which gives the National Telecommunications and Information Administration a mandate to study accountability measures for artificial intelligence systems used by telecom companies.

