Artificial Intelligence

Increase US Competitiveness with China Through AI and Spectrum, Experts Urge

‘If the U.S. doesn’t lead, China will.’


Screenshot of Representative Mike Gallagher, R-Wisconsin.

WASHINGTON, July 20, 2023 – Maintaining U.S. competitiveness with China requires leveraging artificial intelligence for supply chain monitoring and allocating mid-band spectrum for commercial use, said experts Thursday. 

It is critical that the United States reduces its dependency on China in key areas including microelectronics, electric vehicles, solar panels, pharmaceutical ingredients, rare earth minerals processing, and more, said Rep. Mike Gallagher, R-Wisconsin, at a Punchbowl News event. He added that it is essential that American companies and governments are aware of their own supply chain risks and vulnerable areas.  

Artificial intelligence can be deployed to understand vulnerabilities in the supply chain, said Carrie Wibben, president of government solutions at supply chain management software company Exiger. 

American adversaries have long used AI to determine where to penetrate the American supply chain ecosystem and gain a strategic advantage over the United States, said Wibben. She reported that the Department of Defense is moving quickly to increase visibility into its supply chain and implement new technology. 

AI and supply chains are two fronts on which the U.S. competes to maintain global dominance, said Wibben. She encouraged coordinating the two to develop a strategy that preserves U.S. global competitiveness and strengthens national security. 

A major concern in Congress is the nation’s reliance on China for its supply chain, added Rep. Raja Krishnamoorthi, D-Illinois. He said that the best solution is diversifying in the private sector, meaning that companies have redundant suppliers.  

In many cases, this can be done without government intervention, but where the private sector lacks the knowledge base to replicate these systems, it is essential that the government step in and provide incentives, Krishnamoorthi said. Congress has passed several laws, including the Inflation Reduction Act and the CHIPS and Science Act, that invest billions of dollars in American-made clean energy and semiconductors. 

Krishnamoorthi said that the White House is doing what it can to prevent aggression from the People's Republic of China from materializing into conflict.  

Need more spectrum 

Allocating more licensed spectrum for commercial use to support 5G is essential to maintaining U.S. competitiveness with China, said panelists at a separate American Enterprise Institute event Thursday.  

5G, the next-generation wireless mobile network, enables higher speeds, lower latency, and greater reliability. For a democratic state, 5G will enable more expression, innovation, human freedom, and opportunities to solve world challenges in health and climate, said Clete Johnson, senior fellow at the Center for Strategic and International Studies. For an authoritarian state, the same technology will enable policing of citizens, social control, and an overarching understanding of what people are doing, he said.  

If the U.S. is behind China in allocating the spectrum that 5G rides on, then China will dominate cyber and information operations, including force projections and more capable weaponry, warned Johnson. “If we don’t lead, China will.” 

“Commercial strength is national security,” said Johnson, referring to the need to allocate spectrum for commercial use.  

China recognizes the value of 5G and how this kind of foundation will enable industrial and commercial activity, said Peter Rysavy, president of wireless consultancy Rysavy Research. The country has allocated three times as much mid-band spectrum for commercial use as the U.S. has, he said.  

No amount of spectrum efficiency and sharing mechanisms will replace having more spectrum available, added Paroma Sanyal, principal at economic consultancy Brattle Group. The U.S. government needs to get more spectrum into the pipeline, she said. 

A former administrator of the National Telecommunications and Information Administration said on a panel last week that national security depends on commercial access to spectrum. “If you take economic security out of the national security equation, you damage national security and vice versa,” John Kneuer said. 

Kneuer suggested that giving the commercial sector access to more spectrum serves this goal: the innovation that accompanies increased economic activity can spill back into federal agencies, providing new capabilities they would not otherwise have had.   

The Federal Communications Commission is evaluating how artificial intelligence can be used in dynamic spectrum sharing to optimize traffic and prevent harmful interference. AI can be used to make congestion control decisions and sense when federal agencies are using the bands to allow commercial use on federally owned spectrum without disrupting high-priority use. 
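The sensing-based sharing described above can be illustrated with a toy energy-detection sketch. The detection threshold and power readings below are purely illustrative assumptions, not values from any FCC proceeding:

```python
# Toy energy-detection sketch of dynamic spectrum sharing:
# commercial use is permitted only while measured power in a
# federal band stays below a detection threshold.
# The threshold and sample values are illustrative assumptions.

DETECTION_THRESHOLD_DBM = -90.0  # hypothetical sensing threshold

def federal_user_active(power_samples_dbm):
    """Return True if any sample suggests incumbent (federal) activity."""
    return any(p > DETECTION_THRESHOLD_DBM for p in power_samples_dbm)

def grant_commercial_access(power_samples_dbm):
    """Allow commercial transmission only when the band appears idle."""
    return not federal_user_active(power_samples_dbm)

# Example: a quiet band versus one with an active incumbent
quiet = [-105.2, -101.7, -98.4]
busy = [-104.0, -72.5, -99.1]
print(grant_commercial_access(quiet))  # True: band appears idle
print(grant_commercial_access(busy))   # False: incumbent detected
```

A deployed system would replace the fixed threshold with learned congestion and occupancy models, which is where the AI the FCC is evaluating would come in.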

This comes as the FCC faces spectrum availability concerns. At its June open meeting, the FCC issued a proposed rulemaking that explores how the 42–42.5 GHz band might be made available on a shared basis. The agency's spectrum auction authority, however, expired earlier this year. 

The head of the NTIA announced this week that the national spectrum strategy is set to be complete by the end of the year. It will represent a government-wide approach to maximizing the potential of the nation's spectrum resources and will take into account input from government agencies and the private sector. 

Rep. Doris Matsui, D-Calif., is sponsoring two bills, the Spectrum Relocation Enhancement Act and the Spectrum Coexistence Act. The first would update the spectrum relocation fund, which compensates federal agencies for clearing spectrum for commercial use; the second would require the NTIA to review federal receiver technology to support more intensive use of limited spectrum.    



Still Learning About Artificial Intelligence, Legislators Say Congress Must Act

Markey also urged Meta CEO Mark Zuckerberg to halt the release of an AI-powered chatbot.


Sen. Ed Markey, D-Mass. (left), at Politico's AI and Tech Summit

WASHINGTON, September 30, 2023 – Although Congress is still learning key aspects of artificial intelligence, senators and representatives speaking at an AI summit on Wednesday said they believed the urgency of the moment required the passage of “some narrow pieces” of legislation.

The same day that Sen. Ed Markey, D-Mass., sent a letter to Meta CEO Mark Zuckerberg urging him to halt the release of AI-powered chatbots the social media giant plans to integrate into its platforms, Markey urged the Federal Trade Commission to protect minors from AI-powered software.

Markey, speaking at Politico's AI and Tech Summit, cited suicide rates among minors using social media and a recent warning from the Surgeon General about social media and adolescent mental health.

“We’re not going to be able to handle devices talking to young people in our society without understanding what the safeguards are going to be,” Markey said.

His message to Big Tech was: “Don’t deploy it until we get the answers to what the safeguards are going to be for the young people in our society.”

Similarly, Sen. Todd Young, R-Indiana, said he believed it was “very likely” that Congress would pass “some narrow pieces” of a regime regulating AI.

“I hope we go wider and consider a host of different legislative proposals because our innovators, our entrepreneurs, our researchers, our national security committee, they all say that we need to act in this space and we continue to lead the way of the world and manage the many risks that are out there around the financial markets,” Young said.

Other legislators proposed other specific facets of AI regulation.

Rep. Ted Lieu, D-Calif., proposed a law to prevent AI from autonomously launching nuclear weapons. He also suggested a national AI commission.

Such a commission would help create a public record of how and why AI should be regulated. That approach, Lieu suggested, would be preferable to the closed-door briefings on the topic that Senate Majority Leader Chuck Schumer, D-N.Y., has been hosting with tech giants.

“AI is innovating so quickly that I think it’s important that we have the national AI commission experts,” Lieu said. “There’s quite a lot of legislation to work on that, that can make recommendations from Congress asking what kind of AI we might want to regulate, how we might want to do about doing so and also provide some time for AI to be developed.”

Rep. Jay Obernolte, R-Calif., vice chair of the Congressional Artificial Intelligence Caucus, said that Congress is doing a “great job” educating itself on AI, but that a human-centric framework for AI legislation still needs to be properly defined.

“By framework, I don’t mean a bunch of buzzwords flying in close formation, right?” Obernolte said. “What does it mean for AI to be human centered? What role does government have in making sure that they are human centered?”



Companies Must Be Transparent About Their Use of Artificial Intelligence

Making the use of AI known is key to addressing any pitfalls, researchers said.



WASHINGTON, September 20, 2023 – Researchers at an artificial intelligence workshop Tuesday said companies should be transparent about their use of algorithmic AI in things like hiring processes and content writing. 

Andrew Bell, a fellow at the New York University Center for Responsible AI, said that making the use of AI known is key to addressing any pitfalls AI might have. 

Algorithmic AI is behind systems like chatbots, which can generate text and answers to questions. It is used in hiring processes to quickly screen resumes and in journalism to write articles. 

According to Bell, ‘algorithmic transparency’ is the idea that “information about decisions made by algorithms should be visible to those who use, regulate, and are affected by the systems that employ those algorithms.”

The need for this kind of transparency follows incidents such as Amazon's old AI recruiting tool, which showed bias against women in the hiring process, and the FTC's probe of OpenAI, the company that created ChatGPT, over generated misinformation. 

Incidents like these have brought the topic of regulating AI and making sure it is transparent to the forefront of Senate conversations.

Senate committee hears need for AI regulation

The Senate’s subcommittee on consumer protection heard proposals on September 12 to make AI use more transparent, including disclosing when AI is being used and developing tools to predict and understand the risks associated with different AI models.

Similar transparency methods were mentioned by Bell and his supervisor Julia Stoyanovich, director of the Center for Responsible AI at New York University, a research center that explores how AI can be made safe and accessible as the technology evolves. 

According to Bell, a transparency label on algorithmic AI would “[provide] insight into ingredients of an algorithm.” Similar to a nutrition label, a transparency label would identify all the factors that go into algorithmic decision making.  

Bell also suggested data visualization, which would require a company to publish a public-facing document explaining how its AI works and how it generates the decisions it spits out. 
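As a rough illustration of the nutrition-label idea Bell describes, a transparency label could be represented as structured metadata. Every field name and value below is hypothetical, not drawn from the Center's playbook:

```python
import json

# Hypothetical fields for an algorithmic transparency label,
# loosely analogous to a nutrition label: each entry names a
# factor that goes into the model's decisions.
transparency_label = {
    "system": "resume-screening-model",        # illustrative name
    "purpose": "rank applicants for interview",
    "inputs": ["years_of_experience", "skills", "education"],
    "excluded_attributes": ["gender", "age", "name"],
    "training_data_source": "internal hiring records, 2015-2020",
    "known_limitations": ["may under-rank nontraditional career paths"],
    "last_bias_audit": "2023-06-01",
}

# A public-facing document could render this as a readable label.
print(json.dumps(transparency_label, indent=2))
```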

Adding in those disclaimers creates a better ecosystem between AI and AI users, increasing levels of trust between all stakeholders involved, explained Bell.

Bell and his supervisor built their workshop around an Algorithm Transparency Playbook, a document they published that has straightforward guidelines on why transparency is important and ways companies can go about it. 

Tech lobbying groups like the Computer and Communications Industry Association, which represents Big Tech companies, however, have spoken out in the past against the Senate regulating AI, claiming that regulation could stifle innovation. 



Congress Should Mandate AI Guidelines for Transparency and Labeling, Say Witnesses

Transparency around data collection and risk assessments should be mandated by law, especially in high-risk applications of AI.


Screenshot of the Business Software Alliance's Victoria Espinel at the Commerce subcommittee hearing

WASHINGTON, September 12, 2023 – The United States should enact legislation mandating transparency from companies making and using artificial intelligence models, experts told the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security on Tuesday.

It was one of two AI policy hearings on the Hill Tuesday, alongside a Senate Judiciary Committee hearing, as well as a meeting of the National AI Advisory Committee, an executive branch advisory body.

The Senate Commerce subcommittee asked witnesses how AI-specific regulations should be implemented and what lawmakers should keep in mind when drafting potential legislation. 

“The unwillingness of leading vendors to disclose the attributes and provenance of the data they’ve used to train models needs to be urgently addressed,” said Ramayya Krishnan, dean of Carnegie Mellon University’s college of information systems and public policy.

Addressing problems with transparency of AI systems

Addressing the lack of transparency might look like standardized documentation outlining data sources and bias assessments, Krishnan said. That documentation could be verified by auditors and function “like a nutrition label” for users.

Witnesses from both private industry and human rights advocacy agreed legally binding guidelines – both for transparency and risk management – will be necessary. 

Victoria Espinel, CEO of the Business Software Alliance, a trade group representing software companies, said the AI risk management framework developed in March by the National Institute of Standards and Technology was important, “but we do not think it is sufficient.”

“We think it would be best if legislation required companies in high-risk situations to be doing impact assessments and have internal risk management programs,” she said.

Those mandates – along with other transparency requirements discussed by the panel – should look different for companies that develop AI models and those that use them, and should only apply in the most high-risk applications, panelists said.

That last suggestion is in line with legislation being discussed in the European Union, which would apply differently depending on the assessed risk of a model’s use.

“High-risk” uses of AI, according to the witnesses, are situations in which an AI model is making consequential decisions, like in healthcare, hiring processes, and driving. Less consequential machine-learning models like those powering voice assistants and autocorrect would be subject to less government scrutiny under this framework.

Labeling AI-generated content

The panel also discussed the need to label AI-generated content.

“It is unreasonable to expect consumers to spot deceptive yet realistic imagery and voices,” said Sam Gregory, director of human rights advocacy group WITNESS. “Guidance to look for a six fingered hand or spot virtual errors in a puffer jacket do not help in the long run.”

With elections in the U.S. approaching, panelists agreed mandating labels on AI-generated images and videos will be essential. They said those labels will have to be more comprehensive than visual watermarks, which can be easily removed, and might take the form of cryptographically bound metadata.
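Cryptographically bound metadata can be sketched as signing a hash of the content together with its provenance label, so verification fails if either is altered. This toy example uses an HMAC with a shared key for brevity, where a production system would use public-key signatures; all names and values here are illustrative:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # stand-in for a real signing key

def bind_label(content: bytes, metadata: dict) -> dict:
    """Attach metadata whose integrity is tied to the exact content."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, **metadata}, sort_keys=True)
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_label(content: bytes, label: dict) -> bool:
    """Reject the label if the content or the metadata was altered."""
    expected = json.loads(label["payload"])["sha256"]
    ok_tag = hmac.compare_digest(
        label["tag"],
        hmac.new(SECRET_KEY, label["payload"].encode(),
                 hashlib.sha256).hexdigest(),
    )
    return ok_tag and hashlib.sha256(content).hexdigest() == expected

image = b"...pixels..."
label = bind_label(image, {"generator": "ai-model-x", "ai_generated": True})
print(verify_label(image, label))         # True: label matches content
print(verify_label(image + b"!", label))  # False: content was changed
```

Unlike a visual watermark, the label cannot simply be cropped or blurred away: stripping it removes the provenance claim entirely, and tampering with the content or the label invalidates the signature.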

Labeling content as being AI-generated will also be important for developers, Krishnan noted, as generative AI models become much less effective when trained on writing or images made by other AIs.

Privacy around these content labels was a concern for panelists. Some protocols for verifying the origins of a piece of content with metadata require the personal information of human creators.

“This is absolutely critical,” said Gregory. “We have to start from the principle that these approaches do not oblige personal information or identity to be a part of them.”

Separately, the executive branch committee that met Tuesday was established under the National AI Initiative Act of 2020 and is tasked with advising the president on AI-related matters. The NAIAC gathers representatives from the Departments of State, Defense, Energy and Commerce, together with the Attorney General, Director of National Intelligence, and Director of Science and Technology Policy.

