Artificial Intelligence

Advances in AI Less About Flashy Robots and More About ‘Creeping Incrementalism’

Artificial intelligence, disguised as helpful hints in web search results, is already actively shaping society.

WASHINGTON, February 2, 2022 — Experts in artificial intelligence said that the future of AI is less about flashy robots with facial expressions and more about subtle advancements that entice users to give away more time and information.

“We’re no longer in the emerging phase of AI,” said Chloe Autio, advisor and senior manager at the Cantellus Group, a boutique consultancy focused on the strategy and governance of emerging technologies like AI, in an exchange with Broadband Breakfast Editor and Publisher Drew Clark.

Autio and fellow Broadband Breakfast Live Online speaker Sarah Oh, senior fellow at the Technology Policy Institute, made clear that AI is no longer a far-off concept to be realized years from now; it is already a part of our everyday lives.

“For me, it’s less of this fear that we’ll all turn into robots or that robots will turn into us,” said Autio. Instead, she said, her concern stems from society’s growing, often unrecognized dependency on AI technologies.

These technologies vary from Alexa and Siri to apps like Instagram and Twitter. “Social media platforms have changed to optimize for engagement and participation,” said Autio.

Chloe Autio

To do this, social media platforms use AI technologies that help them learn more about users. Autio also gave the example of advanced search engines that give users responses in complete sentences rather than just a list of resources.

“People need to be more wary of these sorts of advancements through creeping incrementalism,” warned Autio.

Oh echoed these concerns in more general terms: “It’s like electricity, it’s a general technology. It empowers both negative and positive uses,” she said.

While the conversation highlighted some of the exciting potential AI holds, fears about AI’s current and future effects on society were abundantly apparent.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. You can also PARTICIPATE in the current Broadband Breakfast Live Online event. REGISTER HERE.

Wednesday, January 26, 2022, 12 Noon ET — AI’s Impact on Media, Law, Finance and Government

Artificial Intelligence is continuing to transform wide realms of our society and economy, and machine-based intelligence is just getting started. In this forward-focused session of Broadband Breakfast Live Online, we’ll speak with thinkers, innovators, and policy-makers about how journalism, law, finance and government services have been or will be transformed by AI. Join us for a world of discovery, as well as caution, about policies that need to be in place to harness the power of AI.

Panelists for this Broadband Breakfast Live Online session:

  • Chloe Autio, Advisor, The Cantellus Group
  • Dr. Sarah Oh, Senior Fellow, Technology Policy Institute
  • Other guests have been invited
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Chloe Autio is an Advisor and Senior Manager at The Cantellus Group, a boutique consultancy focused on strategy and governance of emerging technologies like AI. Chloe specializes in AI policy and applied practice, most recently as a Director of Public Policy at Intel Corp. Chloe is a founding board member of the DC chapter of Women in Security and Privacy (WISP) and holds an economics degree from UC Berkeley where she also studied technology policy.

Sarah Oh is a Senior Fellow at the Technology Policy Institute. She has presented research to the Western Economic Association and Telecommunications Policy Research Conference, witness testimony to the Senate Commerce Committee Subcommittee on Communications, Technology, Innovation, and the Internet, and has co-authored work published in the Northwestern Journal of Technology & Intellectual Property, Berkeley Technology Law Journal, and other peer-reviewed journals. Dr. Oh completed her Ph.D. in Economics from George Mason University, and holds a J.D. from Scalia Law School and a B.S. in Management Science and Engineering from Stanford University.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.


Experts Debate Artificial Intelligence Licensing Legislation

Licensing requirements would distract from wide-scale testing and limit competition, experts said at a Monday event.

Photo of B Cavello of Aspen Institute, Austin Carson of SeedAI, Aalok Mehta of OpenAI

WASHINGTON, May 23, 2023 – Experts on artificial intelligence disagree on whether licensing is the proper legislation for the technology. 

If adopted, licensing requirements would require companies to obtain a federal license prior to developing AI technology. Last week, OpenAI CEO Sam Altman testified that Congress should consider a series of licensing and testing requirements for AI models above a threshold of capability. 

At a Public Knowledge event Monday, Aalok Mehta, head of U.S. public policy at OpenAI, added that licensing is a means of ensuring that AI developers put safety practices in place. By establishing licensing rules, we are developing external validation tools that will improve the consumer experience, he said.

Generative AI — the model used by chatbots including OpenAI’s widely popular ChatGPT and Google’s Bard — is AI designed to produce content rather than simply processing information, which could have widespread effects on copyright disputes and disinformation, experts have said. Many industry experts have urged for more federal AI regulation, claiming that widespread AI applications could lead to broad societal risks including an uptick in online disinformation, technological displacement, algorithmic discrimination, and other harms. 

Some industry leaders, however, are concerned that calls for licensing are a way of shutting the door to competition and new startups by large companies like OpenAI and Google.  

B Cavello, director of emerging technologies at the Aspen Institute, said Monday that licensing requirements place burdens on competition, particularly small start-ups. 

Implementing licensing requirements can set a threshold that defines which players are allowed in the AI space and which are not, B said. Licensing can make it more difficult for smaller players to gain traction in the competitive space, B said.

Already, the resources required to support these systems create a barrier that can be really tough to break through, B continued. While there should be mandates for greater testing and transparency, licensing can also present unique challenges we should seek to avoid, B said.

Austin Carson, founder and president of SeedAI, said a licensing model would not get to the heart of the issue, which is to make sure AI developers are consciously testing and measuring their own models. 

The most important thing is to support the development of an ecosystem that revolves around assurance and testing, said Carson. Although no mechanisms currently exist for wide-scale testing, it will be critical to the support of this technology, he said. 

Base-level testing at this scale will require that all parties participate, Carson emphasized. We need all parties to feel a sense of accountability for the systems they host, he said. 

In her testimony last week, Christina Montgomery, AI ethics board chair at IBM, urged Congress to adopt a “precision regulation” approach that would govern AI in specific use cases rather than regulating the technology itself.

Senate Witnesses Call For AI Transparency

AI transparency requirements would increase federal agencies’ and companies’ accountability to the public.

Photo of Richard Eppink of the American Civil Liberties Union of Idaho Foundation

WASHINGTON, May 16, 2023 – Congress should increase regulatory requirements for transparency in artificial intelligence while adopting the technology in federal agencies, said witnesses at a Senate Homeland Security and Governmental Affairs Committee hearing on Tuesday. 

Many industry experts have urged for more federal AI regulation, claiming that widespread AI applications could lead to broad societal risks including an uptick in online disinformation, technological displacement, algorithmic discrimination, and other harms. 

The hearing addressed implementing AI in federal agencies. Congress is concerned about ensuring that the United States government is prepared to capitalize on the capabilities afforded by AI technology while also protecting the constitutional rights of citizens, said Sen. Gary Peters, D-Michigan.   

The United States “is suffering from a lack of leadership and prioritization on these topics,” stated Lynne Parker, director of AI Tennessee Initiative at the University of Tennessee in her comments. 

In a separate hearing Tuesday, OpenAI CEO Sam Altman said it is “essential that powerful AI is developed with democratic values in mind, which means US leadership is critical.”

Applications of AI are immensely beneficial, said Altman. However, “we think that regulatory intervention by governments will be crucial to mitigate the risks of increasingly powerful models.”

To do so, Altman suggested that the U.S. government consider a combination of licensing and testing requirements for the development and release of AI models above a certain threshold of capability.

Companies like OpenAI can partner with governments to ensure AI models adhere to a set of safety requirements, facilitate efficient processes, and examine opportunities for global coordination, he said.

Building accountability into AI systems

Seizing this moment to modernize the government’s systems will strengthen the country, said Daniel Ho, professor at Stanford Law School, encouraging Congress to lead by example in implementing accountable AI practices.

An accountable system ensures that agencies are answerable to the public and to those whom AI algorithms directly affect, added Richard Eppink of the American Civil Liberties Union of Idaho Foundation.

A serious risk of implementing AI is that it can conceal how the systems work, including the bad data they may be trained on, said Eppink. This can prevent accountability to the public and puts citizens’ constitutional rights at risk, he said.

To prevent this, the federal government should implement transparency requirements and governance standards that would include transparency during the implementation process, said Eppink. Citizens have the right to the same information that the government has so we can maintain accountability, he concluded.  

Parker suggested that Congress appoint a chief AI director at each agency to help develop that agency’s AI strategy, and establish an interagency Chief AI Council to govern the use of the technology in the federal government.

Getting technical talent into the workforce is the predicate to a range of issues we are facing today, agreed Ho, who said that less than two percent of AI personnel work in government. He urged Congress to establish pathways for technical agencies to attract AI talent to public service.

Congress considers AI regulation

Congress’s attention has been captured by growing AI regulatory concerns.  

In April, Sen. Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability by requiring companies to allow independent experts to review and test AI technologies and make the results publicly available.

Later in April, Representative Yvette Clarke, D-N.Y., introduced a bill that would require the disclosure of AI-generated content in political ads. 

The Biden administration announced on May 4 that it will invest $140 million to launch seven new National AI Research Institutes, bringing the total number of institutes to 25 across the country.

‘Watershed Moment’ Has Experts Calling for Increased Federal Regulation of AI

New AI developments could impact jobs that have traditionally been considered safe from technological displacement.

Screenshot of Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at the Brookings Institute event

WASHINGTON, April 28, 2023 — As artificial intelligence technologies continue to rapidly develop, many industry leaders are calling for increased federal regulation to address potential technological displacement, algorithmic discrimination and other harms — while other experts warn that such regulation could stifle innovation.

“It’s fair to say that this is a watershed moment,” said Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at a panel hosted Wednesday by the Brookings Institution. “But we have to be honest about this as well, which is to say, there will be displacement.”

While some AI displacement is comparable to previous technological advances that popularized self-checkout machines and ATMs, Townsend argued that the current moment “feels a little bit different… because of the urgency attached to it.”

Recent AI developments have the potential to impact job categories that have traditionally been considered safe from technological displacement, agreed Cameron Kerry, a distinguished visiting fellow at Brookings.

In order to best equip people for the coming changes, experts emphasized the importance of increasing public knowledge of how AI technologies work. Townsend compared this goal to the general baseline knowledge that most people have about electricity. “We’ve got to raise our level of common understanding about AI similar to the way we all know not to put a fork in the sockets,” he said.

Some potential harms of AI may be mitigated by public education, but a strong regulatory framework is critical to ensure that industry players adhere to responsible development practices, said Susan Gonzales, founder and CEO at AIandYou.

“Leaders of certain companies are coming out and they’re communicating their commitment to trustworthy and responsible AI — but then meanwhile, the week before, they decimated their ethical AI departments,” Gonzales added.

Some experts caution against overregulation in low-risk use cases

However, some experts warn that the regulations themselves could cause harm. Overly strict regulations could hamper further AI innovation and limit the benefits that have already emerged — which range from increasing workplace productivity to more effectively detecting certain types of cancer, said Daniel Castro, director of the Center for Data Innovation, at a Broadband Breakfast event on Wednesday.

“We should want to see this technology being deployed,” Castro said. “There are areas where it will likely have lifesaving impacts; it will have very positive impacts on the economy. And so part of our policy conversation should also be, not just how do we make sure things don’t go wrong, but how do we make sure things go right.”

Effective AI oversight should distinguish between the different risk levels of various AI use cases before determining the appropriate regulatory approaches, said Aaron Cooper, vice president of global policy for the software industry group BSA.

“The AI system for [configuring a] router doesn’t have the same considerations as the AI system for an employment case, or even in a self-driving vehicle,” he said.

There are already laws that govern many potential cases of AI-related harms, even if those laws do not specifically refer to AI, Cooper noted.

“We just think that in high-risk situations, there are some extra steps that the developer and the deployer of the AI system can take to help mitigate that risk and limit the possibility of it happening in the first place,” he said.

Multiple entities considering AI governance

Very little legislation currently governs the use of AI in the United States, but the issue has recently garnered significant attention from Congress, the Federal Trade Commission, the National Telecommunications and Information Administration and other federal entities.

The National Artificial Intelligence Advisory Committee on Tuesday released a draft report detailing recommendations based on its first year of research, concluding that AI “requires immediate, significant and sustained government attention.”

One of the report’s most important action items is increasing sociotechnical research on AI systems and their impacts, said EqualAI CEO Miriam Vogel, who chairs the committee.

Throughout the AI development process, Vogel explained, each human touchpoint presents the risk of incorporating the developer’s biases — as well as a crucial opportunity for identifying and fixing these issues before they become embedded.

Vogel also countered the idea that regulation would necessarily stifle future AI development.

“If we don’t have more people participating in the process, with a broad array of perspectives, our AI will suffer,” she said. “There are study after study that show that the broader diversity in who is… building your AI, the better your AI system will be.”

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, April 26, 2023, 12 Noon ET – Should AI Be Regulated?

The recent explosion in artificial intelligence has generated significant excitement, but it has also amplified concerns about how the powerful technology should be regulated — and highlighted the lack of safeguards currently in place. What are the potential risks associated with artificial intelligence deployment? Which concerns are likely just fearmongering? And what are the respective roles of government and industry players in determining future regulatory structures?

Panelists

  • Daniel Castro, Vice President, Information Technology and Innovation Foundation and Director, Center for Data Innovation
  • Aaron Cooper, Vice President of Global Policy, BSA | The Software Alliance
  • Rebecca Klar (moderator), Technology Policy Reporter, The Hill


Daniel Castro is vice president at the Information Technology and Innovation Foundation and director of ITIF’s Center for Data Innovation. Castro writes and speaks on a variety of issues related to information technology and internet policy, including privacy, security, intellectual property, Internet governance, e-government and accessibility for people with disabilities. In 2013, Castro was named to FedScoop’s list of the “top 25 most influential people under 40 in government and tech.”

Aaron Cooper serves as vice president of Global Policy for BSA | The Software Alliance. In this role, Cooper leads BSA’s global policy team and contributes to the advancement of BSA members’ policy priorities around the world that affect the development of emerging technologies, including data privacy, cybersecurity, AI regulation, data flows and digital trade. He testifies before Congress and is a frequent speaker on data governance and other issues important to the software industry.

Rebecca Klar is a technology policy reporter at The Hill, covering data privacy, antitrust law, online disinformation and other issues facing the evolving tech world. She is a native New Yorker and graduated from Binghamton University. She previously covered local news at The York Dispatch in York, Pa. and The Island Now in Nassau County, N.Y.

Graphic from Free-Vectors.Net used with permission

WATCH HERE, or on YouTube, Twitter and Facebook.

