Artificial Intelligence

Artificial Intelligence Aims to Enhance Human Capabilities, But Only With Caution and Safeguards

Image by Sujin Soman used with permission

January 14, 2021 – Artificial intelligence will continue to gradually revolutionize productivity and functionality as long as policy concerns are addressed, said three separate panels of experts on Tuesday at CES 2021.

Panelists said AI technologies will contribute $16 trillion to the economy by 2030. Better machines, better software, and an explosion of applications affecting everyday lives will continue. AI has enhanced productivity, improved safety, and made the world more accessible.

Because AI enhances rather than replaces human work, it relies on enormous sets of curated data that must be handled under established guidelines and with transparency.

IBM Vice President Bridget Karlin raised concerns about ethics and about avoiding built-in bias. Whatever a company’s purpose for using AI, models can be ethical when they are engineered to be fair and properly calibrated, she said.

It is incumbent upon software developers to identify the requirements for ethical and non-biased data collection, she said.

What about AI’s involvement in creating and spreading fake news? Kevin Guo, CEO of Hive, said detecting such content is a service still in development.

It is essential for AI engineers to research and implement data fairness and remove bias.

Impacts of AI on health care

On health care, participants in a separate panel said AI leads to improved outcomes and lower costs. But to Christina Silcox, a digital health policy fellow, the question is: How can people trust something that can’t be seen or understood?

Understanding how technologies are created helps, said Jesse Ehrenfeld of the American Medical Association’s Board of Trustees. But he acknowledged that all data is in some way biased, and said he could not count the number of times data flows have produced different meanings than expected.

Silcox said it was critical to understand how software will perform over time after being put into place.

Indeed, communication and transparency are key to trust in and growth of AI, said Senior Regulatory Specialist Pat Baird. Depending on who the stakeholders are, that communication will need to be customized.

Trustworthy AI will re-humanize health care, letting computers do what they were built to do and allowing health care workers to work with people, he said.

What about gender and racial bias?

Another panel at CES 2021 discussed gender and racial bias in a business setting – and how AI can help show and reflect equal representation.

Annie Jean-Baptiste of Google said that inclusion fuels innovation. Kimberly Sterling of ResMed said that people are not going anywhere, and that AI will not replace people’s brains.

All three panel discussions pondered the future of AI. Most panelists agreed that AI exists to supplement human ambition, enabling everyday businesses to become smarter, adjust to new inputs, and perform human-like tasks.

To break open the “black box” of AI, panelists proposed assembling richer data sets, understanding the sources of the data, going back to test the models, and looking holistically at outcomes.

Reporter Samuel Triginelli was born in Brazil and grew up speaking Portuguese and English, and later learned French and Spanish. He studied communications at Brigham Young University, where he also worked as a product administrator and UX/UI designer. He wants a world with better internet access for all.

Artificial Intelligence

Experts Debate Artificial Intelligence Licensing Legislation

Licensing requirements would distract from wide-scale testing and limit competition, an event heard.

Photo of B Cavello of Aspen Institute, Austin Carson of SeedAI, Aalok Mehta of OpenAI

WASHINGTON, May 23, 2023 – Experts on artificial intelligence disagree on whether licensing is the right legislative approach for the technology.

If adopted, licensing rules would require companies to obtain a federal license before developing AI technology. Last week, OpenAI CEO Sam Altman testified that Congress should consider a series of licensing and testing requirements for AI models above a certain threshold of capability.

At a Public Knowledge event Monday, Aalok Mehta, head of US public policy at OpenAI, added that licensing is a means of ensuring that AI developers put safety practices in place. By establishing licensing rules, we are developing external validation tools that will improve the consumer experience, he said.

Generative AI — the model underlying chatbots including OpenAI’s widely popular ChatGPT and Google’s Bard — is AI designed to produce content rather than simply process information, which could have widespread effects on copyright disputes and disinformation, experts have said. Many industry experts have called for more federal AI regulation, claiming that widespread AI applications could lead to broad societal risks, including an uptick in online disinformation, technological displacement, algorithmic discrimination, and other harms.

Some industry leaders, however, are concerned that calls for licensing are a way for large companies like OpenAI and Google to shut the door on competition and new startups.

B Cavello, director of emerging technologies at the Aspen Institute, said Monday that licensing requirements place burdens on competition, particularly on small start-ups.

Implementing licensing requirements can create a threshold that defines which players are allowed in the AI space and which are not, B said. Licensing can make it more difficult for smaller players to gain traction in the competitive space.

Already, the resources required to support these systems create a barrier that can be really tough to break through, B continued. While there should be mandates for greater testing and transparency, licensing can also present unique challenges we should seek to avoid, B said.

Austin Carson, founder and president of SeedAI, said a licensing model would not get to the heart of the issue, which is to make sure AI developers are consciously testing and measuring their own models. 

The most important thing is to support the development of an ecosystem that revolves around assurance and testing, said Carson. Although no mechanisms currently exist for wide-scale testing, it will be critical to the support of this technology, he said. 

Base-level testing at this scale will require that all parties participate, Carson emphasized. We need all parties to feel a sense of accountability for the systems they host, he said. 

In her testimony last week, Christina Montgomery, AI ethics board chair at IBM, urged Congress to adopt a precision regulation approach that would govern AI in specific use cases rather than regulating the technology itself.

Artificial Intelligence

Senate Witnesses Call For AI Transparency

Regulatory transparency requirements for AI would increase federal agency and company accountability to the public.

Photo of Richard Eppink of the American Civil Liberties Union of Idaho Foundation

WASHINGTON, May 16, 2023 – Congress should increase regulatory requirements for transparency in artificial intelligence while adopting the technology in federal agencies, said witnesses at a Senate Homeland Security and Governmental Affairs Committee hearing on Tuesday. 

Many industry experts have called for more federal AI regulation, claiming that widespread AI applications could lead to broad societal risks, including an uptick in online disinformation, technological displacement, algorithmic discrimination, and other harms.

The hearing addressed implementing AI in federal agencies. Congress is concerned about ensuring that the United States government is prepared to capitalize on the capabilities afforded by AI technology while also protecting the constitutional rights of citizens, said Sen. Gary Peters, D-Michigan.   

The United States “is suffering from a lack of leadership and prioritization on these topics,” stated Lynne Parker, director of AI Tennessee Initiative at the University of Tennessee in her comments. 

In a separate hearing Tuesday, OpenAI CEO Sam Altman said that it is “essential that powerful AI is developed with democratic values in mind, which means US leadership is critical.”

Applications of AI are immensely beneficial, said Altman. However, “we think that regulatory intervention by governments will be crucial to mitigate the risks of increasingly powerful models.”

To do so, Altman suggested that the U.S. government consider a combination of licensing and testing requirements for the development and release of AI models above a certain threshold of capability.

Companies like OpenAI can partner with governments to ensure AI models adhere to a set of safety requirements, facilitate efficient processes, and examine opportunities for global coordination, he said.

Building accountability into AI systems

Seizing this moment to modernize the government’s systems will strengthen the country, said Daniel Ho, professor at Stanford Law School, encouraging Congress to lead by example and implement accountable AI practices.

An accountable system ensures that agencies are responsible for reporting to the public and to those whom AI algorithms directly affect, added Richard Eppink of the American Civil Liberties Union of Idaho Foundation.

A serious risk of implementing AI is that it can conceal how the systems work, including the bad data they may have been trained on, said Eppink. This can prevent accountability to the public and puts citizens’ constitutional rights at risk, he said.

To prevent this, the federal government should implement transparency requirements and governance standards that would include transparency during the implementation process, said Eppink. Citizens have the right to the same information that the government has so we can maintain accountability, he concluded.  

Parker suggested that Congress appoint a Chief AI Director at each agency who would help develop AI strategies, and establish an interagency Chief AI Council to govern the use of the technology in the federal government.

Getting technical talent into the workforce is the predicate to addressing a range of issues we are facing today, agreed Ho, noting that less than two percent of AI personnel work in government. He urged Congress to establish pathways and trajectories for technical agencies to attract AI talent to public service.

Congress considers AI regulation

Congress’s attention has been captured by growing AI regulatory concerns.  

In April, Senator Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability by requiring companies to allow independent experts to review and test AI technologies and to make the results publicly available.

Later in April, Representative Yvette Clarke, D-N.Y., introduced a bill that would require the disclosure of AI-generated content in political ads. 

The Biden administration announced on May 4 that it will invest $140 million to launch seven new National AI Research Institutes, an investment that will bring the total number of institutes to 25 across the country.

Artificial Intelligence

‘Watershed Moment’ Has Experts Calling for Increased Federal Regulation of AI

New AI developments could impact jobs that have traditionally been considered safe from technological displacement.

Screenshot of Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at the Brookings Institution event

WASHINGTON, April 28, 2023 — As artificial intelligence technologies continue to rapidly develop, many industry leaders are calling for increased federal regulation to address potential technological displacement, algorithmic discrimination and other harms — while other experts warn that such regulation could stifle innovation.

“It’s fair to say that this is a watershed moment,” said Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at a panel hosted Wednesday by the Brookings Institution. “But we have to be honest about this as well, which is to say, there will be displacement.”

While some AI displacement is comparable to previous technological advances that popularized self-checkout machines and ATMs, Townsend argued that the current moment “feels a little bit different… because of the urgency attached to it.”

Recent AI developments have the potential to impact job categories that have traditionally been considered safe from technological displacement, agreed Cameron Kerry, a distinguished visiting fellow at Brookings.

In order to best equip people for the coming changes, experts emphasized the importance of increasing public knowledge of how AI technologies work. Townsend compared this goal to the general baseline knowledge that most people have about electricity. “We’ve got to raise our level of common understanding about AI similar to the way we all know not to put a fork in the sockets,” he said.

Some potential harms of AI may be mitigated by public education, but a strong regulatory framework is critical to ensure that industry players adhere to responsible development practices, said Susan Gonzales, founder and CEO at AIandYou.

“Leaders of certain companies are coming out and they’re communicating their commitment to trustworthy and responsible AI — but then meanwhile, the week before, they decimated their ethical AI departments,” Gonzales added.

Some experts caution against overregulation in low-risk use cases

However, some experts warn that the regulations themselves could cause harm. Overly strict regulations could hamper further AI innovation and limit the benefits that have already emerged — which range from increasing workplace productivity to more effectively detecting certain types of cancer, said Daniel Castro, director of the Center for Data Innovation, at a Broadband Breakfast event on Wednesday.

“We should want to see this technology being deployed,” Castro said. “There are areas where it will likely have lifesaving impacts; it will have very positive impacts on the economy. And so part of our policy conversation should also be, not just how do we make sure things don’t go wrong, but how do we make sure things go right.”

Effective AI oversight should distinguish between the different risk levels of various AI use cases before determining the appropriate regulatory approaches, said Aaron Cooper, vice president of global policy for the software industry group BSA.

“The AI system for [configuring a] router doesn’t have the same considerations as the AI system for an employment case, or even in a self-driving vehicle,” he said.

There are already laws that govern many potential cases of AI-related harms, even if those laws do not specifically refer to AI, Cooper noted.

“We just think that in high-risk situations, there are some extra steps that the developer and the deployer of the AI system can take to help mitigate that risk and limit the possibility of it happening in the first place,” he said.

Multiple entities considering AI governance

Very little legislation currently governs the use of AI in the United States, but the issue has recently garnered significant attention from Congress, the Federal Trade Commission, the National Telecommunications and Information Administration and other federal entities.

The National Artificial Intelligence Advisory Committee on Tuesday released a draft report detailing recommendations based on its first year of research, concluding that AI “requires immediate, significant and sustained government attention.”

One of the report’s most important action items is increasing sociotechnical research on AI systems and their impacts, said EqualAI CEO Miriam Vogel, who chairs the committee.

Throughout the AI development process, Vogel explained, each human touchpoint presents the risk of incorporating the developer’s biases — as well as a crucial opportunity for identifying and fixing these issues before they become embedded.

Vogel also countered the idea that regulation would necessarily stifle future AI development.

“If we don’t have more people participating in the process, with a broad array of perspectives, our AI will suffer,” she said. “There are study after study that show that the broader diversity in who is… building your AI, the better your AI system will be.”

Wednesday, April 26, 2023, 12 Noon ET – Should AI Be Regulated?

The recent explosion in artificial intelligence has generated significant excitement, but it has also amplified concerns about how the powerful technology should be regulated — and highlighted the lack of safeguards currently in place. What are the potential risks associated with artificial intelligence deployment? Which concerns are likely just fearmongering? And what are the respective roles of government and industry players in determining future regulatory structures?

Panelists

  • Daniel Castro, Vice President, Information Technology and Innovation Foundation and Director, Center for Data Innovation
  • Aaron Cooper, Vice President of Global Policy, BSA | The Software Alliance
  • Rebecca Klar (moderator), Technology Policy Reporter, The Hill

Panelist resources

 

Daniel Castro is vice president at the Information Technology and Innovation Foundation and director of ITIF’s Center for Data Innovation. Castro writes and speaks on a variety of issues related to information technology and internet policy, including privacy, security, intellectual property, Internet governance, e-government and accessibility for people with disabilities. In 2013, Castro was named to FedScoop’s list of the “top 25 most influential people under 40 in government and tech.”

Aaron Cooper serves as vice president of Global Policy for BSA | The Software Alliance. In this role, Cooper leads BSA’s global policy team and contributes to the advancement of BSA members’ policy priorities around the world that affect the development of emerging technologies, including data privacy, cybersecurity, AI regulation, data flows and digital trade. He testifies before Congress and is a frequent speaker on data governance and other issues important to the software industry.

Rebecca Klar is a technology policy reporter at The Hill, covering data privacy, antitrust law, online disinformation and other issues facing the evolving tech world. She is a native New Yorker and graduated from Binghamton University. She previously covered local news at The York Dispatch in York, Pa. and The Island Now in Nassau County, N.Y.

Graphic from Free-Vectors.Net used with permission
