Henry Kissinger: AI Will Prompt Consideration of What It Means to Be Human

An event with the former secretary of state explores our current lack of knowledge about how to responsibly harness AI’s power.

Former Secretary of State Henry Kissinger

WASHINGTON, December 24, 2021 – Former Secretary of State Henry Kissinger says that further use of artificial intelligence will call into question what it means to be human, and that the technology cannot solve all those problems humans fail to address on their own.

Kissinger spoke Monday at a Council on Foreign Relations event highlighting his new book, “The Age of AI: And Our Human Future,” alongside co-author and former Google CEO Eric Schmidt, in a conversation moderated by PBS NewsHour anchor Judy Woodruff.

Throughout the event, Schmidt remarked on the questions about AI that remain unanswered despite the technology’s common use.

He emphasized that these computer systems may be able to solve complex problems, such as questions in physics about dark matter or dark energy, but that the humans who built the technology may not be able to determine exactly how the computer solved them.

Pointing to this potential for dangerous use of the technology, he said that AI development, though sometimes a force for good, “plays” with human lives.

He pointed out that, to deal with this great technological power, almost every country has now created a governmental body to oversee the ethics of AI development.

Schmidt said that Western values must be the dominant values in AI platforms that influence everyday life, particularly those with key implications for democracy.

With all the consideration of how to make AI both effective and beneficial, Kissinger noted how much human thinking must go into managing the “thinking” these machines do, adding that “a mere technological edge is not in itself decisive” when competing with technologically formidable adversaries such as China.

Reporter T.J. York received his degree in political science from the University of Southern California. He has experience working for elected officials and in campaign research. He is interested in the effects of politics on the tech sector.

Congress Should Mandate AI Guidelines for Transparency and Labeling, Say Witnesses

Transparency around data collection and risk assessments should be mandated by law, especially in high-risk applications of AI.

Screenshot of the Business Software Alliance's Victoria Espinel at the Commerce subcommittee hearing

WASHINGTON, September 12, 2023 – The United States should enact legislation mandating transparency from companies making and using artificial intelligence models, experts told the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security on Tuesday.

It was one of two AI policy hearings on the Hill Tuesday, the other held by the Senate Judiciary Committee; the National AI Advisory Committee, an executive branch body, also met the same day.

The Senate Commerce subcommittee asked witnesses how AI-specific regulations should be implemented and what lawmakers should keep in mind when drafting potential legislation. 

“The unwillingness of leading vendors to disclose the attributes and provenance of the data they’ve used to train models needs to be urgently addressed,” said Ramayya Krishnan, dean of Carnegie Mellon University’s college of information systems and public policy.

Addressing problems with transparency of AI systems

Addressing the lack of transparency might look like standardized documentation outlining data sources and bias assessments, Krishnan said. That documentation could be verified by auditors and function “like a nutrition label” for users.
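
To make the idea concrete, here is a minimal sketch in Python of what such machine-readable documentation might look like. The field names and values are illustrative assumptions, not a published standard or Krishnan’s actual proposal.

```python
# A rough sketch of a machine-readable "nutrition label" for an AI model:
# standardized documentation of data sources and bias assessments that
# auditors could verify. All field names here are hypothetical.

import json

model_label = {
    "model_name": "example-classifier-v1",   # hypothetical model
    "developer": "Example Corp",             # hypothetical vendor
    "training_data": [
        {"source": "public web crawl", "collected": "2022",
         "license": "mixed", "contains_personal_data": True},
        {"source": "licensed news archive", "collected": "2021",
         "license": "commercial", "contains_personal_data": False},
    ],
    "bias_assessments": [
        {"test": "demographic parity on a hiring benchmark",
         "result": "passed", "audited_by": "independent third party"},
    ],
    "intended_uses": ["document triage"],
    "out_of_scope_uses": ["employment decisions without human review"],
}

# Published alongside the model, this document could be checked by
# auditors and read by users, much like a nutrition label.
print(json.dumps(model_label, indent=2))
```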

Witnesses from both private industry and human rights advocacy groups agreed that legally binding guidelines – both for transparency and risk management – will be necessary.

Victoria Espinel, CEO of the Business Software Alliance, a trade group representing software companies, said the AI risk management framework developed in March by the National Institute of Standards and Technology was important, “but we do not think it is sufficient.”

“We think it would be best if legislation required companies in high-risk situations to be doing impact assessments and have internal risk management programs,” she said.

Those mandates – along with other transparency requirements discussed by the panel – should look different for companies that develop AI models and those that use them, and should only apply in the most high-risk applications, panelists said.

That last suggestion is in line with legislation being discussed in the European Union, which would apply differently depending on the assessed risk of a model’s use.

“High-risk” uses of AI, according to the witnesses, are situations in which an AI model is making consequential decisions, like in healthcare, hiring processes, and driving. Less consequential machine-learning models like those powering voice assistants and autocorrect would be subject to less government scrutiny under this framework.

Labeling AI-generated content

The panel also discussed the need to label AI-generated content.

“It is unreasonable to expect consumers to spot deceptive yet realistic imagery and voices,” said Sam Gregory, director of human rights advocacy group WITNESS. “Guidance to look for a six-fingered hand or spot virtual errors in a puffer jacket do not help in the long run.”

With elections in the U.S. approaching, panelists agreed mandating labels on AI-generated images and videos will be essential. They said those labels will have to be more comprehensive than visual watermarks, which can be easily removed, and might take the form of cryptographically bound metadata.
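
As a rough illustration of what “cryptographically bound metadata” means in practice, the Python sketch below signs a provenance claim that includes a hash of the image bytes, so removing or editing the label becomes detectable. This shows only the underlying principle, not an implementation of any specific standard such as C2PA; the generator name is hypothetical, and it assumes the third-party cryptography package is installed.

```python
# Minimal sketch: bind an "AI-generated" label to a file with a digital
# signature over the label plus the file's hash. Illustrative only.

import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()  # held by the generating tool
verify_key = signing_key.public_key()       # published for verifiers

image_bytes = b"...stand-in for AI-generated image data..."

# The claim includes the content hash, binding the label to these exact bytes.
claim = json.dumps({
    "generator": "example-model",  # hypothetical tool name
    "ai_generated": True,
    "sha256": hashlib.sha256(image_bytes).hexdigest(),
}).encode()

signature = signing_key.sign(claim)

# A verifier checks the signature and that the hash matches the file.
try:
    verify_key.verify(signature, claim)
    intact = json.loads(claim)["sha256"] == hashlib.sha256(image_bytes).hexdigest()
    print("label verified and bound to content:", intact)
except InvalidSignature:
    print("label or content has been tampered with")
```

Note that nothing in the signed claim identifies a human creator, consistent with the privacy concern Gregory raises below.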

Labeling content as being AI-generated will also be important for developers, Krishnan noted, as generative AI models become much less effective when trained on writing or images made by other AIs.

Privacy around these content labels was a concern for panelists. Some protocols for verifying the origins of a piece of content with metadata require the personal information of human creators.

“This is absolutely critical,” said Gregory. “We have to start from the principle that these approaches do not oblige personal information or identity to be a part of them.”

Separately, the executive branch committee that met Tuesday, the National AI Advisory Committee, was established under the National AI Initiative Act of 2020 and is tasked with advising the president on AI-related matters. The NAIAC gathers representatives from the Departments of State, Defense, Energy and Commerce, together with the Attorney General, Director of National Intelligence, and Director of Science and Technology Policy.

Tech Policy Group CCIA Speaks Out Against AI Regulation

The trade group represents major tech companies like Amazon and Google.

WASHINGTON, September 12, 2023 – A policy director at the Computer and Communications Industry Association spoke out on Tuesday against impending artificial intelligence regulations in the European Union and United States.

The CCIA represents some of the biggest tech companies in the world, with members including Amazon, Google, Meta, and Apple.

“The E.U. approach will focus very much on the technology itself, rather than the use of it, which is highly problematic,” said Boniface de Champris, CCIA’s Europe policy manager, at a panel hosted by the Cato Institute. “The requirements would basically inhibit the development and use of cutting edge technology in the E.U.”

This echoes de Champris’s American counterparts, who have argued before Congress that AI-specific laws would stifle innovation.

The European Parliament is aiming to reach an agreement by the end of the year on the AI Act, which would put regulations on all AI systems based on their assessed risk level. 

The E.U.’s Digital Services Act, legislation that tightens privacy rules and expands transparency requirements, also took effect in August. Under the law, users can opt to turn off artificial intelligence-enabled content recommendation.

U.S. President Joe Biden announced in July that seven major AI and tech companies – including CCIA members Amazon, Meta, and Google – made voluntary commitments to various AI safeguards, including information sharing and security testing.

Multiple U.S. agencies are exploring more binding AI regulation. Both the Senate Judiciary Committee and the Senate consumer protection subcommittee held hearings on potential AI policy later on Tuesday, with the Judiciary hearing including testimony from Microsoft President Brad Smith and William Dally, chief scientist at AI and graphics company NVIDIA.

In July, the House Energy and Commerce Committee passed the Artificial Intelligence Accountability Act, which directs the National Telecommunications and Information Administration to study accountability measures for artificial intelligence systems used by telecom companies.

Rep. Suzan DelBene: Want Protection From AI? The First Step Is a National Privacy Law

A national privacy standard would ensure a baseline set of protections and would restrict companies from storing and selling personal data.

The author of this Expert Opinion is Suzan DelBene, U.S. Representative from Washington.

In the six months since a new chatbot confessed its love for a reporter before taking a darker turn, the world has woken up to how artificial intelligence can dramatically change our lives and how it can go awry. AI is quickly being integrated into nearly every aspect of our economy and daily lives. However, in our nation’s capital, laws aren’t keeping up with the rapid evolution of technology.

Policymakers have many decisions to make around artificial intelligence, such as how it can be used in sensitive areas like financial markets, health care, and national security. They will need to decide intellectual property rights around AI-created content. There will also need to be guardrails to prevent the dissemination of mis- and disinformation. But before we build the second and third stories of this regulatory house, we need to lay a strong foundation, and that must center on a national data privacy standard.

To understand this bedrock need, it’s important to look at how artificial intelligence was developed. AI needs an immense quantity of data. The generative language tool ChatGPT was trained on 45 terabytes of data, or the equivalent of over 200 days’ worth of HD video. That information may have included our posts on social media and online forums that have likely taught ChatGPT how we write and communicate with each other. That’s because this data is largely unprotected and widely available to third-party companies willing to pay for it. AI developers do not need to disclose where they get their input data from because the U.S. has no national privacy law.
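
For readers who want to sanity-check that comparison, here is a rough back-of-the-envelope calculation in Python. The roughly 20 megabits-per-second figure is an assumption for high-quality HD video; actual bitrates vary widely.

```python
# Rough check: how many days of HD video is 45 terabytes?
TERABYTE = 1e12            # bytes
bitrate_bps = 20e6         # assumed high-quality HD bitrate: 20 Mbps
bytes_per_day = bitrate_bps / 8 * 86_400   # seconds in a day

days = 45 * TERABYTE / bytes_per_day
print(f"about {days:.0f} days")  # ~208 days, in line with "over 200 days"
```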

While data studies have existed for centuries and can have major benefits, they are often centered around consent to use that information. Medical studies often use patient health data and outcomes, but that information needs the approval of the study participants in most cases. That’s because in the 1990s Congress gave health information a basic level of protection, but that law only protects data shared between patients and their health care providers. The same is not true for other health platforms, like fitness apps, or most other data we generate today, including our conversations online and geolocation information.

Currently, the companies that collect our data are in control of it. Google for years scanned Gmail inboxes to sell users targeted ads, before abandoning the practice. Zoom recently had to update its data collection policy after it was accused of using customers’ audio and video to train its AI products. We’ve all downloaded an app on our phone and immediately accepted the terms and conditions window without actually reading it. Companies can and often do change the terms regarding how much of our information they collect and how they use it. A national privacy standard would ensure a baseline set of protections no matter where someone lives in the U.S. and restrict companies from storing and selling our personal data.

Ensuring there’s transparency and accountability in what data goes into AI is also important for a quality and responsible product. If input data is biased, we’re going to get a biased outcome, or, better put, ‘garbage in, garbage out.’ Facial recognition is one application of artificial intelligence. These systems have largely been trained by, and with data from, white people. That has led to clear biases when communities of color interact with this technology.

The United States must be a global leader on artificial intelligence policy, but other countries are not waiting as we sit still. The European Union has moved faster on AI regulations because its privacy law took effect in 2018. The Chinese government has also moved quickly on AI, but in an alarmingly anti-democratic way. If we want a seat at the international table to set the long-term direction for AI that reflects our core American values, we must have our own national data privacy law to start.

The Biden administration has taken some encouraging steps to begin putting guardrails around AI but it is constrained by Congress’ inaction. The White House recently announced voluntary artificial intelligence standards, which include a section on data privacy. Voluntary guidelines don’t come with accountability and the federal government can only enforce the rules on the books, which are woefully outdated.

That’s why Congress needs to step up and set the rules of the road. A strong national privacy standard must be uniform throughout the country, rather than the state-by-state approach we have now. It has to put people back in control of their information instead of companies. It must also be enforceable so that the government can hold bad actors accountable. These are the components of the legislation I have introduced over the past few Congresses and of the bipartisan proposal the Energy & Commerce Committee advanced last year.

As with all things in Congress, it comes down to a matter of priorities. With artificial intelligence expanding so fast, we can no longer wait to take up this issue. We were behind on technology policy already, but we fall further behind as other countries take the lead. We must act quickly and set a robust foundation. That has to include a strong, enforceable national privacy standard.

Congresswoman Suzan K. DelBene represents Washington’s 1st District in the United States House of Representatives. This piece was originally published in Newsweek, and is reprinted with permission. 

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

