Artificial Intelligence

National Security Commission on AI Votes Unanimously on Recommendations in First Public Meeting

Jericho Casper

Screenshot of Bob Work, vice chair of the NSCAI, from the webcast

July 21, 2020 — The National Security Commission on Artificial Intelligence held its first live-streamed meeting on Monday, following a recent federal court ruling that required the commission to comply with the Federal Advisory Committee Act by opening its meetings to the public and regularly publishing its records.

NSCAI Chair Eric Schmidt opened the meeting, in which commissioners were set to deliver and vote on recommendations outlined in their Quarter 2 Report, by praising transparency and thanking the public for their interest in the work being conducted.

Members of the commission, who were tasked with developing recommendations on the use of artificial intelligence in national security and defense, moved to unanimously approve all 35 recommendations in the commission’s Quarter 2 Report.

Screenshot of NSCAI members from the webcast

Members agreed that successful and responsible adoption of AI requires federal initiative in addition to technical progress.

The recommendations focused on creating systemic initiatives to increase AI research and development across the country.

The report found that government AI strategies are currently threatened by bureaucratic impediments and that “the U.S. government is not adequately leveraging basic, commercial AI to improve business practices and save taxpayer dollars.”

“Departments and agencies must modernize to become more effective and cost-efficient,” it continued.

The commissioners voted to recommend that the U.S. government “identify, prioritize, coordinate, and urgently implement national security-focused AI research and development investments.”

“AI can help the U.S. Government execute core national security missions, if we let it,” the report stated.

Commissioners further recommended advancing the Department of Defense’s and the Department of State’s internal AI research and development capabilities.

Members recommended that the Department of State incorporate AI-related topics into technology training modules in order to increase the overall digital literacy of people working in the government.

The commissioners proposed that the Department of State and Congress expedite efforts to establish the Bureau of Cyberspace Security and Emerging Technology, which would work to align national security responsibilities related to cybersecurity and emerging technologies with the department’s international security effort.

The challenge of bridging the technology talent gap in U.S. government can be solved by systematically incentivizing the development of AI and tech skills, panelists said.

It’s important for the government to “tap into the expertise of those who would like to be cyber defense agents, but for whatever reason cannot be, due to barriers,” said José-Marie Griffiths, president of Dakota State University.

The commissioners recommended the government move to create a National Reserve Digital Corps, modeled after the Reserve Officer Training Corps.

They further recommended creating a U.S. Digital Service Academy, a new academy to train future civil servants in digital skills in order to fill gaps in the current digital workforce.

The commissioners recognized that American colleges and universities are not keeping pace with undergraduate demand for education in AI and computer science generally, noting that American AI talent often depends heavily on international students and workers.

Former Federal Communications Commissioner Mignon Clyburn spoke to the importance of training and recruiting AI talent in non-discriminatory ways, saying that “talent comes in many forms and in many places — always be mindful to be inclusive.”

The commissioners recognized the importance of creating a framework that develops AI ethically and responsibly, voting unanimously to recommend that government agencies’ deployments of AI solutions always align with American democratic and institutional values.

Artificial Intelligence

Staying Ahead On Artificial Intelligence Requires International Cooperation

Benjamin Kahn

Screenshot from the webinar

March 4, 2021—Artificial intelligence is present in most facets of American digital life, but experts are in a constant race to identify and address potential dangers before they impact consumers.

From making a simple search on Google to listening to music on Spotify to streaming Tiger King on Netflix, AI is everywhere. Predictive algorithms learn from a consumer’s viewing habits and attempt to steer them toward other content the algorithm thinks they will be interested in.
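
As a rough illustration of how such predictive algorithms can work (the services named above do not disclose their actual models), the sketch below scores unwatched titles by their similarity to a viewer's history, a technique known as item-based collaborative filtering. The data and every name in it are hypothetical.

```python
import numpy as np

# Rows are viewers, columns are titles; 1 means the viewer watched the title.
watch_matrix = np.array([
    [1, 1, 0, 0],  # viewer 0 watched titles 0 and 1
    [1, 0, 1, 0],  # viewer 1 watched titles 0 and 2
    [0, 1, 1, 1],  # viewer 2 watched titles 1, 2, and 3
])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two title-history vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(viewer: int, k: int = 2) -> list[int]:
    """Rank unwatched titles by their total similarity to the viewer's history."""
    watched = set(np.flatnonzero(watch_matrix[viewer]))
    scores = {}
    for title in range(watch_matrix.shape[1]):
        if title in watched:
            continue
        scores[title] = sum(
            cosine_similarity(watch_matrix[:, title], watch_matrix[:, seen])
            for seen in watched
        )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(viewer=0))  # titles ranked by similarity to viewer 0's history
```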

While this can be extremely convenient for consumers, it also raises many concerns.

Jaisha Wray, associate administrator for international affairs at the National Telecommunications and Information Administration, was a panelist at a conference hosted Tuesday by the Federal Communications Bar Association.

Wray identified three key areas of interest that are at the forefront of AI policy: content moderation, algorithm transparency, and the establishment of common-ground policies between foreign governments.

In addition to the aforementioned uses, AI has also proven to be an indispensable tool for websites like Facebook, Alphabet’s YouTube, and myriad other social media platforms in auto-moderating their content. While most social media platforms employ humans to review various decisions made by AI (such as Facebook’s Oversight Board), most content is first handled by AI moderators.

According to Tubefilter, in 2019 more than 500 hours of video content were uploaded to YouTube every minute; at that rate, a year’s worth of content is uploaded in less than 20 minutes.
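
A minimal sketch of that "AI first, humans for the hard cases" pipeline might look like the following; the scoring function, thresholds, and labels here are hypothetical stand-ins, not any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str       # "allow", "remove", or "human_review"
    confidence: float

# A toy stand-in for a trained classifier: phrases a model might flag.
BLOCKLIST = {"extremist-slogan", "pirated-stream"}

def score(text: str) -> float:
    """Fraction of blocklisted phrases present; a real system would
    use a trained model's probability instead."""
    return sum(phrase in text for phrase in BLOCKLIST) / len(BLOCKLIST)

def moderate(text: str, remove_at: float = 0.9, review_at: float = 0.4) -> Decision:
    s = score(text)
    if s >= remove_at:                  # confident violation: remove automatically
        return Decision("remove", s)
    if s >= review_at:                  # uncertain: escalate to a human moderator
        return Decision("human_review", s)
    return Decision("allow", 1.0 - s)   # confident non-violation: publish

print(moderate("clip containing an extremist-slogan"))  # -> human_review
```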

Content moderation, algorithm transparency, foreign alignment

On this scale, AI is necessary to police the website, even if it is not a perfect system. “[AI] is like a thread that’s woven into every issue that we work on and every venue,” Wray explained. She described how both governments and private entities have looked to AI to moderate not only somewhat mundane matters such as copyright issues, but also national security concerns like violent extremist content.

Her second point pertained to algorithm transparency. She outlined how entities outside of the U.S. have sought to address this concern by providing consumers with the opportunity to have their content reviewed by humans before a final decision is made. Wray pointed to the European General Data Protection Regulation, “which enshrines the principle that every person has the right not to be subject to a decision solely based on automated processing.”
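
To make that principle concrete, here is a hypothetical sketch of how a moderation system might record that a decision was made solely by automated processing and withhold the final outcome until a human has reviewed it; the class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ContentDecision:
    item_id: str
    automated_action: str             # what the model decided on its own
    human_reviewed: bool = False
    final_action: str | None = None   # set only once a human signs off

    def request_human_review(self, reviewer_verdict: str) -> None:
        """Route the automated decision to a human before it becomes final."""
        self.human_reviewed = True
        self.final_action = reviewer_verdict

# A removal decided solely by automated processing stays non-final...
decision = ContentDecision(item_id="post-123", automated_action="remove")
assert decision.final_action is None
# ...until a human reviewer makes the binding call.
decision.request_human_review("restore")
print(decision.final_action)  # -> restore
```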

Her final point raised the issue of coordinating these efforts between different international jurisdictions—namely the U.S. and its allies. “We’re really trying to hone in on where our values align and where we can find common ground.” She added that coordination does not end with allies, however, and that it is key that the U.S. also coordinate with authoritarian regimes, allied or otherwise.

She said that the primary task facing the U.S. right now is simply trying to determine which issues are worth prioritizing when it comes to coordinating with foreign governments—whether that is addressing the spread of AI, how to police AI multilaterally, or how to address the use of AI by adversarial authoritarian regimes.

Technology needs to be built with security in mind

One of Wray’s co-panelists, Evelyn Remaley, who is the associate administrator for the NTIA’s Office of Policy Analysis and Development, said all multilateral cybersecurity efforts related to AI must be approached from a position of what she called a “zero-trust model.” She explained that this model operates from the presupposition that technology should not and cannot be trusted.

“We have to build in controls and standards from the bottom-up to make sure that we are building in the security layer by layer,” Remaley said. “It’s really that premise of ensuring that we realize that we’re always going to have vulnerabilities within this technical development space.”
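
A toy example of that layered, verify-everything premise: the sketch below authenticates each request's signature regardless of where it came from, checks the caller's authorization for the specific action, and validates the input, trusting nothing by default. The token scheme, scopes, and limits are assumptions for illustration, not any agency's real controls.

```python
import hashlib
import hmac

SHARED_KEY = b"rotate-me-regularly"  # in practice: short-lived, per-caller credentials

def sign(message: bytes) -> str:
    """HMAC-SHA256 signature over the raw request bytes."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def handle_request(payload: bytes, signature: str, caller_scopes: set[str]) -> str:
    # Layer 1: authenticate the message itself, regardless of network origin.
    if not hmac.compare_digest(sign(payload), signature):
        raise PermissionError("unauthenticated request rejected")
    # Layer 2: authorize the specific action, not the caller in general.
    if "read:reports" not in caller_scopes:
        raise PermissionError("caller lacks the scope for this action")
    # Layer 3: validate the input even after the auth checks pass.
    if len(payload) > 4096:
        raise ValueError("payload exceeds allowed size")
    return "ok"

msg = b"GET /reports/42"
print(handle_request(msg, sign(msg), {"read:reports"}))  # -> ok
```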

Remaley said that increasing competition and collaboration can only be safely achieved with a zero-trust mindset.

Artificial Intelligence

Connectivity Will Need To Keep Up With The Advent Of New Tech, Says Expert

Samuel Triginelli

Screenshot from the webinar

February 24, 2021 – It used to be that home technology had to keep up with the growing ubiquity of broadband deployment. But the pace of technological advancement in the home is starting a conversation about whether connectivity can keep up.

That’s according to Shawn DuBravac, an economist and author of a book about how big data will transform our everyday lives, who argues that the pandemic has illustrated the need for more robust connections in the home to support future technologies. He was speaking on Tuesday at the conference of NTCA – The Rural Broadband Association.

Emerging consumer technologies, such as Samsung’s robots, which will perform tasks including loading a dishwasher, serving wine, and setting a dinner table, are redefining the conversation about how home connectivity will support them, DuBravac argues.

Health companies are also introducing “companion robots” focused on interacting with seniors. With artificial intelligence and sensors, these robots develop personalities and adapt to consumers’ needs so that social distancing does not become a disadvantage for care.

The pandemic has likewise grown the telehealth industry. With more people avoiding hospitals, connected watches, belts, and scales that share information with medical professionals are further driving the need for better broadband connectivity.

But it’s not as if the industry isn’t paying attention. Mesh network technologies, which use multiple router-like devices to extend coverage inside the home, began to emerge just as smart-home technologies were illustrating the need for broader connectivity, since Wi-Fi signals degrade as they pass through walls.
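
A back-of-the-envelope sketch of why mesh helps: each interior wall attenuates a Wi-Fi signal, while a mesh node partway along the path retransmits at full strength, so each hop crosses fewer walls. All of the dB figures below are illustrative assumptions, not measurements of any real product.

```python
import math

TX_POWER_DBM = 20        # assumed router transmit power
WALL_LOSS_DB = 8         # assumed attenuation per interior wall
USABLE_FLOOR_DBM = -70   # rough threshold for a usable link

def received_dbm(distance_m: float, walls: int) -> float:
    """Simplified path-loss model plus a fixed penalty per wall."""
    path_loss = 40 + 20 * math.log10(max(distance_m, 1.0))
    return TX_POWER_DBM - path_loss - walls * WALL_LOSS_DB

# One router across the whole house versus a mesh node halfway along the path.
direct = received_dbm(distance_m=20, walls=4)
via_mesh_hop = received_dbm(distance_m=10, walls=2)

print(f"direct: {direct:.0f} dBm (usable: {direct > USABLE_FLOOR_DBM})")
print(f"via mesh hop: {via_mesh_hop:.0f} dBm (usable: {via_mesh_hop > USABLE_FLOOR_DBM})")
```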

Artificial Intelligence

AI the Most Important Change in Health Care Since Introduction of the MRI, Say Experts

Samuel Triginelli

Screenshot from the webinar

February 7, 2021 – Artificial Intelligence is the most important technological change in health care since the introduction of the MRI, experts said at a Thursday panel discussion about European tech sponsored by the Information Technology and Innovation Foundation.

AI will not be replacing doctors and nurses, but empowering decision-makers with new resources, according to those participating in the discussion on “How Can Europe Enhance the Benefits of AI-Enabled Health Care?”

For example, pharmaceutical companies are using AI for the speedy development of vaccines, panelists said. Additionally, AI is helping address the uneven ratio of skilled doctors to patients, assist health-care professionals in complex procedures, and deliver personalized health care to patients.

Yet, for AI technologies to reach their potential, European Union actors need to create regulations governing transparency, they said.

How AI works in health care

AI in health care works by training and validating algorithms on large collections of data; the resulting models can help explain certain findings and detect anomalies in patient data sets.
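
As a hedged illustration of the anomaly-detection task the panelists describe, the sketch below flags outliers in synthetic patient vitals using scikit-learn's IsolationForest; the data is fabricated for the example, and the model choice is an assumption, not what any hospital actually deploys.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic vitals: columns are resting heart rate (bpm) and temperature (°C).
normal_vitals = np.column_stack([
    rng.normal(70, 8, 500),      # heart rate around 70 bpm
    rng.normal(36.8, 0.3, 500),  # temperature around 36.8 °C
])
suspect_vitals = np.array([
    [140, 39.5],  # tachycardia with fever
    [45, 35.0],   # bradycardia with hypothermia
])

# Fit on data presumed mostly normal; the forest isolates outliers quickly.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_vitals)

# predict() returns +1 for inliers and -1 for flagged anomalies.
print(model.predict(suspect_vitals))     # likely [-1 -1]
print(model.predict(normal_vitals[:3]))  # mostly +1
```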

But algorithm creation needs to be held to higher standards than it is currently, since systemic errors can easily enter in on a large scale, said Elmar Kotter, chairperson of the eHealth and Informatics Subcommittee of the European Society of Radiology.

AI should have been used more during the early stages of the COVID-19 pandemic, said Maria Manuel Marques, a member of the Special Committee on Artificial Intelligence in a Digital Age.

AI helps treat more patients at a faster rate, and with consistency and agility, said Chris Walker, chair of the working group on digital health for the European Federation of Pharmaceutical Industries and Associations. It provides new insights and improves outcomes by allowing diseases to be treated at an early stage.

Europe faces great challenges because of public misconceptions about what AI can do, panelists said. AI is not meant to replace doctors and nurses, but to empower them with decision-making resources.

More trust would come if companies conducted safe experimentation, testing and showing examples of how AI can improve the lives of health care workers and patients, said Marques.

Regulation of data is crucial for hospitals to trust the products. Moreover, patients must retain privacy over their information. Regulations will help them understand how an AI system was built and to what uses data will be put.

Ander Elustondo Jauregui, policy officer for digital health, added that data quality is an important indicator of the maturity of an AI system, providing assurance for doctors.
