WASHINGTON, May 30, 2019 – The development of artificial intelligence will bring extreme changes to the future of warfare, a panel of scientists said Thursday, calling the impact of current advances analogous to the development of agriculture or the domestication of the horse.
The panel was hosted by the Hudson Institute, a conservative think tank founded by military and industrial strategist Herman Kahn. Speakers on the panel discussed the ways in which the Department of Defense can implement new technologies, as well as the problems that could arise as a result.
One common concern about AI in military decision-making was the potential for faster escalation in the use of force. During the Cuban Missile Crisis, for example, an AI system might have recommended acting sooner, possibly with catastrophic results.
But Navy AI Lead Colonel Jeff Kojac argued that the opposite could also be true: A young platoon commander in a high-pressure situation could use an unmanned aerial system to help determine that a group is non-combative and hold fire.
Additionally, Lindsey R. Sheppard, associate fellow at the Center for Strategic and International Studies, pushed back on this fear, noting that a significant body of cognitive psychology research shows that more information does not necessarily lead to a faster decision.
Hudson Senior Fellow William Schneider Jr. also thought that the potential benefits outweighed the risks, pointing out that AI gives the military the opportunity to head off a crisis before it occurs.
In regard to 5G networks, Schneider claimed that they present a "substantial" risk because of what can be integrated into the technology. He cited a recent Human Rights Watch report describing a mass surveillance app engaged in an "intrusive, massive collection of personal information." A large inventory of data-based services, he said, presents a wide range of potential points of breach.
The panelists also discussed how to mitigate the consequences of AI's current limitations and vulnerabilities. Sheppard emphasized the importance of pushing computation as far out on the network's edge as possible.
For example, Apple's facial recognition technology used to send the captured image to a central server, compare it to a stored image, and send back the result; the entire process now runs on the device itself, freeing important server capacity. The same model could be applied to cloud architecture in military settings.
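A minimal Python sketch of that edge-first pattern, with the device deciding locally and never shipping data to a server; the function names and the hash-based matcher below are illustrative stand-ins, not Apple's actual implementation:

```python
import hashlib

def embed(image_bytes: bytes) -> int:
    """Stand-in for an on-device feature extractor; a real system
    would run a neural network here, not a hash."""
    return int.from_bytes(hashlib.sha256(image_bytes).digest()[:8], "big")

def authenticate_on_device(captured: bytes, enrolled: bytes) -> str:
    # All data stays local: no image ever leaves the device.
    if embed(captured) == embed(enrolled):
        return "unlocked (decided at the edge)"
    # A legacy design would now upload the image to a central server
    # and wait for the comparison result, adding latency and exposure.
    return "denied (still no server round-trip)"

if __name__ == "__main__":
    print(authenticate_on_device(b"example-capture", b"example-capture"))
    print(authenticate_on_device(b"example-capture", b"someone-else"))
```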
Dr. Alexander Kott, chief scientist for the Army Research Laboratory, described the need for a complex mix of decentralized clouds at the edge, making them more resilient to attack. Col. Kojac pointed out that an additional component of resilience is agility, recommending an incremental approach to developing these technologies over the more traditional “waterfall” approach.
The technology will not be the only thing that requires agility; the people operating it will also need to be flexible to make the rise of AI feasible, a barrier highlighted by several audience members as well. Kojac called an AI-literate force a "categorical imperative," and Sheppard supported the idea, suggesting that all forces involved in deploying these technologies should be required to know how to program.
This should be made easier because the workforce now entering the military is fundamentally different from what it was a decade ago. Troops serve for longer periods and face higher education requirements. Many also come from a more technologically rich background, leading Schneider to call them "digital natives." He said AI ultimately provides a "basis for optimism" for its potential to save lives on the front lines.
On a civilian level, Sheppard also highlighted the need for top-to-bottom recognition of the importance of analytics within company cultures.
(Photo of panelists at the Hudson Institute event by Drew Clark.)
Int’l Ethical Framework for Auto Drones Needed Before Widescale Implementation
Observers say the risks inherent in letting autonomous drones roam require an ethical framework.
July 19, 2021 — Autonomous drones could potentially serve as a replacement for military dogs in future warfare, said GeoTech Center Director David Bray during a panel discussion hosted by the Atlantic Council last month, but ethical concerns have observers clamoring for a framework for their use.
Military dogs, trained to assist soldiers on the battlefield, are currently a great asset to the military. AI-enabled autonomous systems, such as drones, are developing capabilities that would allow them to assist in the same way — for example, inspecting inaccessible areas and detecting fires and leaks early to minimize the chance of on-the-job injuries.
However, concerns have been raised about these systems' potential to impact human lives, including the recent issue of an autonomous drone possibly hunting down humans in asymmetric warfare and anti-terrorist operations.
As artificial intelligence continues to develop at a rapid rate, society must determine what, if any, limitations should be implemented on a global scale. “If nobody starts raising the questions now, then it’s something that will be a missed opportunity,” Bray said.
Sally Grant, vice president at Lucd AI, agreed with Bray's concerns, pointing out the controversies surrounding the uncharted territory of autonomous drones. Panelists proposed the possibility of an international limitation agreement with regard to AI-enabled autonomous systems that can exercise lethal force.
Timothy Clement-Jones, who was a member of the U.K. Parliament’s committee on artificial intelligence, called for international ethical guidelines, saying, “I want to see a development of an ethical risk-based approach to AI development and application.”
Many panelists emphasized the immense risk involved if this technology falls into the wrong hands, offering examples ranging from terrorist groups to the paparazzi and the power either could wield with that much access.
Training is vital, Grant said, and soldiers need to feel comfortable with this machinery without becoming over-reliant on it. The idea behind bringing AI-enabled autonomous systems into missions, including during natural disasters, is that soldiers can use them as guidance to make the most informed decisions.
"AI needs to be our servant, not our master," Clement-Jones agreed, emphasizing that soldiers should use it as a tool to help them, not as guidance to follow blindly. He compared AI technology to phone navigation, pointing to the importance of keeping a map in the glove compartment in case the technology fails.
The panelists emphasized the importance of remaining transparent and developing an international agreement with an ethical risk-based approach to AI development and application in these technologies, especially if they might enter the battlefield as a reliable companion someday.
Deepfakes Could Pose A Threat to National Security, But Experts Are Split On How To Handle It
Experts disagree on the right response to video manipulation — is more tech or a societal shift the right solution?
June 3, 2021 — The emerging phenomenon of video manipulation known as deepfakes could pose a threat to the country's national security, policymakers and technology experts said at an online conference Wednesday, but the panel was divided on how best to address it.
A deepfake is a highly technical method of generating synthetic media in which a person's likeness is inserted into a photograph or video in a way that creates the illusion they were actually there. A well-done deepfake can make a person appear to do things they never did and say things they never said.
“The way the technology has evolved, it is literally impossible for a human to actually detect that something is a deepfake,” said Ashish Jaiman, the director of technology operations at Microsoft, at an online event hosted by the Information Technology and Innovation Foundation.
Experts are wary of the implications of this technology becoming increasingly available to the general population, but they are split on how best to address the brewing dilemma. Some believe better technology aimed at detecting deepfakes is the answer, while others say a shift in social perspective is necessary. Still others argue that such a societal shift would be dangerous, and that the solution lies instead in the hands of journalists.
Deepfakes pose a threat to democracy
Such technology posed no problem when only Hollywood had the means to produce such impressive special effects, said Rep. Anthony Gonzalez, R-Ohio, but it has progressed to the point that almost anybody can get their hands on it. With the spread of disinformation, and the challenge it poses to establishing a well-informed public, he said deepfakes could be weaponized to spread lies and affect elections.
So far, however, no evidence exists that deepfakes have been used for this purpose, according to Daniel Kimmage, the acting coordinator for the Global Engagement Center at the Department of State. But he, along with the other panelists, agreed that the technology could be used to influence elections and deepen already-growing mistrust in the information media. They believe it is best to act preemptively and solve the problem before it becomes a crisis.
“Once people realize they can’t trust the images and videos they’re seeing, not only will they not believe the lies, they aren’t going to believe the truth,” said Dana Rao, executive vice president of software company Adobe.
New technology as a solution
Jaiman says Microsoft has been developing sophisticated technologies aimed at detecting deepfakes for over two years now. Deborah Johnson, emeritus technology professor at the University of Virginia School of Engineering, refers to this method as an “arms race,” in which we must develop technology that detects deepfakes at a faster rate than the deepfake technology progresses.
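To make the arms-race framing concrete, here is a toy sketch of the detection side: a linear classifier trained to separate real from synthetic frames. The two per-frame "artifact scores" and all the data below are synthetic stand-ins; real detectors typically train deep networks on full video frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features, e.g., a blending-artifact score and a
# blink-regularity score. Real and fake frames come from different
# distributions; that gap is what a detector exploits and what better
# generators shrink.
real = rng.normal(loc=[0.2, 0.8], scale=0.1, size=(200, 2))
fake = rng.normal(loc=[0.6, 0.4], scale=0.1, size=(200, 2))
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)  # 1 = deepfake

# Logistic regression fit by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    z = np.clip(X @ w + b, -30, 30)   # clip for numerical stability
    p = 1 / (1 + np.exp(-z))          # predicted probability of "fake"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (1 / (1 + np.exp(-np.clip(X @ w + b, -30, 30)))) > 0.5
print(f"training accuracy: {np.mean(preds == y):.2%}")
```

As generation improves, the two feature distributions overlap and this accuracy collapses, which is the dynamic Johnson's "arms race" describes.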
But Jaiman was the first to admit that, despite Microsoft's hard work, detecting deepfakes remains a grueling challenge; it is much harder to detect a deepfake than to create one, he said. He believes technology alone will be inherently insufficient to address the problem and that a societal response is necessary.
Societal shift as a solution
Jaiman argues that people need to be skeptical consumers of information. Until the technology catches up, and deepfakes can be detected and misinformation snuffed out more easily, he believes people need to approach online information with the awareness that they could easily be deceived.
But critics believe this approach of encouraging skepticism could be problematic. Gabriela Ivens, the head of open source research at Human Rights Watch, says that "it becomes very problematic if people's first reactions are not to believe anything." Ivens' job revolves around researching and exposing human rights violations, but she says that growing mistrust of media outlets will make it harder to gain the necessary public support.
She believes that a “zero-trust society” must be resisted.
Vint Cerf, the vice president and chief internet evangelist at Google, says it is up to journalists to prevent the growing spread of distrust. He accused journalists not of deliberately lying, but of oftentimes misleading the public. The true risk of deepfakes, he believes, lies in their ability to corrode America's trust in truth, and it is up to journalists to restore that trust by being completely transparent and honest in their reporting.
Complexity, Lack of Expertise Could Hamper Economic Benefits Of Artificial Intelligence
Artificial intelligence is said to open up a new age of economic development, but its complexity could hamper its rollout.
May 24, 2021 — One of the great challenges to adopting artificial intelligence is the lack of understanding of it, according to a panel hosted by the Atlantic Council’s new GeoTech Center.
The panel last week discussed the economic benefits of AI and how global policy leaders can leverage it to achieve sustainable economic growth with government buy-in. But getting the government excited and getting it to actually act are two completely different tasks.
That's because there exists little government understanding of, or planning around, this emerging market, according to Keith Strier, vice president of worldwide AI initiatives at NVIDIA, a tech company that designs graphics processing units.
If the trend continues, the consequences could be felt globally, widening the world's economic divide and even posing national security threats, he said.
“AI is the new critical infrastructure… It’s about the future of GDP,” said Strier.
Lack of understanding stems from complexity
The lack of government understanding stems from the complexity of AI research and the lack of consensus among experts, Strier said. He noted that the metrics used to quantify AI performance are "deceptively complex" and technical, and that experts struggle even to agree on a definition of AI, adding to its intrinsic complexity.
This divergence in expert opinion makes the research markedly difficult to break down and communicate to policymakers in digestible, useful ways.
“Policy is just not evidence based,” Strier said. “It’s not well informed.”
World economic divide could widen
Charles Jennings, AI entrepreneur and founder of internet technology company NeuralEye, warned of AI’s potential to widen the economic divide worldwide.
Currently, the world's 500 fastest computers are split among just 29 countries, leaving the remaining 170 struggling to produce computing power. As computers become faster, the countries best positioned to reap the economic benefits will do so at a rate that far outpaces less developed countries.
Jennings also believes there are security issues tied to the lack of AI understanding in government, claiming that the public's increasing dependence on AI, paired with a lack of regulation, could lead to a public safety threat. He is adamant that it is time to bridge the gap between enterprise and policy.
Strier says there are three essential questions governments must answer: How much domestic AI compute capacity do we have? How does this compare to other nations? Do we have enough capacity to support our national AI ambitions?
Answering these questions would help governments address the AI question in terms of their own national values and interests, and help create a framework to mitigate the potential negative consequences.
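To make Strier's first two questions concrete, here is a back-of-the-envelope sketch of tallying national shares of tracked supercomputing capacity. The country names and petaflop figures are hypothetical placeholders, not actual TOP500 data.

```python
# Hypothetical aggregate capacity (in petaflops) of tracked systems per country.
petaflops_by_country = {
    "Country A": 850.0,
    "Country B": 620.0,
    "Country C": 95.0,
}

total = sum(petaflops_by_country.values())
for country, pf in sorted(petaflops_by_country.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {pf:7.1f} PFLOP/s ({pf / total:.1%} of tracked capacity)")
```

Comparing a nation's own line against the others is what would ground the third question, whether domestic capacity matches national AI ambitions.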