WASHINGTON, May 30, 2019 – The development of artificial intelligence will bring extreme changes to the future of warfare, a panel of scientists said Thursday, calling the impact of current advances analogous to the development of agriculture or the domestication of the horse.
The panel was hosted by the Hudson Institute, a conservative think tank founded by military and industrial strategist Herman Kahn. Speakers on the panel discussed the ways in which the Department of Defense can implement new technologies, as well as the problems that could arise as a result.
One common concern about AI in military decision-making was that it could accelerate escalation in the use of force. During the Cuban Missile Crisis, for example, an AI might have recommended acting sooner, possibly with catastrophic results.
But Navy AI Lead Colonel Jeff Kojac argued that the opposite could also be true: a young platoon commander in a high-pressure situation could use an unmanned aerial system to help determine not to open fire on a non-combatant group.
Lindsey R. Sheppard, associate fellow at the Center for Strategic and International Studies, also pushed back on this fear, explaining that a significant body of cognitive psychology research shows that more information does not necessarily lead to a faster decision.
Hudson Senior Fellow William Schneider Jr. also thought that the potential benefits outweighed the risks, pointing out that AI gives the military the opportunity to head off a crisis before it occurs.
In regard to 5G networks, Schneider said they present a "substantial" risk because of what can be integrated into the technology. He cited a recent Human Rights Watch report describing a mass surveillance app engaged in the "intrusive, massive collection of personal information." A large inventory of data-based services presents a wide range of potential points of breach.
The panelists also discussed how to mitigate the consequences of AI's current limitations and vulnerabilities. Sheppard emphasized the importance of placing computing as far out on the network's edge as possible.
For example, Apple's facial recognition technology once sent the captured image to a central server for comparison against a stored image; the entire process is now done on the device itself, freeing up server capacity. The same model could be applied to cloud architecture in military settings as well.
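The on-device model Sheppard described can be illustrated with a minimal sketch. All names, vectors, and the threshold below are hypothetical, not any vendor's actual API; the point is simply that the biometric comparison runs entirely at the edge, so no captured data has to cross the network.

```python
def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def match_on_device(captured_embedding, enrolled_embedding, threshold=0.9):
    """Edge model: the comparison runs locally on the device.

    Nothing is uploaded; only a yes/no result is ever produced,
    unlike a round-trip to a central server.
    """
    return cosine_similarity(captured_embedding, enrolled_embedding) >= threshold

# Both embeddings live on the device, so no network call is needed.
print(match_on_device([1.0, 0.0], [0.99, 0.1]))  # similar vectors -> True
print(match_on_device([1.0, 0.0], [0.0, 1.0]))   # dissimilar vectors -> False
```

In a military cloud architecture, the same pattern would keep inference at the edge node and reserve the central cloud for model updates, shrinking both latency and the attack surface.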
Dr. Alexander Kott, chief scientist for the Army Research Laboratory, described the need for a complex mix of decentralized clouds at the edge, making them more resilient to attack. Col. Kojac pointed out that an additional component of resilience is agility, recommending an incremental approach to developing these technologies over the more traditional “waterfall” approach.
Not only will the technology require agility; the people operating it will also need to be flexible to make the rise of AI feasible, a barrier several audience members highlighted as well. Kojac called an AI-literate force a "categorical imperative," and Sheppard supported the idea by suggesting that all forces involved in deploying these technologies should be required to know how to program.
This shift should come easier because the workforce now entering the military is fundamentally different from a decade ago. Troops serve for longer periods and face higher education requirements. Many also come from a more technologically rich background, leading Schneider to call them "digital natives." He said AI ultimately provides a "basis for optimism" because of its potential to save lives on the front lines.
On the civilian side, Sheppard also highlighted the need for top-to-bottom recognition of the importance of analytics within company cultures.
(Photo of panelists at the Hudson Institute event by Drew Clark.)