Artificial Intelligence

Contact Tracing App Can Assist in Reopening Localities Safely, According to AI Task Force Panelists

Screenshot of infectious disease physician Dr. Krutika Kuppalli from the webcast

July 9, 2020 — In the absence of federal leadership, the number of coronavirus cases has continued to climb in the United States, hitting a new single-day record on Wednesday with 60,000 infections recorded in 24 hours, according to infectious disease physician Dr. Krutika Kuppalli.

In a Wednesday hearing, members of the House Financial Services Committee’s task force on artificial intelligence were joined by contact tracing experts to discuss the importance of exposure notification and contact tracing apps in fighting the ongoing pandemic.

While some questioned the usefulness of tracking apps, many argued that they are vital to allowing localities to reopen safely.

Kuppalli called for the U.S. to learn from the global community by developing a national plan led by science.

She criticized the existing “patchwork system,” in which every municipality and state is making its own decisions. This approach makes it very difficult to combat the spread of the disease, she said.

Kuppalli outlined three components common to successful national responses, all crucial in fighting the pandemic: a comprehensive plan led by science, rapid scaling up of testing and implementation of contact tracing apps.

“Until we have a vaccine, maintaining cases will rely on surveillance, testing, contact tracing and isolation,” she said.

“We are still having problems with isolation and contact tracing,” Kuppalli added, expressing frustration with the lack of federal initiative and overall progress. “We have been having these problems for months.”

According to the panelists, two-thirds of Americans say they would not trust a contact tracing app developed by major tech companies or the federal government.

Current adoption rates of contact tracing apps in the United States are extremely low, which panelists attributed to the fact that downloading these apps is often framed as a tradeoff against civil liberties.

Rep. Barry Loudermilk, R-Ga., emphasized the importance of trust in getting Americans on board with tracing apps, noting that it is critical that citizens understand how their data is being used.

Two experts on the panel have already developed software that could assist in the reopening process without sacrificing individuals’ privacy.

Ryan McClendon, CEO and founder of the CVKey project, which aims to help communities reopen responsibly during the COVID-19 pandemic without compromising privacy, argued for the importance of using Bluetooth signals in tracing apps instead of GPS location data.

He maintained that the interface created by Apple and Google, which utilizes Bluetooth signals, could be extremely useful in countering the disease.
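The hearing did not get into the mechanics, but the privacy case for Bluetooth over GPS is easier to see in a sketch. Below is a minimal Python illustration of the idea behind the Apple/Google interface, not their actual implementation (the real protocol derives its identifiers cryptographically from daily keys): phones broadcast short-lived random tokens and log the tokens they hear, so exposure can be flagged later without anyone’s location ever being recorded. All names in the sketch are illustrative assumptions.

```python
import secrets
import time

# The real protocol rotates broadcast identifiers every 10-20 minutes,
# so a fixed Bluetooth beacon cannot track one person across locations.
ROTATION_INTERVAL_SECONDS = 15 * 60

def new_rolling_id() -> bytes:
    """Generate a rolling identifier to broadcast over Bluetooth.

    A random token stands in for the cryptographically derived
    identifiers the real protocol uses; either way, the broadcast
    reveals nothing about who or where the user is.
    """
    return secrets.token_bytes(16)

class ContactLog:
    """Identifiers heard from nearby phones, stored entirely on-device."""

    def __init__(self) -> None:
        self.heard: list[tuple[float, bytes]] = []

    def record(self, rolling_id: bytes) -> None:
        self.heard.append((time.time(), rolling_id))

    def check_exposure(self, published_ids: set[bytes]) -> bool:
        # Matching against identifiers published by users who later
        # tested positive happens on the phone itself, so no identity
        # or location data ever leaves the device.
        return any(rid in published_ids for _, rid in self.heard)

# Hypothetical usage: phone A broadcasts, phone B logs the encounter.
id_a = new_rolling_id()
log_b = ContactLog()
log_b.record(id_a)

# Later, phone A's user tests positive and publishes their identifiers.
assert log_b.check_exposure({id_a})  # phone B learns it was exposed
```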

CVKey centralizes information for users in an attempt to lessen the confusion caused by ever-changing public health policies.

The app includes a symptom checker, clear guidelines on policies in the user’s area and a CVKey pass, which businesses can use to admit only low-risk customers.

Ramesh Raskar, MIT professor and founder of PathCheck, also argued for the worth of the Bluetooth tracking software created by Apple and Google.

PathCheck utilizes similar software, including a customizable mobile app and a production-ready exposure notification server based on the Google open source project.
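The article leaves the server’s role at that, but the division of labor is worth spelling out: in this architecture the server only ever stores keys volunteered by users who test positive and republishes them, while all matching happens on the phone. Here is a rough, hypothetical sketch of that narrow role using only Python’s standard library; the actual open-source server is far more involved, with key validation, health authority verification and batching.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The only state the server keeps: keys voluntarily uploaded by users
# who tested positive. No contact graph, no identities, no locations.
PUBLISHED_KEYS: list[str] = []

class ExposureServer(BaseHTTPRequestHandler):
    def do_POST(self):
        # A diagnosed user's app uploads its recent keys as hex strings.
        length = int(self.headers["Content-Length"])
        PUBLISHED_KEYS.extend(json.loads(self.rfile.read(length))["keys"])
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        # Every client periodically downloads the complete list and does
        # the matching on-device, so the server never learns who met whom.
        body = json.dumps({"keys": PUBLISHED_KEYS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ExposureServer).serve_forever()
```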

Raskar argued that contact tracing apps can play a big role by allowing the country to track the spread of the disease cheaply, quickly and at scale.

He further contended that any app utilized should be built transparently and be open to scrutiny from the public.

McClendon said that local institutions, such as employers, universities and schools, play an important role in maximizing app adoption, and that workplaces should be using contact tracing to protect their workforces.

“We need 60 to 70 percent adoption for these apps to be useful,” said McClendon. “One of the best ways to do that is to work with local institutions — it is simply a marketing challenge.”

“Workplaces could become hot spots and shut down again, which people don’t like,” McClendon continued. “Preventing the shutdown by keeping the communities safe is a strong argument for adoption, if we can communicate that message.”

Some panelists maintained doubt, saying that Americans are simply unlikely to adopt these apps.

“I can just tell you for a fact, my most rural counties are not going to utilize these apps,” said Rep. Anthony Gonzalez, R-Ohio, adding that he doesn’t blame them.

The experts contended that this is the greatest modern threat the country has seen and that how legislators choose to manage this disease will be their legacy.

Especially as a nation that enjoys boasting of its tech dominance, Kuppalli said, the U.S. should lead in this arena.

Former Assistant Editor Jericho Casper graduated from the University of Virginia, where she studied media policy. She grew up in Newport News in an area heavily impacted by the digital divide. She has a passion for universal access and a vendetta against anyone who stands in the way of her getting better broadband. She is now Associate Broadband Researcher at the Institute for Local Self-Reliance’s Community Broadband Network Initiative.

Artificial Intelligence

Int’l Ethical Framework for Auto Drones Needed Before Widescale Implementation

Observers say the risks inherent in letting autonomous drones roam require an ethical framework.

Timothy Clement-Jones was a member of the U.K. Parliament's committee on artificial intelligence

July 19, 2021 — Autonomous drones could potentially serve as a replacement for military dogs in future warfare, said GeoTech Center Director David Bray during a panel discussion hosted by the Atlantic Council last month, but ethical concerns have observers clamoring for a framework for their use.

Military dogs, trained to assist soldiers on the battlefield, are currently a great asset to the military. AI-enabled autonomous systems, such as drones, are developing capabilities that would allow them to assist in the same way — for example, inspecting inaccessible areas and detecting fires and leaks early to minimize the chance of on-the-job injuries.

However, concerns have been raised about such systems’ potential to impact human lives, including a recent report of an autonomous drone possibly hunting down humans in asymmetric warfare and anti-terrorist operations.

As artificial intelligence continues to develop at a rapid rate, society must determine what, if any, limitations should be implemented on a global scale. “If nobody starts raising the questions now, then it’s something that will be a missed opportunity,” Bray said.

Sally Grant, vice president at Lucd AI, agreed with Bray’s concerns, pointing out the controversies surrounding the uncharted territory of autonomous drones. Panelists proposed the possibility of an international limitation agreement with regard to AI-enabled autonomous systems that can exercise lethal force.

Timothy Clement-Jones, who was a member of the U.K. Parliament’s committee on artificial intelligence, called for international ethical guidelines, saying, “I want to see a development of an ethical risk-based approach to AI development and application.”

Many panelists emphasized the immense risk involved if this technology falls into the wrong hands, offering examples ranging from terrorist groups to the paparazzi and the power such actors could wield with that much access.

Training is vital, Grant said, and soldiers need to feel comfortable with this machinery without becoming over-reliant on it. The idea behind incorporating AI-enabled autonomous systems into missions, including during natural disasters, is that soldiers can use them as guidance to make the most informed decisions.

“AI needs to be our servant, not our master,” Clement-Jones agreed, emphasizing that soldiers should use it as a tool to help them, not as guidance to follow blindly. He compared AI technology to phone navigation, pointing to the importance of keeping a map in the glove compartment in case the technology fails.

The panelists emphasized the importance of remaining transparent and developing an international agreement with an ethical risk-based approach to AI development and application in these technologies, especially if they might enter the battlefield as a reliable companion someday.

Artificial Intelligence

Deepfakes Could Pose A Threat to National Security, But Experts Are Split On How To Handle It

Experts disagree on the right response to video manipulation — is more tech or a societal shift the right solution?

Rep. Anthony Gonzalez, R-Ohio

June 3, 2021 — The emerging and growing phenomenon of video manipulation known as deepfakes could pose a threat to the country’s national security, policymakers and technology experts said at an online conference Wednesday, but how best to address it divided the panel.

A deepfake is a highly technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion the person was actually there. A well-done deepfake can make a person appear to do things they never did and say things they never said.

“The way the technology has evolved, it is literally impossible for a human to actually detect that something is a deepfake,” said Ashish Jaiman, the director of technology operations at Microsoft, at an online event hosted by the Information Technology and Innovation Foundation.

Experts are wary of the implications of this technology becoming increasingly available to the general population, but how best to address the brewing dilemma has them split. Some believe better technology aimed at detecting deepfakes is the answer, while others say a shift in social perspective is necessary. Still others argue that such a societal shift would be dangerous, and that the solution actually lies in the hands of journalists.

Deepfakes pose a threat to democracy

Such technology posed no problem when only Hollywood had the means to produce such impressive special effects, says Rep. Anthony Gonzalez, R-Ohio, but it has progressed to the point that almost anybody can get their hands on it. He says that with the spread of disinformation, and the challenges that poses to establishing a well-informed public, deepfakes could be weaponized to spread lies and affect elections.

As of yet, however, no evidence exists that deepfakes have been used for this purpose, according to Daniel Kimmage, the acting coordinator for the Global Engagement Center of the Department of State. But he, along with the other panelists, agrees that the technology could be used to influence elections and further already growing mistrust of the information media. They believe it’s best to act preemptively and solve the problem before it becomes a crisis.

“Once people realize they can’t trust the images and videos they’re seeing, not only will they not believe the lies, they aren’t going to believe the truth,” said Dana Rao, executive vice president of software company Adobe.

New technology as a solution

Jaiman says Microsoft has been developing sophisticated technologies aimed at detecting deepfakes for over two years now. Deborah Johnson, emeritus technology professor at the University of Virginia School of Engineering, refers to this method as an “arms race,” in which we must develop technology that detects deepfakes at a faster rate than the deepfake technology progresses.

But Jaiman was the first to admit that, despite Microsoft’s hard work, detecting deepfakes remains a grueling challenge. It is much harder to detect a deepfake than to create one, he said. He believes a societal response is necessary because technology alone will be insufficient to address the problem.

Societal shift as a solution

Jaiman argues that people need to be skeptical consumers of information. He believes that until the technology catches up, and deepfakes can be more easily detected and misinformation more easily snuffed out, people need to approach online information with the perspective that they could easily be deceived.

But critics believe this approach of encouraging skepticism could be problematic. Gabriela Ivens, the head of open source research at Human Rights Watch, says that “it becomes very problematic if people’s first reactions are not to believe anything.” Ivens’ job revolves around researching and exposing human rights violations, but she says that the growing mistrust of media outlets will make it harder for her to gain the necessary public support.

She believes that a “zero-trust society” must be resisted.

Vint Cerf, the vice president and chief internet evangelist at Google, says that it is up to journalists to prevent the growing spread of distrust. He accused journalists not of deliberately lying, but of oftentimes misleading the public. He believes that the true risk of deepfakes lies in their ability to corrode America’s trust in truth, and that it is up to journalists to restore that already eroding trust by being completely transparent and honest in their reporting.

Artificial Intelligence

Complexity, Lack of Expertise Could Hamper Economic Benefits Of Artificial Intelligence

Artificial intelligence is said to open up a new age of economic development, but its complexity could hamper its rollout.

Keith Strier of NVIDIA

May 24, 2021 — One of the great challenges to adopting artificial intelligence is the lack of understanding of it, according to a panel hosted by the Atlantic Council’s new GeoTech Center.

The panel last week discussed the economic benefits of AI and how global policy leaders can leverage it to achieve sustainable economic growth with government buy-in. But getting the government excited and getting them to actually do something about it are two completely different tasks.

That’s because there exists little government understanding of or planning around this emerging market, according to Keith Strier, vice president of worldwide AI initiatives at NVIDIA, a tech company that designs graphics processing units.

If the trend continues, the consequences could be felt globally, widening the world’s economic divide and even posing national security threats, he said.

“AI is the new critical infrastructure… It’s about the future of GDP,” said Strier.

Lack of understanding stems from complexity 

The reason for a lack of government understanding stems from the complexity of AI research, and the lack of consensus among experts, Strier said. He noted that the metrics used to quantify AI performance are “deceptively complex” and technical. Experts struggle to even find consensus on defining AI, only adding to its already intrinsic complexity.

This divergence in expert opinion makes the research markedly difficult to break down and communicate to policy makers in digestible, useful ways.

“Policy is just not evidence based,” Strier said. “It’s not well informed.”

World economic divide could widen 

Charles Jennings, AI entrepreneur and founder of internet technology company NeuralEye, warned of AI’s potential to widen the economic divide worldwide.

Currently, the 500 fastest computers in the world are split among just 29 countries, leaving the remaining 170 struggling to produce computing power. As computers become faster, the countries best positioned to reap the economic benefits will do so at a rate that far outpaces less developed countries.

Jennings also believes there are security issues associated with the lack of AI understanding in government, claiming that the public’s increasing dependence on AI, coupled with a lack of regulation, could lead to a public safety threat. He is adamant that it is time to bridge the gap between enterprise and policy.

Strier says there are three essential questions governments must answer: How much domestic AI compute capacity do we have? How does this compare to other nations? Do we have enough capacity to support our national AI ambitions?

Answering these questions would help governments address AI in terms of their own national values and interests, and help create a framework to mitigate potential negative consequences.
