Artificial Intelligence

AI the Most Important Change in Health Care Since Introduction of the MRI, Say Experts

Screenshot from the webinar

February 7, 2021 — Artificial intelligence is the most important technological change in health care since the introduction of the MRI, experts said Thursday at a panel discussion about European tech sponsored by the Information Technology and Innovation Foundation.

AI will not be replacing doctors and nurses, but empowering decision-makers with new resources, according to those participating in the discussion on “How Can Europe Enhance the Benefits of AI-Enabled Health Care?”

For example, pharmaceutical companies are using AI for the speedy development of vaccines, panelists said. Additionally, AI is helping address the uneven ratio of skilled doctors to patients, assist health-care professionals in complex procedures, and deliver personalized health care to patients.

Yet, for AI technologies to reach their potential, European Union actors need to create regulations governing transparency, they said.

How AI works in health care

AI systems are built on large collections of data used to train and validate algorithms, which can then suggest solutions and detect anomalies in patient data sets.
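To make that concrete, the sketch below shows the kind of anomaly detection the panelists describe, using scikit-learn’s IsolationForest on synthetic vital-sign data. The features, values, and contamination rate are illustrative assumptions, not details from any system discussed on the panel.

```python
# Illustrative sketch only: an algorithm is fit ("validated") on a large
# patient data set, then flags anomalous new records. The vital-sign
# features and the 5% contamination rate are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic historical data set: columns = resting heart rate (bpm) and
# systolic blood pressure (mmHg) for 500 patients.
historical = rng.normal(loc=[70.0, 120.0], scale=[8.0, 10.0], size=(500, 2))

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(historical)

new_patients = np.array([
    [72.0, 118.0],   # vitals in the typical range
    [140.0, 210.0],  # clearly abnormal vitals
])
print(detector.predict(new_patients))  # 1 = looks normal, -1 = flagged anomaly
```

A flagged record would be surfaced to a clinician for review rather than acted on automatically, consistent with the panel’s framing of AI as a decision-support resource.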

But algorithm creation needs to be held to higher standards than it is currently, since systemic errors can easily creep in at large scale, said Elmar Kotter, chairperson of the eHealth and Informatics Subcommittee of the European Society of Radiology.

AI should have been used more during the early stages of the COVID-19 pandemic, said Maria Manuel Marques, a member of the Special Committee on Artificial Intelligence in a Digital Age.

AI helps treat more patients at a faster rate, and with consistency and agility, said Chris Walker, chair of the working group on digital health for the European Federation of Pharmaceutical Industries and Associations. It provides new insights and improves outcomes by allowing diseases to be treated at an early stage.

Europe faces great challenges because of people’s misconceptions of what AI can do, panelists said. AI is not meant to replace doctors and nurses, but to empower them with decision-making resources.

More trust would come if companies conducted safe experimentation, testing and showing examples of how AI can improve the lives of health care workers and patients, Marques said.

Regulation of data is crucial for hospitals to trust the products, and patients must have privacy over their information. Regulations will help them understand what has been done in the manufacture of an AI system, and to what use their data will be put.

Ander Elustondo Jauregui, policy officer for Digital Health, added that data quality is an important indicator of the maturity of an AI system. That provides assurance for doctors, he said.

Artificial Intelligence

Int’l Ethical Framework for Auto Drones Needed Before Widescale Implementation

Observers say the risks inherent in letting autonomous drones roam require an ethical framework.

Timothy Clement-Jones was a member of the U.K. Parliament's committee on artificial intelligence

July 19, 2021 — Autonomous drones could potentially serve as a replacement for military dogs in future warfare, said GeoTech Center Director David Bray during a panel discussion hosted by the Atlantic Council last month, but ethical concerns have observers clamoring for a framework for their use.

Military dogs, trained to assist soldiers on the battlefield, are currently a great asset to the military. AI-enabled autonomous systems, such as drones, are developing capabilities that would allow them to assist in the same way — for example, inspecting inaccessible areas and detecting fires and leaks early to minimize the chance of on-the-job injuries.

However, concerns have been raised about their potential to impact human lives, including a recent report of an autonomous drone possibly hunting down humans in asymmetric warfare and anti-terrorist operations.

As artificial intelligence continues to develop at a rapid rate, society must determine what, if any, limitations should be implemented on a global scale. “If nobody starts raising the questions now, then it’s something that will be a missed opportunity,” Bray said.

Sally Grant, vice president at Lucd AI, agreed with Bray’s concerns, pointing out the controversies surrounding the uncharted territory of autonomous drones. Panelists raised the possibility of an international limitation agreement with regard to AI-enabled autonomous systems that can exercise lethal force.

Timothy Clement-Jones, who was a member of the U.K. Parliament’s committee on artificial intelligence, called for international ethical guidelines, saying, “I want to see a development of an ethical risk-based approach to AI development and application.”

Many panelists emphasized the immense risk involved if this technology falls into the wrong hands, offering examples ranging from terrorist groups to the paparazzi and the power such actors could wield with that much access.

Training is vital, Grant said, and soldiers need to feel comfortable with this machinery without becoming over-reliant on it. The idea of implementing AI-enabled autonomous systems in missions, including during natural disasters, is that soldiers can use them as guidance to make the most informed decisions.

“AI needs to be our servant, not our master,” Clement-Jones agreed, emphasizing that soldiers can use it as a tool to help them, not as guidance to follow. He compared AI technology with the use of phone navigation, pointing to the importance of keeping a map in the glove compartment in case the technology fails.

The panelists emphasized the importance of remaining transparent and developing an international agreement with an ethical risk-based approach to AI development and application in these technologies, especially if they might enter the battlefield as a reliable companion someday.

Artificial Intelligence

Deepfakes Could Pose A Threat to National Security, But Experts Are Split On How To Handle It

Experts disagree on the right response to video manipulation — is more tech or a societal shift the right solution?

Rep. Anthony Gonzalez, R-Ohio

June 3, 2021 — The emerging and growing phenomenon of video manipulation known as deepfakes could pose a threat to the country’s national security, policymakers and technology experts said at an online conference Wednesday, but how best to address them divided the panel.

A deepfake is a highly technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion they were actually there. A well-done deepfake can make a person appear to do things they never actually did and say things they never actually said.

“The way the technology has evolved, it is literally impossible for a human to actually detect that something is a deepfake,” said Ashish Jaiman, the director of technology operations at Microsoft, at an online event hosted by the Information Technology and Innovation Foundation.

Experts are wary of the implications of this technology becoming increasingly available to the general population, but how best to address the brewing dilemma has them split. Some believe better technology aimed at detecting deepfakes is the answer, while others say a shift in social perspective is necessary. Still others argue that such a societal shift would be dangerous, and that the solution actually lies in the hands of journalists.

Deepfakes pose a threat to democracy

Such technology posed no problem when only Hollywood had the means to produce such impressive special effects, says Rep. Anthony Gonzalez, R-Ohio, but the technology has progressed to a point that allows almost anybody to get their hands on it. He says that with the spread of disinformation, and the challenges that poses to establishing a well-informed public, deepfakes could be weaponized to spread lies and affect elections.

As of yet, however, no evidence exists that deepfakes have been used for this purpose, according to Daniel Kimmage, the acting coordinator for the Global Engagement Center of the Department of State. But he, along with the other panelists, agrees that the technology could be used to influence elections and further sow seeds of mistrust in the information media. They believe it is best to act preemptively and solve the problem before it becomes a crisis.

“Once people realize they can’t trust the images and videos they’re seeing, not only will they not believe the lies, they aren’t going to believe the truth,” said Dana Rao, executive vice president of software company Adobe.

New technology as a solution

Jaiman says Microsoft has been developing sophisticated technologies aimed at detecting deepfakes for over two years now. Deborah Johnson, emeritus technology professor at the University of Virginia School of Engineering, refers to this method as an “arms race,” in which we must develop technology that detects deepfakes at a faster rate than the deepfake technology progresses.
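As a rough illustration of what one side of that arms race involves, the sketch below shows a minimal binary frame classifier, the basic shape of many deepfake detectors, written against PyTorch. It is a hedged stand-in trained on dummy data, not Microsoft’s system or any production detector.

```python
# Illustrative sketch only: a small convolutional network trained to label
# video frames as genuine (0) or deepfake (1). The architecture and the
# random data are placeholders, not any real detection system.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool each channel to a single value
        )
        self.head = nn.Linear(32, 1)  # one logit: >0 suggests "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FrameClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch: 8 RGB frames at 64x64 pixels, with
# random real/fake labels standing in for a labeled training corpus.
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
print(f"one-step training loss: {loss.item():.3f}")
```

The arms race Johnson describes amounts to retraining classifiers like this faster than generation techniques improve, which is part of why detection keeps falling behind.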

But Jaiman was the first to admit that, despite Microsoft’s hard work, detecting deepfakes remains a grueling challenge. It is much harder to detect a deepfake than it is to create one, he said. He believes a societal response is necessary, and that technology alone will be insufficient to address the problem.

Societal shift as a solution

Jaiman argues that people need to be skeptical consumers of information. He believes that until the technology catches up, and deepfakes can more easily be detected and misinformation more easily snuffed out, people need to approach online information with the perspective that they could easily be deceived.

But critics believe this approach of encouraging skepticism could be problematic. Gabriela Ivens, the head of open source research at Human Rights Watch, says that “it becomes very problematic if people’s first reactions are not to believe anything.” Ivens’ job revolves around researching and exposing human rights violations, but she says the growing mistrust of media outlets will make it harder for her to gain the necessary public support.

She believes that a “zero-trust society” must be resisted.

Vint Cerf, the vice president and chief internet evangelist at Google, says that it is up to journalists to prevent the growing spread of distrust. He accused journalists not of deliberately lying, but of often misleading the public. He believes that the true risk of deepfakes lies in their ability to corrode America’s trust in truth, and that it is up to journalists to restore that eroding trust by being completely transparent and honest in their reporting.

Artificial Intelligence

Complexity, Lack of Expertise Could Hamper Economic Benefits Of Artificial Intelligence

Artificial intelligence is said to open up a new age of economic development, but its complexity could hamper its rollout.

Keith Strier of NVIDIA

May 24, 2021 — One of the great challenges to adopting artificial intelligence is the lack of understanding of it, according to a panel hosted by the Atlantic Council’s new GeoTech Center.

The panel last week discussed the economic benefits of AI and how global policy leaders can leverage it to achieve sustainable economic growth with government buy-in. But getting governments excited and getting them to actually do something about it are two completely different tasks.

That’s because there exists little government understanding of, or planning around, this emerging market, according to Keith Strier, vice president of worldwide AI initiatives at NVIDIA, a tech company that designs graphics processing units.

If the trend continues, the consequences could be felt globally, widening the world’s economic divide and even posing national security threats, he said.

“AI is the new critical infrastructure… It’s about the future of GDP,” said Strier.

Lack of understanding stems from complexity 

The reason for a lack of government understanding stems from the complexity of AI research, and the lack of consensus among experts, Strier said. He noted that the metrics used to quantify AI performance are “deceptively complex” and technical. Experts struggle to even find consensus on defining AI, only adding to its already intrinsic complexity.

This divergence in expert opinion makes the research markedly difficult to break down and communicate to policy makers in digestible, useful ways.

“Policy is just not evidence based,” Strier said. “It’s not well informed.”

World economic divide could widen 

Charles Jennings, AI entrepreneur and founder of internet technology company NeuralEye, warned of AI’s potential to widen the economic divide worldwide.

Currently, the 500 fastest computers in the world are split among just 29 countries, leaving the remaining 170 struggling to produce computing power. As computers become faster, the countries best suited to reap the economic benefits will do so at a rate that far outpaces less developed countries.

Jennings also believes there are security issues associated with the lack of AI understanding in government, claiming that the public’s increasing dependence on AI, combined with a lack of regulation, could lead to a public safety threat. He is adamant that it is time to bridge the gap between enterprise and policy.

Strier says there are three essential questions governments must answer: How much domestic AI compute capacity do we have? How does this compare to other nations? Do we have enough capacity to support our national AI ambitions?

Answering these questions would help governments address AI in terms of their own national values and interests, and would help create a framework to mitigate the potential negative consequences that might otherwise arise.
