WASHINGTON, December 24, 2021 – Former Secretary of State Henry Kissinger says that further use of artificial intelligence will call into question what it means to be human, and that the technology cannot solve all those problems humans fail to address on their own.
Kissinger spoke at a Council on Foreign Relations event highlighting his new book “The Age of AI: And Our Human Future” on Monday along with co-author and former Google CEO Eric Schmidt in a conversation moderated by PBS NewsHour anchor Judy Woodruff.
Schmidt remarked throughout the event on unanswered questions about AI despite common use of the technology.
He emphasized that the computer systems may be able to solve complex problems, such as in physics dealing with dark matter or dark energy, but that the humans who built the technology may not be able to determine how exactly the computer solved the problems.
Pointing to this potential for dangerous use, he said that AI development, though sometimes a force for good, “plays” with human lives.
He pointed out that, to deal with this great technological power, almost every country has now created a governmental body to oversee the ethics of AI development.
Schmidt stated that Western values must be the dominant values in AI platforms that influence everyday life, such as those with key implications for democracy.
With all the consideration of how to make AI both effective and beneficial, Kissinger noted how much human thinking must go into managing the “thinking” these machines do, and that “a mere technological edge is not in itself decisive” when it comes to AI that must compete with adversaries such as China and its diplomatic and technological might.
CES 2022: Artificial Intelligence Needs to Resonate with People for Widespread Acceptance
Even though stakeholders may want technologies that yield better results, they may be uncomfortable with artificial intelligence.
LAS VEGAS, January 6, 2022 – To get artificial intelligence into the mainstream, the industry needs to appease not just regulators, but stakeholders as well.
Pat Baird, regulatory head for software standards at electronics maker Philips, said at the Consumer Electronics Show Thursday that for AI technology to be successfully implemented in a field like medicine, everyone touched by it needs to be comfortable with it.
“A lot of people want to know more information, more information, more information before you dare use that [technology] on me or one of the members of my family,” Baird said. “I totally get that, but it is interesting – some of the myths that we see in Hollywood compared to how the technology [actually functions].” To be successful, he added, you have to win the approval of all stakeholders, not just regulators.
“It is a fine line to take and walk,” Baird said. “I think we need to make sure that the lawmakers really understand the benefits and the risks about this – not all AI is the same. Not all applications are the same.”
Like accidents involving autonomous vehicles, rare accidents for AI can set the technology back years, Baird said. “One of the things that I worry about is when something bad happens that’s kind of reflected on the entire industry.”
Baird noted that many people arrive with preconceived biases against AI that leave them skeptical or hesitant about whether a technology is safe or will work.
But he stopped short of saying these biases are putting a “thumb on the scale” against AI – “but [that thumb] is floating near the scale right now.”
“That is one of the things that I’m worried about,” he said. “Because this technology can make a difference. I want to help my patients, damn it, and if this can only improve performance by a couple percent, that is important to that family that you just helped with that [technology].”
Joseph Murphy, vice president of marketing at AI company Sensory Inc., said, “Just like everything in life it’s a tricky balance of innovation, and then putting up the speed bumps to innovation. It’s a process that has to happen.”
On Wednesday, Sally Lange Witkowski, founder of business consulting firm Slang Consulting, said that companies should be educating consumers about the benefits of 5G for widespread adoption.
Vaccine Makers Promote Use of Artificial Intelligence for Development
Artificial Intelligence assists in the development of vaccine research and trial testing, makers say.
WASHINGTON, December 15, 2021 – Artificial intelligence is helping accelerate the development of COVID-19 vaccines.
Leaders in Janssen’s and Moderna’s research and development groups said Tuesday that AI will help drug makers create better, more effective vaccines for patients.
Speaking at Bloomberg’s Technology Summit on Tuesday, Najat Khan, Janssen’s research and development global head of strategy, said AI is speeding up the delivery of new vaccines for populations in need. (Janssen is a subsidiary of Johnson & Johnson.)
“We use AI and machine learning to predict performance of clinical sites for potential [vaccine] trial sites,” Khan said. AI can help researchers target patients for trials to obtain more comprehensive data sets. Vaccine developers spend time, money, and resources finding patients to participate in clinical trials.
Khan said “only four percent” of eligible patients join a clinical trial. AI can help researchers focus their efforts to identify patients to participate, she said.
Outstanding concerns with AI
Despite AI’s usefulness in vaccine development, Khan said there is still a gap that exists between the information available in healthcare and what’s useful for AI. “There’s lots of data generated in health care, but it’s not connected,” Khan stated. “If it’s not connected, it’s fragmented.”
The problem, Khan said, is the varying systems health clinics use to input and store patients’ information. “Different systems across different clinics [need] the same data,” Khan added. “I can go to two different clinics, each one year apart, and my data would be separate.”
On a large scale, mismatched datasets lead to “an over-index of patient information in some areas and an under-index in others,” she said.
For better innovation in treating and curing diseases, health providers need better ways to gather and share data while addressing patient privacy concerns, Khan added.
One of health care providers’ challenges is effective data minimization – ensuring that health entities use patient data only according to the patient’s consent. The industry is starting to see progress with tokenization, Khan said, which anonymizes data and links it with other data sources for a specific patient-focused purpose.
“This allows us to do even more with AI,” Khan said.
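The article does not detail how tokenization works under the hood, but the idea Khan describes can be sketched with a salted one-way hash: a direct patient identifier is replaced by a stable token, so records from different sources can be linked without exposing identity. The function name, salt, and identifiers below are illustrative assumptions, not anything from Janssen’s actual systems.

```python
import hashlib

def tokenize(patient_id: str, salt: str) -> str:
    """Replace a direct identifier with a stable, anonymized token.

    The same patient identifier always yields the same token (for a
    given salt), so records can be linked across datasets without
    revealing the underlying identity.
    """
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()

# Two clinics holding the same patient produce matching tokens,
# so their records can be joined for a specific research purpose.
clinic_a = tokenize("patient-12345", salt="study-2021")
clinic_b = tokenize("patient-12345", salt="study-2021")
assert clinic_a == clinic_b
```

In practice, production tokenization services add safeguards this sketch omits, such as keyed hashing and consent checks before any linkage.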
Int’l Ethical Framework for Auto Drones Needed Before Widescale Implementation
Observers say the risks inherent in letting autonomous drones roam require an ethical framework.
July 19, 2021 — Autonomous drones could potentially serve as a replacement for military dogs in future warfare, said GeoTech Center Director David Bray during a panel discussion hosted by the Atlantic Council last month, but ethical concerns have observers clamoring for a framework for their use.
Military dogs, trained to assist soldiers on the battlefield, are currently a great asset to the military. AI-enabled autonomous systems, such as drones, are developing capabilities that would allow them to assist in the same way — for example, inspecting inaccessible areas and detecting fires and leaks early to minimize the chance of on-the-job injuries.
However, concerns have been raised about these systems’ ability to impact human lives, including a recent report of an autonomous drone possibly hunting down humans in asymmetric warfare and anti-terrorist operations.
As artificial intelligence continues to develop at a rapid rate, society must determine what, if any, limitations should be implemented on a global scale. “If nobody starts raising the questions now, then it’s something that will be a missed opportunity,” Bray said.
Sally Grant, vice president at Lucd AI, agreed with Bray’s concerns, pointing out the controversies surrounding the uncharted territory of autonomous drones. Panelists proposed the possibility of an international limitation agreement with regard to AI-enabled autonomous systems that can exercise lethal force.
Timothy Clement-Jones, who was a member of the U.K. Parliament’s committee on artificial intelligence, called for international ethical guidelines, saying, “I want to see a development of an ethical risk-based approach to AI development and application.”
Many panelists emphasized the immense risk involved if this technology gets into the wrong hands, offering examples stretching from terrorist groups to the paparazzi, and the power such access could confer.
Training is vital, Grant said: soldiers need to feel comfortable with this machinery without becoming over-reliant on it. The idea behind bringing AI-enabled autonomous systems into missions, including during national disasters, is that soldiers can use them as guidance to make the most informed decisions.
“AI needs to be our servant not our master,” Clement-Jones agreed, emphasizing that soldiers should use it as a tool to help them, not as guidance to follow blindly. He compared AI technology with phone navigation, pointing to the importance of keeping a map in the glove compartment in case the technology fails.
The panelists emphasized the importance of remaining transparent and developing an international agreement with an ethical risk-based approach to AI development and application in these technologies, especially if they might enter the battlefield as a reliable companion someday.