Amid Privacy Worries, Artificial Intelligence May Offer Positive Outlook on Usage of Data Analytics

Heather Heimbach

WASHINGTON, June 25, 2018 – Carnegie Mellon University academics provided a rare positive outlook on what data analytics can do to improve society, a stark contrast to recent outrage over data privacy scandals.

At a Wednesday Capitol Hill briefing, "Artificial Intelligence for Good," Carnegie Mellon hosted an optimistic evaluation of AI developments in the private sector and academia.

Among the numerous possibilities for future good, AI may be able to help identify and provide aid to those who are contemplating self-harm, said CMU Associate Professor Jason Hong.

For those who are planning to harm themselves, for example, the words "ibuprofen," "advil" and the crying emoji are "about a 13 times stronger signal" than the words "kill" or "suicide," he said.

AI can be used to help combat human trafficking and to identify wildlife more accurately

Jay Qi, data scientist at the data analytics and deep learning company Uptake, created software to combat human trafficking in foreign countries.

This technology may help people on the ground determine who is a victim of human trafficking. Qi also spoke of "using AI to classify images" for nature conservation organizations, helping them find wildlife in camera-trap photos more efficiently.

Audience members remained skeptical of Panglossian optimism

Despite these positive developments and innovations, some audience members questioned the rosy outlook.

The audience questioned how biased data sets, which produce biased AI systems, may perpetuate existing biases in an increasingly data-driven society. A map of New York shown during the presentation revealed a significant lack of data points from a low-income area, raising doubts over whether such a hole in a data set could lead to inaccuracies in analysis.

However, Qi retained a positive outlook. “Having a system is still better than not having a system,” Qi said.

(Illustration of AI by geralt used with permission.)
