WASHINGTON, June 25, 2018 - Carnegie Mellon University academics provided a rare positive outlook on what data analytics can do to improve society, a stark contrast to recent outrage over data privacy scandals.
At a Wednesday Capitol Hill briefing on "Artificial Intelligence for Good," Carnegie Mellon hosted an optimistic evaluation of AI developments in the private sector and academia.
Among the numerous possibilities for future good, AI may be able to help identify and provide aid to those who are contemplating self-harm, said CMU Associate Professor Jason Hong.
"For those who are planning to harm themselves, for example, the words ibuprofen, Advil and the crying emoji are about a 13 times stronger signal" than the words "kill" or "suicide," he said.
AI can be used to help combat human trafficking and to identify wildlife more accurately
Jay Qi, a data scientist at the data analytics and deep learning company Uptake, created software to combat human trafficking in foreign countries. The technology may help people on the ground determine who is a victim of human trafficking. He also spoke of "using AI to classify images" for nature conservation organizations, helping them find wildlife in camera traps more efficiently.
Audience members remained skeptical of Panglossian optimism
Despite these positive developments and innovations, audience members remained skeptical of the optimistic outlook.
The audience questioned how biased data sets, which result in biased AI systems, may be used to perpetuate biases in an increasingly data-driven society. A map of New York presented during the briefing revealed a significant lack of data points from a low-income area, raising doubts over whether such a hole in a data set could lead to inaccuracies in data analysis.
However, Qi retained a positive outlook. “Having a system is still better than not having a system,” Qi said.