Contact Tracing App Can Assist in Reopening Localities Safely, According to AI Task Force Panelists

Screenshot of infectious disease physician Dr. Krutika Kuppalli from the webcast

July 9, 2020 — In the absence of federal leadership, the number of coronavirus cases has continued to climb in the United States, reaching a new single-day record on Wednesday with 60,000 infections recorded in 24 hours, according to infectious disease physician Dr. Krutika Kuppalli.

In a Wednesday hearing, members of the House Financial Services Committee’s task force on artificial intelligence were joined by contact tracing experts to discuss the importance of exposure notification and contact tracing apps in fighting the ongoing pandemic.

While some questioned the usefulness of tracking apps, many argued that they are essential to reopening localities safely.

Kuppalli called for the U.S. to learn from the global community by developing a national plan led by science.

She criticized the existing “patchwork system,” in which every municipality and state is making its own decisions. This approach makes it very difficult to combat the spread of the disease, she said.

Kuppalli outlined three components common to successful national responses, each crucial in fighting the pandemic: a comprehensive plan led by science, the rapid scaling up of testing and the implementation of contact tracing apps.

“Until we have a vaccine, maintaining cases will rely on surveillance, testing, contact tracing and isolation,” she said.

“We are still having problems with isolation and contact tracing,” Kuppalli added, expressing frustration with the lack of federal initiative and overall progress. “We have been having these problems for months.”

According to the panelists, two-thirds of Americans say they would not trust a contact tracing app developed by major tech companies or the federal government.

Current adoption rates of contact tracing apps in the United States are extremely low, which panelists attributed to the fact that downloading these apps is often framed as a tradeoff against civil liberties.

Rep. Barry Loudermilk, R-Ga., emphasized the importance of trust in getting Americans on board with tracing apps, noting that it is critical that citizens understand how their data is being used.

Two experts on the panel have already developed software that could assist in the reopening process without sacrificing individuals’ privacy.

Ryan McClendon, CEO and founder of the CVKey project, which aims to help communities reopen responsibly during the COVID-19 pandemic without compromising privacy, argued for the importance of using Bluetooth signals in tracing apps instead of GPS location data.

He maintained that the interface created by Apple and Google, which utilizes Bluetooth signals, could be extremely useful in countering the disease.
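Conceptually, the appeal of the Bluetooth approach is that all matching can happen on the phone itself, with no location data collected. The following is a minimal sketch of that idea, with deliberately simplified key derivation and hypothetical function names; it is not the actual Apple/Google Exposure Notification API.

```python
import hashlib
import os

# Conceptual sketch only: loosely modeled on the Bluetooth-based
# exposure notification design described above, NOT the real
# Apple/Google API. Names and key derivation are illustrative.

def daily_key() -> bytes:
    """Each phone generates a fresh random key every day."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived identifier from the daily key.
    Phones broadcast these over Bluetooth instead of GPS coordinates."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

def check_exposure(heard_ids: set, published_keys: list) -> bool:
    """If an infected user consents, their daily keys are published.
    Each phone re-derives the identifiers locally and looks for a match,
    so raw contact data never leaves the device."""
    for key in published_keys:
        for interval in range(144):  # one identifier per 10-minute window
            if rolling_id(key, interval) in heard_ids:
                return True
    return False
```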

CVKey centralizes information for users in an attempt to lessen the confusion caused by ever-changing public health policies.

The app includes a symptom checker, clear guidelines on policies in the user’s area and a CVKey pass, which businesses can use to admit only low-risk customers.

Ramesh Raskar, MIT professor and founder of PathCheck, also argued for the value of the Bluetooth tracking software created by Apple and Google.

PathCheck utilizes similar software, including a customizable mobile app and a production-ready exposure notification server based on the Google open source project.

Raskar argued that contact tracing apps can play a big role by allowing the country to track the spread of the disease cheaply, quickly and at scale.

He further contended that any app utilized should be built transparently and be open to scrutiny from the public.

McClendon said that local institutions, such as employers, universities and schools, play an important role in maximizing app adoption, and that workplaces should use contact tracing to protect their workforces.

“We need 60 to 70 percent adoption for these apps to be useful,” said McClendon. “One of the best ways to do that is to work with local institutions — it is simply a marketing challenge.”

“Workplaces could become hot spots and shut down again, which people don’t like,” McClendon continued. “Preventing the shutdown by keeping communities safe is a strong argument for adoption, if we can communicate that message.”
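One way to see why the adoption bar McClendon cites is so high (an illustration of ours, not a calculation he presented): an encounter is detected only when both parties run the app, so the share of contacts covered scales roughly with the square of the adoption rate.

```python
# Illustrative arithmetic only: if each person in an encounter
# independently has the app installed with probability `adoption`,
# both have it with probability adoption squared.
for adoption in (0.3, 0.6, 0.7):
    print(f"{adoption:.0%} adoption -> ~{adoption ** 2:.0%} of contacts covered")
# 30% adoption -> ~9% of contacts covered
# 60% adoption -> ~36% of contacts covered
# 70% adoption -> ~49% of contacts covered
```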

Some panelists maintained doubt, saying that Americans are simply unlikely to adopt these apps.

“I can just tell you for a fact, my most rural counties are not going to utilize these apps,” said Rep. Anthony Gonzalez, R-Ohio, adding that he doesn’t blame them.

The experts contended that this is the greatest modern threat the country has seen and that how legislators choose to manage this disease will be their legacy.

Especially as a nation that boasts of its tech dominance, Kuppalli said, the U.S. should lead in this arena.

Sen. Bennet Urges Companies to Consider ‘Alarming’ Child Safety Risks in AI Chatbot Race

Several leading tech companies have rushed to integrate their own AI-powered applications

Photo of Sen. Michael Bennet in 2019 by Gage Skidmore, used with permission

WASHINGTON, March 22, 2023 — Sen. Michael Bennet, D-Colo., on Tuesday urged the companies behind generative artificial intelligence products to anticipate and mitigate the potential harms that AI-powered chatbots pose to underage users.

“The race to deploy generative AI cannot come at the expense of our children,” Bennet wrote in a letter to the heads of Google, OpenAI, Meta, Microsoft and Snap. “Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk and mitigate harm.”

In response to the explosive popularity of OpenAI’s ChatGPT, several leading tech companies have rushed to integrate their own AI-powered applications. Microsoft recently released an AI-powered version of its Bing search engine, and Google has announced plans to make a conversational AI service “widely available to the public in the coming weeks.”

Social media platforms have followed suit, with Meta CEO Mark Zuckerberg saying the company plans to “turbocharge” its AI development the same day Snapchat launched a GPT-powered chatbot called My AI.

These chatbots have already demonstrated “alarming” interactions, Bennet wrote. In response to a researcher posing as a child, My AI gave instructions for lying to parents about an upcoming trip with a 31-year-old man and for covering up a bruise ahead of a visit from Child Protective Services.

A Snap Newsroom post announcing the chatbot acknowledged that “as with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything.”

Bennet criticized the company for deploying My AI despite knowledge of its shortcomings, noting that 59 percent of teens aged 13 to 17 use Snapchat. “Younger users are at an earlier stage of cognitive, emotional, and intellectual development, making them more impressionable, impulsive, and less equipped to distinguish fact from fiction,” he wrote.

These concerns are compounded by an escalating youth mental health crisis, Bennet added. In 2021, more than half of teen girls reported feeling persistently sad or hopeless and one in three seriously contemplated suicide, according to a recent report from the Centers for Disease Control and Prevention.

“Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception and suggested self-harm,” the senator wrote.

Bennet’s letter comes as lawmakers from both parties are expressing growing concerns about technology’s impact on young users. Legislation aimed at safeguarding children’s online privacy has gained broad bipartisan support, and several other measures — ranging from a minimum age requirement for social media usage to a slew of regulations for tech companies — have been proposed.

Many industry experts have also called for increased AI regulation, noting that very little legislation currently governs the powerful technology.

Oversight Committee Members Concerned About New AI as Witnesses Propose Some Solutions

The federal government can examine generative AI algorithms and coordinate with states on AI labor training.

Photo of Eric Schmidt from December 2011 by Kmeron, used with permission

WASHINGTON, March 14, 2023 – In response to lawmakers’ concerns over the impacts of certain artificial intelligence technologies, experts said at an oversight subcommittee hearing on Wednesday that more government regulation would be necessary to stem their negative effects.

Relatively new machine learning technology known as generative AI, which is designed to create content on its own, has taken the world by storm. Specific applications such as the recently surfaced ChatGPT, which can write out entire novels from basic user inputs, have drawn both marvel and concern.

Such AI technology can be used to encourage cheating in academia as well as harm people through deepfakes, which use AI to superimpose a person’s likeness onto a video. Such AI can be used to produce “revenge pornography” to harass, silence and blackmail victims.

Aleksander Mądry, the Cadence Design Systems Professor of Computing at the Massachusetts Institute of Technology, told the subcommittee that AI is a very fast-moving technology, meaning the government needs to step in to confirm companies’ objectives and whether their algorithms match societal benefits and values. These generative AI technologies are often limited by their human programming and can also display biases.

Rep. Marjorie Taylor Greene, R-Ga., raised concerns about this type of AI replacing human jobs. Eric Schmidt, former Google CEO and now chair of the AI development initiative known as the Special Competitive Studies Project, said that if this AI can be well-directed, it can aid people in obtaining higher incomes and actually create more jobs.

To that point, Rep. Stephen Lynch, D-Mass., raised the question of how much progress the government has made in AI development, and how much it still needs.

Schmidt said governments across the country need to look at bolstering the labor force to keep up.

“I just don’t see the progress in government to reform the way of hiring and promoting technical people,” he said. “This technology is too new. You need new students, new ideas, new invention – I think that’s the fastest way.

“On the federal level, the easiest thing to do is to come up with some program that’s administered by the states or by leading universities and getting them money so that they can build these programs.”

Schmidt urged lawmakers last year to create a digital service academy to train more young American students on AI, cybersecurity and cryptocurrency, reported Axios.

Congress Should Focus on Tech Regulation, Said Former Tech Industry Lobbyist

Congress should shift focus from speech debates to regulation on emerging technologies, says expert.

Photo of Adam Conner, vice president of technology policy at American Progress

WASHINGTON, March 9, 2023 – Congress should focus on technology regulation, particularly for emerging technology, rather than speech debates, said Adam Conner, vice president of technology policy at American Progress, at Broadband Breakfast’s Big Tech and Speech Summit on Thursday.

Conner challenged the view of many in the industry who assume that any change to current laws, including Section 230, would only make the internet worse.

Conner, who aims to build a progressive technology policy platform and agenda, spent the past 15 years working in Washington for several Silicon Valley companies, including Slack Technologies and Brigade. In 2007, he founded Facebook’s Washington office.

That mindset, Conner argued, traps industry leaders in the assumption that the internet is currently the best it could ever be, which he called a fallacy. To move past it, he suggested the industry focus on regulation for new and emerging technology like artificial intelligence.

Recent AI innovations like ChatGPT create the most human-readable AI experience ever made through text, images and videos, Conner said. The penetration of AI will completely change the discussion about protecting free speech, he said, urging Congress to draft laws now to ensure its safe use in the United States.

Congress should start its AI regulation with privacy, antitrust and child safety laws, he said. Doing so will prove to American citizens that the internet can, in fact, be better than it is now and will promote future policy amendments.
