Artificial Intelligence
Staying Ahead On Artificial Intelligence Requires International Cooperation

March 4, 2021—Artificial intelligence is present in most facets of American digital life, but experts are in a constant race to identify and address potential dangers before they impact consumers.
From making a simple search on Google to listening to music on Spotify to streaming Tiger King on Netflix, AI is everywhere. Predictive algorithms learn from a consumer’s viewing habits and attempt to steer that consumer toward other content the algorithm expects they will enjoy.
While this can be extremely convenient for consumers, it also raises many concerns.
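To make the mechanism concrete, here is a minimal sketch of the “viewers who watched X also watched Y” heuristic in Python. The viewing histories and the scoring rule are invented for illustration; no platform’s actual recommendation system is public.

```python
from collections import Counter

# Hypothetical viewing histories: user -> set of titles watched.
histories = {
    "ana":  {"Tiger King", "Wild Wild Country", "Icarus"},
    "ben":  {"Tiger King", "Icarus", "The Keepers"},
    "cara": {"Wild Wild Country", "The Keepers"},
}

def recommend(user: str, k: int = 2) -> list[str]:
    """Suggest titles that co-occur with this user's history in
    other users' histories -- the simplest form of the
    'people who watched X also watched Y' idea."""
    seen = histories[user]
    counts = Counter()
    for other, titles in histories.items():
        if other == user or not (titles & seen):
            continue  # only learn from users with overlapping taste
        for title in titles - seen:
            counts[title] += 1
    return [title for title, _ in counts.most_common(k)]

print(recommend("ana"))  # -> ['The Keepers']
```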
Jaisha Wray, associate administrator for international affairs at the National Telecommunications and Information Administration, was a panelist at a conference hosted Tuesday by the Federal Communications Bar Association.
Wray identified three key areas of interest that are at the forefront of AI policy: content moderation, algorithm transparency, and the establishment of common-ground policies between foreign governments.
Beyond those consumer-facing uses, AI has also proven to be an indispensable tool for websites like Facebook, Alphabet’s YouTube, and myriad other social media platforms in auto-moderating their content. While most social media platforms employ humans to review various decisions made by AI (Facebook’s Oversight Board, for example), most content is first handled by AI moderators.
According to Tubefilter, in 2019 more than 500 hours of video content were uploaded to YouTube every minute; at that rate, a year’s worth of viewing (roughly 8,760 hours) is uploaded in under 18 minutes.
Content moderation, algorithm transparency, foreign alignment
On this scale, AI is necessary to police the website, even if it is not a perfect system. “[AI] is like a thread that’s woven into every issue that we work on and every venue,” Wray explained. She described how both governments and private entities have looked to AI to moderate not only relatively mundane matters such as copyright issues, but also national security issues like violent extremist content.
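In practice, AI-first moderation at this volume usually takes the form of a triage pipeline: a model scores every upload, acts automatically on the clear-cut cases, and routes only the uncertain middle band to human reviewers. A minimal sketch, with a stub classifier and made-up thresholds:

```python
def classifier_score(content: str) -> float:
    """Placeholder for a trained model: returns the estimated
    probability that the content violates policy."""
    return 0.5  # stub value for illustration

def triage(content: str) -> str:
    """Route content by model confidence. Only the uncertain middle
    band reaches human moderators, which is what keeps review
    feasible at hundreds of hours uploaded per minute."""
    score = classifier_score(content)
    if score >= 0.95:
        return "auto-remove"       # near-certain violation
    if score <= 0.05:
        return "auto-approve"      # near-certain benign
    return "human-review-queue"    # humans decide the hard cases

print(triage("example upload"))  # -> 'human-review-queue'
```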
Her second point pertained to algorithm transparency. She outlined how entities outside the U.S. have sought to address this concern by giving consumers the opportunity to have decisions about their content reviewed by humans before they become final. Wray pointed to the European General Data Protection Regulation, “which enshrines the principle that every person has the right not to be subject to a decision solely based on automated processing.”
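A hypothetical sketch of how a platform might honor that principle, letting a user pull any solely automated decision back into a human review queue (the names and structure here are illustrative, not drawn from the regulation or any product):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    content_id: str
    outcome: str            # e.g. "removed"
    automated: bool = True  # True if no human was involved

human_review_queue: list = []

def request_human_review(decision: Decision) -> Decision:
    """Give the affected person a way out of a purely automated
    decision: park it for a human before it becomes final."""
    if decision.automated:
        human_review_queue.append(decision)
        decision.outcome = "pending human review"
        decision.automated = False
    return decision

d = request_human_review(Decision("post-123", "removed"))
print(d.outcome)  # -> pending human review
```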
Her final point raised the issue of coordinating these efforts between different international jurisdictions—namely the U.S. and its allies. “We’re really trying to hone in on where our values align and where we can find common ground.” She added that coordination does not end with allies, however, and that it is key that the U.S. also coordinate with authoritarian regimes, allied or otherwise.
She said that the primary task facing the U.S. right now is simply determining which issues to prioritize when coordinating with foreign governments, whether that is the spread of AI, multilateral approaches to policing AI, or the use of AI by adversarial authoritarian regimes.
Technology needs to be built with security in mind
One of Wray’s co-panelists, Evelyn Remaley, who is the associate administrator for the NTIA’s Office of Policy Analysis and Development, said all multilateral cybersecurity efforts related to AI must be approached from a position of what she called a “zero-trust model.” She explained that this model operates from the presupposition that technology should not and cannot be trusted.
“We have to build in controls and standards from the bottom-up to make sure that we are building in the security layer by layer,” Remaley said. “It’s really that premise of ensuring that we realize that we’re always going to have vulnerabilities within this technical development space.”
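In code, the zero-trust premise means no request is waved through because of where it comes from; every call must independently prove identity and authority. A toy illustration (the tokens, roles, and checks are all invented for this sketch):

```python
VALID_TOKENS = {"token-abc": "analyst"}    # issued identities
PERMISSIONS = {("analyst", "read"): True}  # (role, action) -> allowed

def handle_request(token: str, action: str, from_internal_net: bool) -> str:
    """Zero trust: being on the 'internal' network grants nothing.
    Every request must authenticate and be authorized on its own."""
    role = VALID_TOKENS.get(token)
    if role is None:
        return "denied: unauthenticated"
    if not PERMISSIONS.get((role, action), False):
        return "denied: unauthorized"
    return "allowed"  # note: from_internal_net was never consulted

print(handle_request("token-abc", "read", from_internal_net=True))  # allowed
print(handle_request("stolen", "read", from_internal_net=True))     # denied
```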
Remaley said that increasing competition and collaboration can only be safely achieved with a zero-trust mindset.
Artificial Intelligence
Sen. Bennet Urges Companies to Consider ‘Alarming’ Child Safety Risks in AI Chatbot Race
Several leading tech companies have rushed to integrate their own AI-powered applications

WASHINGTON, March 22, 2023 — Sen. Michael Bennet, D-Colo., on Tuesday urged the companies behind generative artificial intelligence products to anticipate and mitigate the potential harms that AI-powered chatbots pose to underage users.
“The race to deploy generative AI cannot come at the expense of our children,” Bennet wrote in a letter to the heads of Google, OpenAI, Meta, Microsoft and Snap. “Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk and mitigate harm.”
In response to the explosive popularity of OpenAI’s ChatGPT, several leading tech companies have rushed to integrate their own AI-powered applications. Microsoft recently released an AI-powered version of its Bing search engine, and Google has announced plans to make a conversational AI service “widely available to the public in the coming weeks.”
Social media platforms have followed suit, with Meta CEO Mark Zuckerberg saying the company plans to “turbocharge” its AI development the same day Snapchat launched a GPT-powered chatbot called My AI.
These chatbots have already demonstrated “alarming” interactions, Bennet wrote. In response to a researcher posing as a child, My AI gave instructions for lying to parents about an upcoming trip with a 31-year-old man and for covering up a bruise ahead of a visit from Child Protective Services.
A Snap Newsroom post announcing the chatbot acknowledged that “as with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything.”
Bennet criticized the company for deploying My AI despite knowledge of its shortcomings, noting that 59 percent of teens aged 13 to 17 use Snapchat. “Younger users are at an earlier stage of cognitive, emotional, and intellectual development, making them more impressionable, impulsive, and less equipped to distinguish fact from fiction,” he wrote.
These concerns are compounded by an escalating youth mental health crisis, Bennet added. In 2021, more than half of teen girls reported feeling persistently sad or hopeless and one in three seriously contemplated suicide, according to a recent report from the Centers for Disease Control and Prevention.
“Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception and suggested self-harm,” the senator wrote.
Bennet’s letter comes as lawmakers from both parties are expressing growing concerns about technology’s impact on young users. Legislation aimed at safeguarding children’s online privacy has gained broad bipartisan support, and several other measures — ranging from a minimum age requirement for social media usage to a slew of regulations for tech companies — have been proposed.
Many industry experts have also called for increased AI regulation, noting that very little legislation currently governs the powerful technology.
Artificial Intelligence
Oversight Committee Members Concerned About New AI, As Witnesses Propose Some Solutions
Federal government can examine algorithms for generative AI, and coordinate with states on AI labor training.

WASHINGTON, March 14, 2023 – In response to lawmakers’ concerns over the impacts of certain artificial intelligence technologies, experts said at an oversight subcommittee hearing on Wednesday that more government regulation would be necessary to stem their negative effects.
Relatively new machine learning technology known as generative AI, which is designed to create content on its own, has taken the world by storm. Applications such as the recently released ChatGPT, which can produce extended prose from basic user prompts, have drawn both marvel and concern.
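For readers who have not tried these tools, a generative model simply maps a short prompt to new text. A minimal open-source example using Hugging Face’s transformers library and the small GPT-2 model, which is far less capable than ChatGPT but works the same way in outline:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, freely available generative model -- the same
# basic idea as ChatGPT at a fraction of the scale: text in, text out.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Once upon a time, in a quiet mountain town,",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```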
Such technology can be used to encourage cheating in academia, as well as to harm people through deepfakes, which use AI to superimpose a person’s likeness onto a video. Deepfakes can be used to produce “revenge pornography” to harass, silence and blackmail victims.
Aleksander Mądry, the Cadence Design Systems Professor at the Massachusetts Institute of Technology, told the subcommittee that AI is a very fast-moving technology, meaning the government needs to step in to examine companies’ objectives and whether their algorithms match societal benefits and values. These generative AI technologies are also limited by their human programming and can display biases.
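One concrete form such examination could take is a disparity audit of a model’s decisions. A toy sketch using the demographic-parity gap, with invented data (no specific audit standard is implied):

```python
from collections import defaultdict

# Hypothetical audit log: (demographic group, model's decision),
# where 1 is a favorable outcome and 0 is unfavorable.
decisions = [("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Demographic parity: do groups receive favorable outcomes at similar rates?
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)                                      # {'A': 0.67, 'B': 0.33}
print(max(rates.values()) - min(rates.values()))  # parity gap: ~0.33
```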
Rep. Marjorie Taylor Greene, R-Georgia, raised concerns about this type of AI replacing human jobs. Eric Schmidt, former Google CEO and now chair of the AI development initiative known as the Special Competitive Studies Project, said that if this AI can be well-directed, it can help people obtain higher incomes and actually create more jobs.
To that point, Rep. Stephen Lynch, D-Massachusetts, raised the question of how much progress the government has made, and how much it still needs to make, in AI development.
Schmidt said governments across the country need to look at bolstering the labor force to keep up.
“I just don’t see the progress in government to reform the way of hiring and promoting technical people,” he said. “This technology is too new. You need new students, new ideas, new invention – I think that’s the fastest way.
“On the federal level, the easiest thing to do is to come up with some program that’s ministered by the state or by leading universities and getting them money so that they can build these programs.”
Schmidt urged lawmakers last year to create a digital service academy to train more young American students on AI, cybersecurity and cryptocurrency, reported Axios.
Artificial Intelligence
Congress Should Focus on Tech Regulation, Said Former Tech Industry Lobbyist
Congress should shift focus from speech debates to regulation of emerging technologies, says expert.

WASHINGTON, March 9, 2023 – Congress should focus on technology regulation, particularly of emerging technology, rather than speech debates, said Adam Conner, vice president of technology policy at American Progress, at Broadband Breakfast’s Big Tech and Speech Summit Thursday.
Conner challenged the view of many in the industry who assume that any change to current laws, including Section 230, would only make the internet worse.
Conner, who aims to build a progressive technology policy platform and agenda, spent the past 15 years working in Washington for several Silicon Valley companies, including Slack Technologies and Brigade. In 2007, he founded Facebook’s Washington office.
That view, Conner argued, traps industry leaders in the assumption that the internet is currently the best it could ever be, which he called a fallacy. To escape that mindset, he suggested the industry focus on regulation for new and emerging technologies like artificial intelligence.
Recent AI innovations like ChatGPT create the most human-like AI experience yet through text, images, and videos, Conner said. The penetration of AI will completely change the discussion about protecting free speech, he said, urging Congress to draft laws now to ensure its safe use in the United States.
Congress should start its AI regulation with privacy, antitrust, and child safety laws, he said. Doing so will prove to American citizens that the internet can, in fact, be better than it is now and will promote future policy improvements, he said.