Artificial Intelligence
Scott Heric: Robots Benefit Industrial Processes Most by Enhancing the Efforts of Humans
It is time to understand the impact robots have and the best route to using them to optimize labor practices.

If you have had a cup of coffee lately, you have probably been served by a robot. It may not have been a “baristabot” that took your order or handed you your latte at your local coffee shop, but somewhere along the line from bean to breve, an intelligent machine most likely played a role in producing your coffee.
Employing robots and other intelligent machines in industrial processes is part of a movement that is often referred to as the automation revolution. While it promises to shape the future of many industries, it is not futuristic.
Intelligent machines are already being employed in ways we never thought possible a few years ago. And now is the time to understand the impact they can have and the best route to using them to optimize labor practices.
Are robots taking over the workplace?
Presently, “robot density per employee,” which is a measure used to gauge the degree to which automation is being embraced, stands at 126 robots per 10,000 employees. While that may seem small, it is more than double the number recorded in 2015, a trend that has some concerned.
In early 2020, the Massachusetts Institute of Technology issued a report titled “Work of the Future” that was developed in part to address growing anxiety related to the automation revolution.
In its coverage of the report, MIT Technology Review explained the anxiety in this way: “There’s a growing fear among many American workers that they’re about to be replaced by technology, whether that’s a robot, a more efficient computing system, or a self-driving truck.”
While a robot revolution resulting in a large-scale displacement of human workers is a popular concern that has been explored in an endless number of science fiction movies, it misses the broader potential of an automation revolution. Robots benefit industrial processes most by enhancing the efforts of human workers, not by replacing them.
A recent report by The Wharton School at the University of Pennsylvania shows that organizations that increase their automation through the use of robots typically hire more workers. This results from robots enhancing productivity, which grows business and drives demand for non-robotic jobs. Wharton found that jobs were cut more often at companies that had not embraced the automation revolution. By resisting automation, they fell behind competitors, lost business, and had to let employees go.
What are the next steps?
This new paradigm of robots playing a more integral role in the workplace will not develop in a vacuum. Politically and culturally, people will need to accept intelligent machines and adapt accordingly. The automation revolution will require a shift not only in the way we work, but also in the way we think about work.
In the 1980s, computers entered the workplace. Some resisted, seeing the new technology as a tool that would be used to supplant the systems that were in place at that time.
Today, very few workplaces could survive without computers. Rather than supplanting systems, computers became a tool to optimize systems. Rather than displacing workers, they created a new universe of jobs.
Robots and other intelligent machines offer the same potential to those who are willing to see them as a tool that can be wielded to increase efficiency and productivity. Those who resist will watch from the sidelines as the automation revolution advances.
Scott Heric, Co-Founder of Unionly, has years of experience helping organizations to raise funds online. He helped develop sales and account management for Avvo, growing from 30 to 500 people over seven years. Heric then took a chief of staff role at Snap Mobile Inc., where he oversaw development of the product, marketing, sales, and account management, leading to the company becoming a leading digital fundraising platform in higher education. His company Unionly was acquired in January of 2020. This piece is exclusive to Broadband Breakfast.
Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.
Sen. Bennet Urges Companies to Consider ‘Alarming’ Child Safety Risks in AI Chatbot Race
Several leading tech companies have rushed to integrate their own AI-powered applications

WASHINGTON, March 22, 2023 — Sen. Michael Bennet, D-Colo., on Tuesday urged the companies behind generative artificial intelligence products to anticipate and mitigate the potential harms that AI-powered chatbots pose to underage users.
“The race to deploy generative AI cannot come at the expense of our children,” Bennet wrote in a letter to the heads of Google, OpenAI, Meta, Microsoft and Snap. “Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk and mitigate harm.”
In response to the explosive popularity of OpenAI’s ChatGPT, several leading tech companies have rushed to integrate their own AI-powered applications. Microsoft recently released an AI-powered version of its Bing search engine, and Google has announced plans to make a conversational AI service “widely available to the public in the coming weeks.”
Social media platforms have followed suit, with Meta CEO Mark Zuckerberg saying the company plans to “turbocharge” its AI development the same day Snapchat launched a GPT-powered chatbot called My AI.
These chatbots have already demonstrated “alarming” interactions, Bennet wrote. In response to a researcher posing as a child, My AI gave instructions for lying to parents about an upcoming trip with a 31-year-old man and for covering up a bruise ahead of a visit from Child Protective Services.
A Snap Newsroom post announcing the chatbot acknowledged that “as with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything.”
Bennet criticized the company for deploying My AI despite knowledge of its shortcomings, noting that 59 percent of teens aged 13 to 17 use Snapchat. “Younger users are at an earlier stage of cognitive, emotional, and intellectual development, making them more impressionable, impulsive, and less equipped to distinguish fact from fiction,” he wrote.
These concerns are compounded by an escalating youth mental health crisis, Bennet added. In 2021, more than half of teen girls reported feeling persistently sad or hopeless and one in three seriously contemplated suicide, according to a recent report from the Centers for Disease Control and Prevention.
“Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception and suggested self-harm,” the senator wrote.
Bennet’s letter comes as lawmakers from both parties are expressing growing concerns about technology’s impact on young users. Legislation aimed at safeguarding children’s online privacy has gained broad bipartisan support, and several other measures — ranging from a minimum age requirement for social media usage to a slew of regulations for tech companies — have been proposed.
Many industry experts have also called for increased AI regulation, noting that very little legislation currently governs the powerful technology.
Oversight Committee Members Concerned About New AI, As Witnesses Propose Some Solutions
Federal government can examine algorithms for generative AI, and coordinate with states on AI labor training.

WASHINGTON, March 14, 2023 – In response to lawmakers’ concerns over the impacts of certain artificial intelligence technologies, experts said at an oversight subcommittee hearing on Wednesday that more government regulation will be necessary to stem their negative impacts.
Relatively new machine learning technology known as generative AI, which is designed to create content on its own, has taken the world by storm. Specific applications, such as the recently released ChatGPT, which can write entire novels from basic user prompts, have drawn both marvel and concern.
Such AI technology can be used to facilitate cheating in academia, and it can harm people through deepfakes, which use AI to superimpose a person’s likeness onto a video. Deepfakes can be used to produce “revenge pornography” to harass, silence and blackmail victims.
Aleksander Mądry, the Cadence Design Systems Professor at the Massachusetts Institute of Technology, told the subcommittee that AI is a fast-moving technology, meaning the government needs to step in to scrutinize companies’ objectives and whether their algorithms align with societal benefits and values. Generative AI technologies are limited by their human programming and can also display biases.
Rep. Marjorie Taylor Greene, R-Georgia, raised concerns about this type of AI replacing human jobs. Eric Schmidt, former Google CEO and now chair of the AI development initiative known as the Special Competitive Studies Project, said that if this AI can be well-directed, it can help people obtain higher incomes and actually create more jobs.
To that point, Rep. Stephen Lynch, D-Massachusetts, raised the question of how much progress the government has made, or still needs to make, in AI development.
Schmidt said governments across the country need to look at bolstering the labor force to keep up.
“I just don’t see the progress in government to reform the way of hiring and promoting technical people,” he said. “This technology is too new. You need new students, new ideas, new invention – I think that’s the fastest way.
“On the federal level, the easiest thing to do is to come up with some program that’s administered by the state or by leading universities and getting them money so that they can build these programs.”
Schmidt urged lawmakers last year to create a digital service academy to train more young American students on AI, cybersecurity and cryptocurrency, reported Axios.
Congress Should Focus on Tech Regulation, Said Former Tech Industry Lobbyist
Congress should shift focus from speech debates to regulation on emerging technologies, says expert.

WASHINGTON, March 9, 2023 – Congress should focus on technology regulation, particularly for emerging technology, rather than speech debates, said Adam Conner, vice president of technology policy at American Progress, at Broadband Breakfast’s Big Tech and Speech Summit on Thursday.
Conner challenged the view of many in industry who assume that any change to current laws, including section 230, would only make the internet worse.
Conner, who aims to build a progressive technology policy platform and agenda, spent the past 15 years working in Washington for several Silicon Valley companies, including Slack Technologies and Brigade. In 2007, Conner founded Facebook’s Washington office.
This mindset, Conner argued, traps industry leaders in the assumption that the internet is currently the best it could ever be, which he called a fallacy. To move past it, Conner suggested that the industry focus on regulation for new and emerging technology like artificial intelligence.
Recent AI innovations like ChatGPT create the most human-readable AI experience yet through text, images and videos, Conner said. The spread of AI will completely change the discussion about protecting free speech, he said, urging Congress to draft laws now to ensure its safe use in the United States.
Congress should start its AI regulation with privacy, antitrust, and child safety laws, he said. Doing so will prove to American citizens that the internet can, in fact, be better than it is now and will promote future policy amendments, he said.