
Artificial Intelligence

Scott Heric: Robots Benefit Industrial Processes Most by Enhancing the Efforts of Humans

It is time to understand the impact robots have and the best route to using them to optimize labor practices.


The author of this Expert Opinion is Scott Heric, co-founder of Unionly

If you have had a cup of coffee lately, you have probably been served by a robot. It may not have been a “baristabot” that took your order or handed you your latte at your local coffee shop, but somewhere along the line from bean to breve, an intelligent machine most likely played a role in producing your coffee.

Employing robots and other intelligent machines in industrial processes is part of a movement that is often referred to as the automation revolution. While it promises to shape the future of many industries, it is not futuristic.

Intelligent machines are already being employed in ways we never thought possible a few years ago. And now is the time to understand the impact they can have and the best route to using them to optimize labor practices.

Are robots taking over the workplace?

Presently, “robot density per employee,” which is a measure used to gauge the degree to which automation is being embraced, stands at 126 robots per 10,000 employees. While that may seem small, it is more than double the number recorded in 2015, a trend that has some concerned.

In early 2020, the Massachusetts Institute of Technology issued a report titled “Work of the Future” that was developed in part to address a growing anxiety related to the automation revolution.

In its coverage of the report, MIT Technology Review explained the anxiety in this way: “There’s a growing fear among many American workers that they’re about to be replaced by technology, whether that’s a robot, a more efficient computing system, or a self-driving truck.”

While a robot revolution resulting in a large-scale displacement of human workers is a popular concern that has been explored in an endless number of science fiction movies, it misses the broader potential of an automation revolution. Robots benefit industrial processes most by enhancing the efforts of human workers, not by replacing them.

A recent report by The Wharton School at the University of Pennsylvania shows that organizations that increase their automation through the use of robots typically hire more workers. This results from robots enhancing productivity, which grows the business and increases demand for non-robotic jobs. Wharton found that jobs were cut more often in companies that had not embraced the automation revolution. By resisting automation, they fell behind competitors, lost business, and had to let employees go.

What are the next steps?

This new paradigm of robots playing a more integral role in the workplace will not develop in a vacuum. Politically and culturally, people will need to accept intelligent machines and adapt accordingly. The automation revolution will require a shift not only in the way we work, but also in the way we think about work.

In the 1980s, computers entered the workplace. Some resisted, seeing the new technology as a tool that would be used to supplant the systems that were in place at that time.

Today, very few workplaces could survive without computers. Rather than supplanting systems, computers became a tool to optimize systems. Rather than displacing workers, they created a new universe of jobs.

Robots and other intelligent machines offer the same potential to those who are willing to see them as a tool that can be wielded to increase efficiency and productivity. Those who resist will watch from the sidelines as the automation revolution advances.

Scott Heric, Co-Founder of Unionly, has years of experience helping organizations raise funds online. He helped develop sales and account management for Avvo, growing the team from 30 to 500 people over seven years. Heric then took a chief of staff role at Snap Mobile Inc., where he oversaw development of the product, marketing, sales, and account management, leading the company to become a leading digital fundraising platform in higher education. His company Unionly was acquired in January of 2020. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.


Artificial Intelligence

Deepfakes Pose National Security Threat, Private Sector Tackles Issue

Content manipulation can include misinformation from authoritarian governments.


Photo of Dana Rao of Adobe and Paul Lekas of Global Policy (left to right)

WASHINGTON, July 20, 2022 – Content manipulation techniques known as deepfakes are concerning policymakers and forcing the public and private sectors to work together to tackle the problem, attendees heard at a Center for Democracy and Technology event on Wednesday.

A deepfake is a technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion that the person was actually there. Policymakers are concerned that deepfakes could pose a threat to the country’s national security as the technology is increasingly offered to the general population.

Deepfake concerns that policymakers have identified, said participants at Wednesday’s event, include misinformation from authoritarian governments, faked compromising and abusive images, and illegal profiting from faked celebrity content.

“We should not and cannot have our guard down in the cyberspace,” said Representative John Katko, R-NY, ranking member of the House Committee on Homeland Security.

Adobe pitches technology to identify deepfakes

Software company Adobe released an open-source toolkit to counter deepfake concerns earlier this month, said Dana Rao, executive vice president of Adobe. The company’s Content Credentials feature is a technology developed over three years that tracks changes made to images, videos, and audio recordings.

Content Credentials is now an opt-in feature in the company’s photo editing software Photoshop that it says will help establish credibility for creators by adding “robust, tamper-evident provenance data about how a piece of content was produced, edited, and published,” read the announcement.

Adobe’s Content Authenticity Initiative is dedicated to addressing the erosion of trust caused by deepfakes. “Once we stop believing in true things, I don’t know how we are going to be able to function in society,” said Rao. “We have to believe in something.”

As part of its initiative, Adobe is working with the public sector in supporting the Deepfake Task Force Act, which was introduced in August of 2021. If adopted, the bill would establish a National Deepfake and Digital Task Force composed of members from the private sector, public sector, and academia to address disinformation.

For now, said Cailin Crockett, senior advisor to the White House Gender Policy Council, it is important to educate the public on the threat of disinformation.


Artificial Intelligence

Should the Federal Government Regulate Artificial Intelligence?

Two experts were on opposite sides of the debate about how to mitigate the downsides of AI.


Screenshot of the panel at the Bipartisan Policy Center event Tuesday

WASHINGTON, July 12, 2022 – Representatives from academia and a nonprofit diverged at a Bipartisan Policy Center event Tuesday about whether the government should step in and minimize problems associated with artificial intelligence, including bias and discrimination in algorithms.

“We really do want actors to help us establish national and international guidelines,” said Miriam Vogel, president and CEO of EqualAI, a nonprofit that seeks to reduce bias in AI. “We are driving full speed without lanes, without speed limits to manage the expectations.”

While acknowledging the benefits of AI in society today, Vogel said its algorithms present risks that often lead to bias and discrimination. She shared the example of how recognition systems can miss certain voices or skin tones.

AI is used in various sectors and powers algorithms that cater services to individuals. Panelists referenced the use of AI algorithms in suspect identification for criminal justice, in disease diagnosis in health care, and for movie and employment recommendations.

Vogel said regulation will establish clear expectations for AI companies to minimize such risks.

Adam Thierer, a senior research fellow at the Mercatus Center at George Mason University, said he is “a little skeptical that we should create a regulatory AI structure” and instead proposed educating workers on how to set best practices for risk management. He called this an “educational institution approach.”

He said that because of how long federal law takes to enact, he wants to reach AI workers directly, such as the computer programmers and AI innovators “of tomorrow” to do a better job of “baking best practices” into AI.

“I think baking best practice principles in by design begins with an educational focus,” said Thierer.

Thierer said he wants to give this job to trusted third parties to suggest pathways forward, including ethical evaluations and consultations with AI companies. He said that when it comes to AI rules across different sectors, “we don’t need one overarching standard to rule them all.”

Thierer added that because of how fast AI is changing, “it can’t go through the same regulatory process.” He argued that if regulation is put in place, AI innovators will be lost.

Vogel disagreed with Thierer, saying she doesn’t believe there is a risk of losing innovators by regulating AI. Instead, she said, “I see regulation is the partner to innovation.”

She said that because there is no government regulation for AI, companies are left to do it themselves if they choose, referencing the Badge Program at EqualAI that seeks to help companies navigate risks.

“We need to have a governance system put in place to make sure continual testing is taking place,” said Vogel.


Artificial Intelligence

FTC Commissioner Says Agency Report on AI for Online Harms Did Not Consult Outside Experts

The FTC released a report that warned about the dangers of AI’s use to combat online harms.


Photo of FTC Commissioner Noah Phillips

WASHINGTON, June 22, 2022 – Federal Trade Commissioner Noah Phillips said last week that a report by the commission about the use of artificial intelligence to tackle online harms did not consult outside experts as Congress asked.

The FTC’s “Combatting Online Harms through Innovation” report – approved by a 4-1 vote and sent to Congress on June 16 – warns against using AI as a policy solution for online problems, citing inherent design flaws, bias and discrimination, and commercial surveillance concerns. The commission concluded that adopting AI could introduce additional harms.

However, the report also found that, amid the use of AI by Big Tech platforms to address online harms, “lawmakers should consider focusing on developing legal frameworks that would ensure that AI tools do not cause additional harm.”

The one dissenting opinion on the report was from Phillips, who said the FTC did not do the study that was required by Congress. As part of the 2021 Consolidated Appropriations Act, Congress asked the FTC to conduct a study on how artificial intelligence could address online harms such as fake reviews, hate crimes, harassment, and child sexual abuse.

“I do not believe we conducted the requisite study, and I do not think the report on AI issued by the Commission takes sufficient care to answer the questions Congress asked,” Phillips said in his dissenting statement.

Phillips said the report mainly focuses on the technology of AI itself and lacks the outside perspective from individuals and companies who use AI and try to combat the harms of AI online, which he said is “precisely what Congress asked us to evaluate.”

Phillips added that in the 12 months the FTC was given to complete this study, “rather than use this time to solicit input from all relevant stakeholders, the Commission chose to conduct a kind of literature review.”

Phillips said in his statement that he would have liked to see interviews of market participants or surveys conducted, neither of which was included in the report, and added that he is concerned about the “quantity of self-reference” used by the FTC in the report.

“Still, we should at least endeavor to produce a report that reflects the full diversity of experiences and viewpoints on these important issues concerning AI.” Phillips also noted the report doesn’t include a serious cost-benefit analysis of using AI to combat online harms.
