January 14, 2021 – Artificial intelligence will continue to gradually revolutionize productivity and functionality as long as policy concerns are addressed, said three separate panels of experts on Tuesday at CES 2021.
Panelists projected that AI technologies will contribute $16 trillion to the economy by 2030. Better machines, better software, and an explosion of applications affecting everyday lives will continue. AI has enhanced productivity, improved safety, and made the world more accessible.
Because AI enhances rather than replaces human work, it depends on enormous sets of curated data that must be handled under established guidelines and with transparency.
IBM Vice President Bridget Karlin raised concerns about ethics and about avoiding built-in bias. Depending on companies’ purposes for using AI, models can be ethical when they are engineered to be fair and properly calibrated.
It is incumbent upon software developers to identify the requirements for ethical and non-biased data collection, she said.
What about AI’s involvement in creating and spreading fake news? Kevin Guo, CEO of Hive, said addressing this remains a work in progress.
It is essential for AI engineers to research and implement data fairness and remove bias.
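One common way engineers audit for the kind of bias the panelists described is to compare a model’s outcomes across demographic groups. Below is a minimal, hypothetical sketch of a demographic-parity check; the data, group names, and threshold are illustrative and not drawn from any panelist’s remarks.

```python
# Hypothetical fairness audit: demographic-parity check on model outputs.
# The predictions and group labels below are made up for illustration.

def positive_rate(predictions):
    """Fraction of positive (1) predictions in a list of 0/1 outcomes."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

preds = {
    "group_a": [1, 0, 1, 1, 0, 1],   # 4/6 positive outcomes
    "group_b": [0, 0, 1, 0, 0, 1],   # 2/6 positive outcomes
}

gap = demographic_parity_gap(preds)
print(f"parity gap: {gap:.2f}")  # a large gap would be flagged for review
```

A real audit would use many more metrics (equalized odds, calibration within groups) and far larger samples, but the basic discipline — measure outcomes per group before deployment — is the kind of requirement Karlin said developers must identify.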
Impacts of AI on health care
On health care, participants in a separate panel said AI leads to improved outcomes and lower costs. But to Christina Silcox, a digital health policy fellow, the question is: How can people trust something that can’t be seen or understood?
Understanding how technologies are created helps, said Jesse Ehrenfeld of the Board of Trustees of the American Medical Association. But he acknowledged that all data is in some way biased, and said he couldn’t count the number of times data flows have generated different meanings than expected.
Silcox said it is critical to understand how software will work over time after being put into place.
Indeed, communication and transparency are key for trust in and growth of AI, said Senior Regulatory Specialist Pat Baird. Depending on who the stakeholders are, such communication will need to be customized.
Trustworthy AI will re-humanize health care, letting computers do what they were built to do and allowing health care workers to work with people, he said.
What about gender and racial bias?
Another panel during CES 2021 discussed gender and racial bias in a business setting – and how AI can help show and reflect equal representation.
Annie Jean-Baptiste of Google said that humble inclusion fuels innovation. Kimberly Sterling of ResMed declared that people are not going anywhere, and that AI will not replace people’s brains.
All three panel discussions pondered the future of AI. Most agreed that AI exists to supplement human ambition, enabling everyday businesses to become smarter, adjust to new inputs and perform human-like tasks.
To break open the “black box” of AI, panelists proposed building richer data sets, understanding where the data comes from, going back to test the models, and looking holistically at outcomes.
AI Should Complement and Not Replace Humans, Says Stanford Expert
AI that strictly imitates human behavior can make workers superfluous and concentrate power in the hands of employers.
WASHINGTON, November 4, 2022 – Artificial intelligence should be developed primarily to augment the performance of, not replace, humans, said Erik Brynjolfsson, director of the Stanford Digital Economy Lab, at a Wednesday web event hosted by the Brookings Institution.
AI that complements human efforts can increase wages by driving up worker productivity, Brynjolfsson argued. AI that strictly imitates human behavior, he said, can make workers superfluous – thereby lowering the demand for workers and concentrating economic and political power in the hands of employers – in this case the owners of the AI.
“Complementarity (AI) implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making,” he wrote in an essay earlier this year.
What’s more, designing AI to mimic existing human behaviors limits innovation, Brynjolfsson argued Wednesday.
“If you are simply taking what’s already being done and using a machine to replace what the human’s doing, that puts an upper bound on how good you can get,” he said. “The bigger value comes from creating an entirely new thing that never existed before.”
Brynjolfsson argued that AI should be crafted to reflect desired societal outcomes. “The tools we have now are more powerful than any we had before, which almost by definition means we have more power to change the world, to shape the world in different ways,” he said.
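Brynjolfsson’s complement-versus-substitute distinction can be illustrated with a stylized toy model (my own simplification for illustration, not his): if output is Y = A·√L, the wage equals labor’s marginal product, 0.5·A/√L. Complementary AI raises productivity A, lifting wages; imitative AI acts like a perfect substitute for workers, expanding effective labor supply and pushing the wage down.

```python
# Stylized toy model of complement vs. substitute AI.
# An illustrative simplification, not Brynjolfsson's own formulation.

def wage(A, labor):
    """Marginal product of labor for output Y = A * sqrt(labor)."""
    return 0.5 * A / labor ** 0.5

# Complementary AI: raises productivity A, so the wage rises.
w_baseline   = wage(A=1.0, labor=100)
w_complement = wage(A=2.0, labor=100)

# Imitative AI: machines act as perfect substitutes for workers, so
# effective labor is workers + machines and each worker's wage falls.
w_substitute = wage(A=1.0, labor=100 + 300)

print(w_baseline, w_complement, w_substitute)
```

In this sketch the complementary case doubles the wage while the substitution case halves it, mirroring the bargaining-power argument in the quoted essay.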
The AI Bill of Rights
In October, the White House released a blueprint for an “AI Bill of Rights.” The document condemned algorithmic discrimination on the basis of race, sex, religion, or age and emphasized the importance of user privacy. It also endorsed system transparency with users and suggested the use of human alternatives to AI when feasible.
To fully align with the blueprint’s standards, Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, argued at a recent Brookings event that the nation must develop a larger AI workforce.
Workforce Training Needed to Address Artificial Intelligence Bias, Researchers Suggest
Building on the Blueprint for an AI Bill of Rights by the White House Office of Science and Technology Policy.
WASHINGTON, October 24, 2022 – To align with the newly released White House guide on artificial intelligence, Stanford University’s policy director said at a Brookings Institution event last week that there needs to be more social and technical workforce training to address artificial intelligence biases.
Released on October 4 by the White House’s Office of Science and Technology Policy, the Blueprint for an AI Bill of Rights framework sets out five principles for companies to follow to ensure the protection of consumer rights from automated harm.
AI algorithms rely on learning users’ behavior and disclosed information to customize services and advertising. Because of this, algorithms can send targeted information or enforce discriminatory eligibility practices based on race or class status, according to critics.
Risk mitigation, which prevents algorithm-based discrimination in AI technology, is listed as an ‘expectation of an automated system’ under the “safe and effective systems” section of the White House framework.
Experts at the Brookings virtual event believe that workforce development is the starting point for professionals to learn how to identify risk and build the capacity to fulfill this need.
“We don’t have the talent available to do this type of investigative work,” Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, said at the event.
“We just don’t have a trained workforce ready, and so what we really need to do, I think, is invest in the next generation now and start giving people tools and access and the ability to learn how to do this type of work.”
Nicol Turner-Lee, senior fellow at the Brookings Institution, agreed with Wald, recommending sociologists, philosophers and technologists get involved in the process of AI programming to align with algorithmic discrimination protections – another core principle of the framework.
Implementing the core principles and protections suggested in this framework would require lawmakers to create new policies or fold them into current safety requirements or civil rights laws. Each principle includes three sections covering the principle itself, automated systems, and practice by government entities.
In July, Adam Thierer, senior research fellow at the Mercatus Center of George Mason University, stated that he is “a little skeptical that we should create a regulatory AI structure,” and instead proposed educating workers on how to set best practices for risk management, calling it an “educational institution approach.”
Deepfakes Pose National Security Threat, Private Sector Tackles Issue
Content manipulation can include misinformation from authoritarian governments.
WASHINGTON, July 20, 2022 – Content manipulation techniques known as deepfakes are concerning policy makers and forcing the public and private sectors to work together to tackle the problem, a Center for Democracy and Technology event heard on Wednesday.
A deepfake is a technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in such a way that creates the illusion that they were actually there. Policymakers are concerned that deepfakes could pose a threat to the country’s national security as the technology is being increasingly offered to the general population.
Deepfake concerns that policymakers have identified, said participants at Wednesday’s event, include misinformation from authoritarian governments, faked compromising and abusive images, and illegal profiting from faked celebrity content.
“We should not and cannot have our guard down in cyberspace,” said Representative John Katko, R-N.Y., ranking member of the House Committee on Homeland Security.
Adobe pitches technology to identify deepfakes
Software company Adobe released an open-source toolkit to counter deepfake concerns earlier this month, said Dana Rao, executive vice president of Adobe. The company’s Content Credentials feature is a technology developed over three years that tracks changes made to images, videos, and audio recordings.
Content Credentials is now an opt-in feature in the company’s photo editing software Photoshop that it says will help establish credibility for creators by adding “robust, tamper-evident provenance data about how a piece of content was produced, edited, and published,” read the announcement.
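The core idea behind “tamper-evident provenance data” can be sketched with a simple hash chain over edit records: each link commits to everything before it, so altering any earlier step changes every later hash. This is a generic illustration of the principle, not Adobe’s actual Content Credentials format (which is built on the C2PA specification).

```python
# Generic sketch of tamper-evident provenance via a hash chain.
# Illustrates the principle only -- not Adobe's Content Credentials format.
import hashlib

def chain_hash(prev_hash: str, record: str) -> str:
    """Hash committing to the previous link plus the new edit record."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

def build_chain(records):
    """Return the list of chained hashes for an ordered edit history."""
    h = ""
    links = []
    for record in records:
        h = chain_hash(h, record)
        links.append(h)
    return links

edits = ["imported photo.raw", "cropped 16:9", "adjusted exposure +0.3"]
chain = build_chain(edits)

# Changing any earlier record changes every later hash, so tampering
# is detectable by re-deriving the chain from the claimed history.
tampered = build_chain(["imported other.raw", "cropped 16:9",
                        "adjusted exposure +0.3"])
print(chain[-1] != tampered[-1])  # True
```

A verifier that trusts only the final published hash can thus detect any rewrite of the edit history, which is what makes the provenance record “tamper-evident” rather than merely descriptive.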
Adobe’s Content Authenticity Initiative is dedicated to addressing the problems of establishing trust after the damage caused by deepfakes. “Once we stop believing in true things, I don’t know how we are going to be able to function in society,” said Rao. “We have to believe in something.”
As part of its initiative, Adobe is working with the public sector in supporting the Deepfake Task Force Act, which was introduced in August 2021. If adopted, the bill would establish a national deepfake and digital provenance task force composed of members from the private sector, public sector, and academia to address disinformation.
For now, said Cailin Crockett, senior advisor to the White House Gender Policy Council, it is important to educate the public on the threat of disinformation.