Artificial Intelligence

Copyright Pros Don’t Know What to Do About Authorless AI Paintings


Photo at the U.S. Copyright Office program on AI and copyright by David Jelke

WASHINGTON, February 5, 2020 – Intellectual property experts on Wednesday puzzled over questions of originality and attribution at a conference hosted at the Library of Congress on “Copyright in the Age of Artificial Intelligence.”

Ahmed Elgammal, a computer scientist at Rutgers University, dazzled attendees with a painting made by a computer algorithm that sold at auction for $432,500 in 2018, a work with no obvious human author.

Elgammal demonstrated what a pigeon crossed with a soda can would look like on the AI art website ArtBreeder. He also related the results of a perplexing study published by his laboratory, which found that 75 percent of human subjects could not distinguish a painting made by a generative adversarial network, a type of machine learning algorithm, from one made by a human. The percentage was even higher for works of abstract expressionism.
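For readers unfamiliar with the term, a generative adversarial network pits two models against each other: a generator that produces candidate samples and a discriminator that tries to tell generated samples from real ones. The sketch below is a toy illustration of that training loop only, written with PyTorch on invented two-dimensional data; it is not Elgammal's actual painting system, and every name in it is a placeholder.

```python
# Toy GAN sketch (illustration only, not Elgammal's system): the generator
# learns to mimic a simple 2-D Gaussian instead of paintings.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random "noise" vector the generator starts from

# Generator: turns random noise into a candidate sample (here, a 2-D point).
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))

# Discriminator: scores how likely a sample is to be real rather than generated.
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "real paintings": points drawn from a fixed distribution.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The same adversarial dynamic, scaled up to image data, is what allows such systems to produce paintings that human subjects struggle to tell apart from human-made ones.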

Rob Kasunic, associate register of copyrights at the U.S. Copyright Office of the Library of Congress, tried to provide answers to the questions of authorship brought up by Elgammal.

In doing so, he raised more questions: Does Congress have the constitutional authority to give copyright incentives for AI computer programs? Should Congress do so? Is copyright law even the correct vehicle for protecting AI output?

Precedent provides limited guidance to these questions, he said.

As a rule, the Copyright Office will not register works produced by nature, animals, or plants, Kasunic said. He offered examples of this rule, such as a monkey taking a selfie and the forces of erosion aesthetically shaping the contours of a piece of driftwood.

Computer programs, however, are different: at some level, a human is involved.

As Acting Register of Copyrights Maria Strong, an earlier speaker, had asked: “At what point does setting something into motion mean authorship?”

The conference had been opened by Francis Gurry, director general of the World Intellectual Property Organization. In addition to setting the stage for the conversation, Gurry alluded to other AI challenges facing copyright law, such as a publisher lawsuit against Amazon’s Audible over speech-to-text, and deepfakes involving actors.

Artificial Intelligence

Oversight Committee Members Concerned About New AI, As Witnesses Propose Some Solutions

The federal government can examine algorithms for generative AI and coordinate with states on AI labor training.


Photo of Eric Schmidt from December 2011 by Kmeron used with permission

WASHINGTON, March 14, 2023 – In response to lawmakers’ concerns over the impacts of certain artificial intelligence technologies, experts said at an oversight subcommittee hearing on Wednesday that more government regulation would be necessary to stem their negative impacts.

Relatively new machine learning technology known as generative AI, which is designed to create content on its own, has taken the world by storm. Specific applications such as the recently surfaced ChatGPT, which can write out entire novels from basic user inputs, have drawn both marvel and concern.

Such AI technology can be used to facilitate cheating in academia, as well as to harm people through deepfakes, which use AI to superimpose a person’s likeness onto a video. Deepfakes can be used to produce “revenge pornography” to harass, silence and blackmail victims.

Aleksander Mądry, Cadence Design Systems Professor at the Massachusetts Institute of Technology, told the subcommittee that AI is a fast-moving technology, meaning the government needs to step in to examine companies’ objectives and whether their algorithms align with societal benefits and values. These generative AI technologies are often limited by their human programming and can also display biases.

Rep. Marjorie Taylor Greene, R-Georgia, raised concerns about this type of AI replacing human jobs. Eric Schmidt, former Google CEO and now chair of the AI development initiative known as the Special Competitive Studies Project, said that if this AI can be well-directed, it can help people obtain higher incomes and actually create more jobs.

To that point, Rep. Stephen Lynch, D-Massachusetts, raised the question of how much progress the government has made in AI development, and how much it still needs to make.

Schmidt said governments across the country need to look at bolstering the labor force to keep up.

“I just don’t see the progress in government to reform the way of hiring and promoting technical people,” he said. “This technology is too new. You need new students, new ideas, new invention – I think that’s the fastest way.

“On the federal level, the easiest thing to do is to come up with some program that’s administered by the state or by leading universities and getting them money so that they can build these programs.”

Schmidt urged lawmakers last year to create a digital service academy to train more young American students on AI, cybersecurity and cryptocurrency, reported Axios.


Artificial Intelligence

Congress Should Focus on Tech Regulation, Said Former Tech Industry Lobbyist

Congress should shift focus from speech debates to regulation on emerging technologies, says expert.


Photo of Adam Conner, vice president of technology policy at American Progress

WASHINGTON, March 9, 2023 – Congress should focus on technology regulation, particularly for emerging technology, rather than speech debates, said Adam Conner, vice president of technology policy at American Progress, at Broadband Breakfast’s Big Tech and Speech Summit Thursday.

Conner challenged the view of many in the industry who assume that any change to current laws, including Section 230, would only make the internet worse.

Conner, who aims to build a progressive technology policy platform and agenda, spent the past 15 years working as a Washington employee for several Silicon Valley companies, including Slack Technologies and Brigade. In 2007, Conner founded Facebook’s Washington office.

That mindset, Conner argued, traps industry leaders in the assumption that the internet is currently the best it could ever be, which he called a fallacy. To avoid that trap, he suggested the industry focus on regulation for new and emerging technology like artificial intelligence.

Recent AI innovations like ChatGPT create the most human-readable AI experience ever made through text, images and videos, Conner said. The penetration of AI will completely change the discussion about protecting free speech, he said, urging Congress to draft laws now to ensure its safe use in the United States.

Congress should start its AI regulation with privacy, antitrust, and child safety laws, he said. Doing so will prove to American citizens that the internet can, in fact, be better than it is now and will promote future policy amendments, he said.



Artificial Intelligence

As ChatGPT’s Popularity Skyrockets, Some Experts Call for AI Regulation

As generative AI models grow more sophisticated, they present increasing risks.


Photo by Tada Images/Adobe Stock used with permission

WASHINGTON, February 3, 2023 — Just two months after its viral launch, ChatGPT reached 100 million monthly users in January, reportedly making it the fastest-growing consumer application in history — and raising concerns, both internal and external, about the lack of regulation for generative artificial intelligence.

Many of the potential problems with generative AI models stem from the datasets used to train them. The models will reflect whatever biases, inaccuracies and otherwise harmful content was present in their training data, but too much dataset filtering can detract from performance.
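To make the filtering trade-off concrete, here is a rough, invented sketch of the simplest kind of dataset filtering, dropping any document that contains a blocklisted term. The blocklist, corpus and threshold are all hypothetical, and real training pipelines are far more sophisticated, but the sketch shows why aggressive filtering also removes legitimate training signal.

```python
# Hypothetical keyword-based dataset filtering (illustration only).
BLOCKLIST = {"slur_a", "slur_b", "explicit_term"}  # invented placeholder terms

def keep_document(text: str, max_hits: int = 0) -> bool:
    """Keep a document only if it contains at most `max_hits` blocked terms."""
    words = text.lower().split()
    hits = sum(1 for w in words if w in BLOCKLIST)
    return hits <= max_hits

corpus = [
    "a harmless article about gardening",
    "a document containing slur_a alongside useful medical information",
]

filtered = [doc for doc in corpus if keep_document(doc)]
# The second document is dropped entirely, useful content included, which is
# one reason heavy-handed filtering can degrade model performance.
```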

OpenAI has grappled with these concerns for years while developing powerful, publicly available tools such as DALL·E — an AI system that generates realistic images and original art from text descriptions, said Anna Makanju, OpenAI’s head of public policy, at a Federal Communications Bar Association event on Friday.

“We knew right off the bat that nonconsensual sexual imagery was going to be a problem, so we thought, ‘Why don’t we just try to go through the dataset and remove any sexual imagery so people can’t generate it,’” Makanju said. “And when we did that, the model could no longer generate women, because it turns out most of the visual images that are available to train a dataset on women are sexual in nature.”

Despite rigorous testing before ChatGPT’s release, early users quickly discovered ways to evade some of the guardrails intended to prevent harmful uses.

The model would not generate offensive content in response to direct requests, but one user found a loophole by asking it to write from the perspective of someone holding racist views — resulting in several paragraphs of explicitly racist text. When some users asked ChatGPT to write code using race and gender to determine whether someone would be a good scientist, the bot replied with a function that only selected white men. Still others were able to use the tool to generate phishing emails and malicious code.

OpenAI quickly responded with adjustments to the model’s filtering algorithms, as well as increased monitoring.

“So far, the approach we’ve taken is we just try to stay away from areas that can be controversial, and we ask the model not to speak to those areas,” Makanju said.

The company has also attempted to limit certain high-impact uses, such as automated hiring. “We don’t feel like at this point we know enough about how our systems function and biases that may impact employment, or if there’s enough accuracy for there to be an automated decision about hiring without a human in the loop,” Makanju explained.

However, Makanju noted that future generative language models will likely reach a point where users can significantly customize them based on personal worldviews. At that point, strong guardrails will need to be in place to prevent the model from behaving in certain harmful ways — for example, encouraging self-harm or giving incorrect medical advice.

Those guardrails should probably be established by external bodies or government agencies, Makanju said. “We recognize that we — a pretty small company in Silicon Valley — are not the best place to make a decision of how this will be used in every single domain, as hard as we try to think about it.”

Little AI regulation currently exists

So far, the U.S. has very little legislation governing the use of AI, although some states regulate automated hiring tools. On Jan. 26, the National Institute of Standards and Technology released the first version of its voluntary AI risk management framework, developed at the direction of Congress.

This regulatory crawl is being rapidly outpaced by the speed of generative AI research. Google reportedly declared a “code red” in response to ChatGPT’s release, speeding the development of multiple AI tools. Chinese tech company Baidu is planning to launch its own AI chatbot in March.

Not every company will respond to harmful uses as quickly as OpenAI, and some may not even attempt to stop them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI. PAI is a nonprofit coalition that develops tools and recommendations for AI governance.

Various private organizations, including PAI, have laid out their own ethical frameworks and policy recommendations. There is ongoing discussion about the extent to which these organizations, government agencies and tech companies should be determining AI regulation, Leibowicz said.

“What I’m interested in is, who’s involved in that risk calculus?” she asked. “How are we making those decisions? What types of actual affected communities are we talking to in order to make that calculus? Or is it a group of engineers sitting in a room trying to forecast for the whole world?”

Leibowicz advocated for transparency measures such as requiring standardized “nutrition labels” that would disclose the training dataset for any given AI model — a proposal similar to the label mandate announced in November for internet service providers.
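No standardized schema for such labels exists yet; the proposal is still at the idea stage. As a purely speculative sketch of what a machine-readable label might disclose, the fields below are invented for illustration and do not reflect any actual standard or PAI proposal.

```python
# Speculative sketch of an AI "nutrition label" record (invented fields).
from dataclasses import dataclass, field

@dataclass
class ModelNutritionLabel:
    model_name: str
    developer: str
    training_data_sources: list   # e.g. ["public web crawl", "licensed book corpus"]
    data_cutoff_date: str         # most recent date covered by the training data
    known_filtering_steps: list   # e.g. ["deduplication", "adult-content removal"]
    known_limitations: list = field(default_factory=list)

label = ModelNutritionLabel(
    model_name="example-model-1",
    developer="Example Lab",
    training_data_sources=["public web crawl", "licensed book corpus"],
    data_cutoff_date="2022-09",
    known_filtering_steps=["deduplication", "toxicity keyword filtering"],
    known_limitations=["may reproduce biases present in web text"],
)
print(label)
```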

A regulatory framework should be implemented while these technologies are still being created, rather than in response to a future crisis, Makanju said. “It’s very clear that this technology is going to be incorporated into every industry in some way in the coming years, and I worry a little bit about where we are right now in getting there.”

