Artificial Intelligence

Panelists, Including Facebook Executive, Call For Increased Content Moderation

WASHINGTON, July 22, 2019 — The primary responsibility of moderating online platforms lies with the platforms themselves, making Section 230 protections essential, said panelists at New America’s Open Technology Institute on Thursday.

Although some policymakers are attempting to solve the problem of digital content moderation, Open Technology Institute Director Sarah Morris noted that the First Amendment limits the government’s ability to regulate speech, leaving platforms to handle “the vast majority of decision making.”

“Washington lawmakers don’t have the capacity to address these challenges,” said Francella Ochillo, executive director of Next Century Cities.

It’s up to tech companies to do more than they are currently doing to tackle hate speech on their platforms, said David Snyder, executive director of the First Amendment Coalition.

Facebook Public Policy Manager Shaarik Zafar acknowledged that the tech community needs to do a better job of enforcing its policies—and of creating those policies in the first place.

Content moderation is an extremely difficult process, he said. Although algorithms are fairly good at detecting terrorist content and child exploitation, other issues can be more difficult, such as distinguishing journalists and activists who raise awareness of atrocities from people who glorify violent extremism.

No single solution can eliminate hate speech, and any solution found will have to be frequently revisited and updated, said Ochillo. But that doesn’t mean that platforms and others shouldn’t make a significant effort, she said, pointing out that people suffer real-world secondary effects from hateful content posted online.

Zafar emphasized Facebook’s commitment to onboarding additional civil rights expertise as the platform continues to tackle the problem of hate speech.

He also highlighted Facebook’s recently announced external oversight board, which will be made up of a diverse group of experts with experience in content, privacy, free expression, human rights, safety, and other relevant disciplines.

Facebook would defer to the board on difficult content moderation questions, said Zafar, and would follow its recommendations even when company executives disagree.

But as companies take steps to fine-tune and enforce their terms of service, transparency is of the utmost importance, Snyder said.

Content moderation algorithms should be made public so that independent researchers can test them for bias, suggested Sharon Franklin, OTI’s director of surveillance and cybersecurity policy.

Franklin also highlighted the Santa Clara Principles, a set of guidelines for transparency and accountability in content moderation. The principles call on companies to publish the numbers of posts removed and accounts suspended, provide notice to users whose content or account is removed, and create a meaningful appeal process.

Allowing content moderation under Section 230 of the Communications Decency Act has spurred innovation and made it possible for individuals and companies to have access to massive audiences through social media, said Zafar.

Without those protections, he continued, companies might choose to forgo content moderation altogether, leaving all sorts of hate speech, misinformation, and spam on the platforms to the point that they might actually become unusable.

The other potential danger of repealing the law would be companies erring on the side of caution and over-enforcing their policies, said Franklin. Section 230 actually leads to less censorship, she said, because it allows for nuanced content moderation.

The Open Technology Institute supports Section 230 and is very concerned about the recent attacks that have been made on it, Franklin added.

Section 230 is “far from perfect,” said Snyder, but it’s much better than any of the plans proposed to modify it, and better than not having it at all.

Facebook and other platforms give voice to a wide range of ideologies, and people from all backgrounds are able to successfully gain significant followings, said Zafar, emphasizing that the company’s purpose is to serve everybody.

(Photo of New America event by Emily McPhie.)

Development Associate Emily McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for campus publication Student Life. She is a founding board member of Code Open Sesame, an organization that teaches computer skills to underprivileged children in six cities across Southern California.

Artificial Intelligence

Int’l Ethical Framework for Autonomous Drones Needed Before Wide-Scale Implementation

Observers say the risks inherent in letting autonomous drones roam require an ethical framework.

Timothy Clement-Jones was a member of the U.K. Parliament's committee on artificial intelligence

July 19, 2021 — Autonomous drones could potentially serve as a replacement for military dogs in future warfare, said GeoTech Center Director David Bray during a panel discussion hosted by the Atlantic Council last month, but ethical concerns have observers clamoring for a framework for their use.

Military dogs, trained to assist soldiers on the battlefield, are currently a great asset to the military. AI-enabled autonomous systems, such as drones, are developing capabilities that would allow them to assist in the same way — for example, inspecting inaccessible areas and detecting fires and leaks early to minimize the chance of on-the-job injuries.

However, concerns have been raised about such systems’ ability to impact human lives, including the recent issue of an autonomous drone possibly hunting down humans in asymmetric warfare and anti-terrorist operations.

As artificial intelligence continues to develop at a rapid rate, society must determine what, if any, limitations should be implemented on a global scale. “If nobody starts raising the questions now, then it’s something that will be a missed opportunity,” Bray said.

Sally Grant, vice president at Lucd AI, agreed with Bray’s concerns, pointing out the controversies surrounding the uncharted territory of autonomous drones. Panelists proposed the possibility of an international limitation agreement with regard to AI-enabled autonomous systems that can exercise lethal force.

Timothy Clement-Jones, who was a member of the U.K. Parliament’s committee on artificial intelligence, called for international ethical guidelines, saying, “I want to see a development of an ethical risk-based approach to AI development and application.”

Many panelists emphasized the immense risk involved if this technology falls into the wrong hands. They provided examples ranging from terrorist groups to the paparazzi, and the power such actors could wield with that much access.

Training is vital, Grant said, and soldiers need to feel comfortable with this machinery without becoming over-reliant on it. The idea of deploying AI-enabled autonomous systems on missions, including during natural disasters, is that soldiers can use them as guidance to make the most informed decisions.

“AI needs to be our servant not our master,” Clement-Jones agreed, emphasizing that soldiers can use it as a tool to help them, not as guidance to be followed blindly. He compared AI technology with the use of phone navigation, pointing to the importance of keeping a map in the glove compartment in case the technology fails.

The panelists emphasized the importance of remaining transparent and developing an international agreement with an ethical risk-based approach to AI development and application in these technologies, especially if they might enter the battlefield as a reliable companion someday.

Artificial Intelligence

Deepfakes Could Pose A Threat to National Security, But Experts Are Split On How To Handle It

Experts disagree on the right response to video manipulation — is more tech or a societal shift the right solution?

Rep. Anthony Gonzalez, R-Ohio

June 3, 2021 — The emerging phenomenon of video manipulation known as deepfakes could pose a threat to the country’s national security, policymakers and technology experts said at an online conference Wednesday, but how best to address them divided the panel.

A deepfake is a highly technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in a way that creates the illusion that they were actually there. A well-done deepfake can make a person appear to do things that they never actually did and say things that they never actually said.

“The way the technology has evolved, it is literally impossible for a human to actually detect that something is a deepfake,” said Ashish Jaiman, the director of technology operations at Microsoft, at an online event hosted by the Information Technology and Innovation Foundation.

Experts are wary of the implications of this technology becoming increasingly available to the general population, but how best to address the brewing dilemma has them split. Some believe better technology aimed at detecting deepfakes is the answer, while others say that a shift in social perspective is necessary. Others argue that such a societal shift would be dangerous, and that the solution actually lies in the hands of journalists.

Deepfakes pose a threat to democracy

Such technology posed no problem when only Hollywood had the means to produce such impressive special effects, says Rep. Anthony Gonzalez, R-Ohio, but it has progressed to a point that allows almost anybody to get their hands on it. He says that with the spread of disinformation, and the challenges that poses to establishing a well-informed public, deepfakes could be weaponized to spread lies and affect elections.

As of yet, however, no evidence exists that deepfakes have been used for this purpose, according to Daniel Kimmage, the acting coordinator for the Global Engagement Center of the Department of State. But he, along with the other panelists, agrees that the technology could be used to influence elections and further already growing seeds of mistrust in the information media. They believe it’s best to act preemptively and solve the problem before it becomes a crisis.

“Once people realize they can’t trust the images and videos they’re seeing, not only will they not believe the lies, they aren’t going to believe the truth,” said Dana Rao, executive vice president of software company Adobe.

New technology as a solution

Jaiman says Microsoft has been developing sophisticated technologies aimed at detecting deepfakes for over two years now. Deborah Johnson, emeritus technology professor at the University of Virginia School of Engineering, refers to this method as an “arms race,” in which we must develop technology that detects deepfakes at a faster rate than the deepfake technology progresses.

But Jaiman was the first to admit that, despite Microsoft’s hard work, detecting deepfakes remains a grueling challenge. It is much harder to detect a deepfake than it is to create one, he said. He believes that a societal response is necessary, and that technology alone will be insufficient to address the problem.

Societal shift as a solution

Jaiman argues that people need to be skeptical consumers of information. He believes that until the technology catches up and deepfakes can be detected and misinformation snuffed out more easily, people need to approach online information with the perspective that they could easily be deceived.

But critics believe this approach of encouraging skepticism could be problematic. Gabriela Ivens, the head of open source research at Human Rights Watch, says that “it becomes very problematic if people’s first reactions are not to believe anything.” Ivens’ job revolves around researching and exposing human rights violations, but she says that the growing mistrust of media outlets will make it harder for her to gain the necessary public support.

She believes that a “zero-trust society” must be resisted.

Vint Cerf, the vice president and chief internet evangelist at Google, says that it is up to journalists to prevent the growing spread of distrust. He accused journalists not of deliberately lying, but of oftentimes misleading the public. He believes that the true risk of deepfakes lies in their ability to corrode America’s trust in truth, and that it is up to journalists to restore the trust that is already beginning to erode by being completely transparent and honest in their reporting.

Artificial Intelligence

Complexity, Lack of Expertise Could Hamper Economic Benefits Of Artificial Intelligence

Artificial intelligence is said to open up a new age of economic development, but its complexity could hamper its rollout.

Keith Strier of NVIDIA

May 24, 2021 — One of the great challenges to adopting artificial intelligence is the lack of understanding of it, according to a panel hosted by the Atlantic Council’s new GeoTech Center.

The panel last week discussed the economic benefits of AI and how global policy leaders can leverage it to achieve sustainable economic growth with government buy-in. But getting governments excited and getting them to actually do something about it are two completely different tasks.

That’s because there exists little government understanding of or planning around this emerging market, according to Keith Strier, vice president of worldwide AI initiatives at NVIDIA, a tech company that designs graphics processing units.

If the trend continues, the consequences could be felt globally, widening the world’s economic divide and even posing national security threats, he said.

“AI is the new critical infrastructure… It’s about the future of GDP,” said Strier.

Lack of understanding stems from complexity 

The reason for a lack of government understanding stems from the complexity of AI research, and the lack of consensus among experts, Strier said. He noted that the metrics used to quantify AI performance are “deceptively complex” and technical. Experts struggle to even find consensus on defining AI, only adding to its already intrinsic complexity.

This divergence in expert opinion makes the research markedly difficult to break down and communicate to policy makers in digestible, useful ways.

“Policy is just not evidence based,” Strier said. “It’s not well informed.”

World economic divide could widen 

Charles Jennings, AI entrepreneur and founder of internet technology company NeuralEye, warned of AI’s potential to widen the economic divide worldwide.

Currently, the 500 fastest computers in the world are split among just 29 countries, leaving the remaining 170 struggling to produce computing power. As computers become faster, the countries best suited to reap the economic benefits will do so at a rate that far outpaces less developed countries.

Jennings also believes that there are security issues associated with the lack of AI understanding in government, claiming that the public’s increasing dependence on the technology, coupled with a lack of regulation, could lead to a public safety threat. He is adamant that it’s time to bridge the gap between enterprise and policy.

Strier says there are three essential questions governments must answer: How much domestic AI compute capacity do we have? How does this compare to other nations? Do we have enough capacity to support our national AI ambitions?

Answering these questions would help governments address the AI question in terms of their own national values and interests. This would help create a framework that could mitigate potential negative consequences.
