
Innovation

Telecommunication Industry Working Group Aims to End Robocalls Through Cryptographic Credentials

Elijah Labby


Photo of Iconectiv Chief Technology Officer Chris Drake by ITU Pictures used with permission

July 1, 2020 — Every day, Americans are inundated with millions of robocalls. But the Verifying Integrity in End-to-End Signaling Working Group seeks to put an end to them.

The working group, part of the GSM Association and chaired by network management company Iconectiv, aims to develop technologies that can identify and intercept internetwork signaling fraud, in which nefarious actors route their calls through online programs that make their numbers appear local, increasing the likelihood that recipients will answer.

Such calls can come at great cost to the recipient. If they accept the call, the number is deemed active and can be distributed to other robocallers. In some cases, robocallers will call individuals, allow the phone to ring once, and then hang up, hoping that recipients will return the call and be subject to expensive calling fees.

Technology developed by Iconectiv and other members of the VINES Working Group would log callers known to commit such abuses and warn recipients that the caller is a known scammer before the call connects.
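A minimal sketch of that screening step, written in Python with hypothetical numbers and names rather than the working group's actual system, might look like this:

```python
# Hypothetical sketch: screen an incoming call against a registry of numbers
# previously logged for signaling abuse and attach a warning before the call
# is presented to the recipient.

KNOWN_ABUSERS = {"+18005550147", "+37100005550"}  # made-up example numbers

def screen_call(calling_number: str) -> dict:
    """Return call metadata, flagging the caller if it is a logged abuser."""
    flagged = calling_number in KNOWN_ABUSERS
    return {
        "calling_number": calling_number,
        "warn_recipient": flagged,
        "label": "Known scammer" if flagged else None,
    }

print(screen_call("+18005550147"))
# {'calling_number': '+18005550147', 'warn_recipient': True, 'label': 'Known scammer'}
```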

Chris Drake, chief technology officer at Iconectiv, says that the company’s innovations are doing “a lot to contribute to the end of robocalling.”

The majority of such calls come from places where “frankly, the various aspects of government enforcement look the other way,” Drake said.

He cited Caribbean countries, Somalia, and Eastern European countries such as Latvia and Russia as being particularly high abusers of robocall and rerouting technology.

However, methods of ending robocalls are not simply about stopping false calls but also verifying legitimate ones, Drake said.

Iconectiv’s platform verifies businesses that subscribe to the service by providing an alphanumeric code or other text that cannot be replicated, proving that the call is coming from a legitimate source.

“The reason [we] use a cryptographic credential is the bad guy couldn’t come and claim that [he’s] been verified and get into the carrier’s channel, because he doesn’t have the credentials cryptographically to present himself as Iconectiv,” Drake said.
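The principle Drake describes can be illustrated with an ordinary digital signature. The Python sketch below, which uses the third-party `cryptography` package, is only a simplified illustration of the idea and not Iconectiv's actual scheme: a verified provider signs an assertion about a call's origin with its private key, a carrier checks it against the provider's public key, and an imposter without that key cannot produce a signature that verifies.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The verified provider holds a private key; carriers hold the matching public key.
provider_key = Ed25519PrivateKey.generate()
public_key = provider_key.public_key()

# The provider signs an assertion about the call's origin (illustrative format).
assertion = b"orig=+12025550123;provider=ExampleProvider"
signature = provider_key.sign(assertion)

# A carrier verifies the assertion before trusting the caller identity.
try:
    public_key.verify(signature, assertion)
    print("Caller assertion verified")
except InvalidSignature:
    print("Rejected: credential does not verify")

# An imposter without the provider's private key cannot forge a valid signature.
imposter_key = Ed25519PrivateKey.generate()
try:
    public_key.verify(imposter_key.sign(assertion), assertion)
except InvalidSignature:
    print("Rejected: forged credential")
```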

Drake said that Iconectiv and other members of the VINES Working Group worked closely with the Federal Communications Commission on earlier iterations of what eventually became the TRACED Act, but he said there are still legislative gaps he would like to see addressed, such as a right to revoke consent to legal calling lists.

A consent-revocation capability, similar to those used for email mailing lists, would be useful “if you’ve ever tried to get off a list when someone calls, if you answered and you find out that’s some kind of pitch, or worse, you asked to get off the list and it feels like the next day you’re on ten more lists,” he said.

A provision for such a law was in earlier drafts of anti-robocalling legislation but failed to survive Congressional negotiations.

Drake also said that there should be legislation that requires the identification of companies participating in mass calling practices.

However, Drake said that attempting to stop robocalling in the United States is a difficult task.

“[They’re] very clever about trying to avoid being recognized for a pattern… they rotate numbers, all kind of tricks,” he said. “…VINES is looking at a way of testing that an actual call is happening from one network to another.”

Elijah Labby was a Reporter with Broadband Breakfast. He was born in Pittsburgh, Pennsylvania and now resides in Orlando, Florida. He studies political science at Seminole State College, and enjoys reading and writing fiction (but not for Broadband Breakfast).

Artificial Intelligence

Staying Ahead On Artificial Intelligence Requires International Cooperation

Benjamin Kahn


Screenshot from the webinar

March 4, 2021—Artificial intelligence is present in most facets of American digital life, but experts are in a constant race to identify and address potential dangers before they impact consumers.

From making a simple search on Google to listening to music on Spotify to streaming Tiger King on Netflix, AI is everywhere. Predictive algorithms learn from a consumer’s viewing habits and attempt to direct them to other content the algorithm thinks they will be interested in.
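As a rough illustration of that kind of prediction, the short Python sketch below, using made-up viewing data rather than any platform's actual algorithm, recommends the titles that most often co-occur with what a user has already watched:

```python
from collections import Counter

# Hypothetical co-viewing data: each set is one account's viewing history.
VIEW_HISTORIES = [
    {"Tiger King", "Wild Wild Country"},
    {"Tiger King", "Wild Wild Country", "Making a Murderer"},
    {"Tiger King", "Wild Wild Country"},
]

def recommend(seen, k=2):
    """Suggest unseen titles that co-occur most often with titles already seen."""
    counts = Counter()
    for history in VIEW_HISTORIES:
        if seen & history:                 # this account shares taste with the user
            counts.update(history - seen)  # tally the titles the user hasn't seen
    return [title for title, _ in counts.most_common(k)]

print(recommend({"Tiger King"}))
# ['Wild Wild Country', 'Making a Murderer']
```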

While this can be extremely convenient for consumers, it also raises many concerns.

Jaisha Wray, associate administrator for international affairs at the National Telecommunications and Information Administration, was a panelist at a conference hosted Tuesday by the Federal Communications Bar Association.

Wray identified three key areas of interest that are at the forefront of AI policy: content moderation, algorithm transparency, and the establishment of common-ground policies with foreign governments.

In addition to all the aforementioned uses for AI, it has also proven to be an indispensable tool for websites like Facebook, Alphabet’s YouTube, and myriad other social media platforms in auto-moderating their content. While most social media platforms employ humans to review various decisions made by AI (such as Facebook’s Oversight Board), most content is first handled by AI moderators.

According to Tubefilter, in 2019 more than 500 hours of video content were uploaded to YouTube every minute; at that rate, a year’s worth of content is uploaded in less than 20 minutes.
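That pace is easy to check with back-of-the-envelope arithmetic:

```python
# At 500+ hours of video uploaded per minute, how long until a year's worth
# of content (8,760 hours) has been uploaded?
hours_per_minute = 500
hours_in_a_year = 365 * 24                 # 8,760 hours

minutes_for_a_year = hours_in_a_year / hours_per_minute
print(minutes_for_a_year)                  # 17.52 minutes
```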

Content moderation, algorithm transparency, foreign alignment

On this scale, AI is necessary to police the website, even if it is not a perfect system. “[AI] is like a thread that’s woven into every issue that we work on and every venue,” Wray explained. She described how both governments and private entities have looked to AI to moderate not only relatively mundane matters such as copyright issues, but also national security concerns like violent extremist content.

Her second point pertained to algorithm transparency. She outlined how entities outside of the U.S. have sought to address this concern by providing consumers with the opportunity to have their content reviewed by humans before a final decision is made. Wray pointed to the European General Data Protection Regulation, “which enshrines the principle that every person has the right not to be subject to a decision solely based on automated processing.”

Her final point raised the issue of coordinating these efforts between different international jurisdictions, namely the U.S. and its allies. “We’re really trying to hone in on where our values align and where we can find common ground.” She added that coordination does not end with allies, however, and that it is key that the U.S. also coordinate with authoritarian regimes, allied or otherwise.

She said that the primary task facing the U.S. right now is simply trying to determine which issues are worth prioritizing when it comes to coordinating with foreign governments—whether that is addressing the spread of AI, how to police AI multilaterally, or how to address the use of AI by adversarial authoritarian regimes.

Technology needs to be built with security in mind

One of Wray’s co-panelists, Evelyn Remaley, who is the associate administrator for the NTIA’s Office of Policy Analysis and Development, said all multilateral cybersecurity efforts related to AI must be approached from a position of what she called a “zero-trust model.” She explained that this model operates from the presupposition that technology should not and cannot be trusted.

“We have to build in controls and standards from the bottom-up to make sure that we are building in the security layer by layer,” Remaley said. “It’s really that premise of ensuring that we realize that we’re always going to have vulnerabilities within this technical development space.”
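In practice, the zero-trust premise means authenticating and authorizing every request at every layer rather than trusting where the request came from. The Python sketch below is a deliberately simplified, hypothetical illustration of that check; real deployments rely on mutual TLS or public-key infrastructure rather than a single shared secret.

```python
import hashlib
import hmac

SHARED_KEY = b"per-service-secret"  # hypothetical; stands in for real credentials

def sign_request(payload: bytes) -> str:
    """Produce the credential a caller must present with its request."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def handle_request(payload: bytes, presented_tag: str, source: str) -> str:
    # Zero trust: being on the "internal" network grants nothing by itself;
    # every request is verified before it is served.
    if not hmac.compare_digest(sign_request(payload), presented_tag):
        return f"denied ({source}): credential check failed"
    return f"allowed ({source}): credential verified"

payload = b"GET /training-data"
print(handle_request(payload, sign_request(payload), source="internal"))
print(handle_request(payload, "forged-credential", source="internal"))
```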

Remaley said that increasing competition and collaboration can only be safely achieved with a zero-trust mindset.


Copyright

Public Knowledge Celebrates 20 Years of Helping Congress Get a Clue on Digital Rights

Derek Shumway


Screenshot of Gigi Sohn from Public Knowledge's 20th anniversary event

February 27, 2021 – The non-profit advocacy group Public Knowledge celebrated its twentieth anniversary year in a Monday event revolving around the issues the group has made its hallmark: copyright, open standards, and other digital rights.

Group founder Gigi Sohn, now a senior fellow and public advocate at the Benton Institute for Broadband and Society, said that through her professional relationship with Laurie Racine, now president of Racine Strategy, she became “appointed and anointed” to help start the interest group.

Together with David Bollier, who had also worked with Sohn on public interest projects in broadcast media and is now director of the Reinventing the Commons program at the Schumacher Center for a New Economics, the two cofounded a small and scrappy Public Knowledge that has become a non-profit powerhouse.

The secret sauce? Timing, which couldn’t have been better, said Sohn. Free office space at the New America Foundation’s Dupont Circle offices, provided by Steve Clemons and the late Ted Halstead, then head of the foundation, was instrumental in Public Knowledge’s launch.

The cofounders met with major challenges, Sohn and others said. The nationwide tragedy of September 11, 2001, occurred weeks after the group’s official founding. The group continued its advocacy of what was then more commonly known as “open source,” a grandparent of today’s “net neutrality,” she said.

In the aftermath of September 11, a bill by the late Sen. Ernest “Fritz” Hollings, D-S.C., demonstrated a bid by large copyright interests to force technology companies to effectively become the copyright police. Additional copyright-maximalist measures were launched almost every month, she said.

Public Knowledge grew into something larger than was probably imagined by the three co-founders. Still, they shared setbacks and losses that accompanied their successes and wins.

“We would form alliances with anybody, which meant that sometimes we sided with internet service providers [on issues like copyright] and sometimes we were against them [on issues like telecom],” said Sohn. An ingredient in the interest group’s success was its desire to work with everyone.

Congress didn’t have a clue on digital rights

What drove the trio together was a shared view that “Congress had no vision for the future of the internet,” explained Sohn.

Much of the group’s early work was spent explaining to Congress how digitization works, she said. The 2000s were a time of great activity and massive growth in the digital industry, and lawmakers on the Hill were not well acquainted with screens, computers, and the internet. The founders took on the role of explaining to members of Congress what their constituents’ interests were when it came to digitization.

Public Knowledge helped popularize digital issues, and by “walking [digital information] across the street to [Capitol Hill] at the time,” the group “created an operational reality with digitization,” said Bollier.

Racine remarked about the influence Linux software maker Red Hat had during its 2002 initial public offering. She said the founders of Red Hat pushed open source beyond a business model and into a philosophy in ways that hadn’t been done before.

During the early days of Public Knowledge, all sorts of technology now considered legacy was being rolled out. Apple’s iTunes, Windows XP, and the first Xbox launched, and Nokia and Sony were the leaders in cellphones at the time, marking the rise of technology in the coming digital age.

Racine said consumers needed someone in Washington who could represent their interests amid the new software and hardware and embrace the idea of open source technologies for the future.

Also speaking at the event was Public Knowledge CEO Chris Lewis, who said the group was at the forefront of new technology issues, already holding symposiums before Congress on 3D printing, a technology totally unfamiliar at the time.


Artificial Intelligence

Connectivity Will Need To Keep Up With The Advent Of New Tech, Says Expert

Samuel Triginelli


Screenshot from the webinar

February 24, 2021 – It used to be that home technology had to keep up with the growing ubiquity of broadband. But the pace of technological advancement in the home is starting a conversation about whether connectivity can keep up.

That’s according to Shawn DuBravac, an economist and author of a book about how big data will transform our everyday lives, who argues that the pandemic has illustrated the need for broader connections in the home to meet the needs of future technologies. He was speaking on Tuesday at the conference of NTCA – The Rural Broadband Association.

Emerging consumer technologies, such as Samsung’s robots, which will perform tasks including loading a dishwasher, serving wine, and setting a dinner table, are redefining the conversation about how connectivity at home will manage them, DuBravac argues.

Health companies are also introducing “companion robots” focused on interacting with seniors. With their artificial intelligence and sensors, these robots develop a personality and adapt to the needs of consumers so that social distancing does not become a disadvantage for care.

The pandemic has also grown the telehealth industry. With more people avoiding hospitals, connected watches, belts, and scales that share information with medical professionals are further driving the need for better broadband connectivity.

But it’s not as if the industry isn’t paying attention. Mesh network technologies, which use multiple router-like devices to extend coverage inside the home, have emerged just as smart-home devices exposed the limits of single-router Wi-Fi, whose signals degrade as they pass through walls.
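A rough path-loss calculation illustrates the effect. The Python sketch below uses simplified, assumed numbers, a log-distance model with a flat per-wall penalty, rather than measurements from any particular product:

```python
import math

def received_dbm(tx_dbm, distance_m, walls, loss_exponent=3.0, wall_loss_db=5.0):
    """Very rough received-signal estimate: log-distance path loss plus per-wall loss."""
    # ~40 dB reference loss at 1 m is an assumed indoor value for 2.4 GHz.
    path_loss = 40 + 10 * loss_exponent * math.log10(max(distance_m, 1.0))
    return tx_dbm - path_loss - walls * wall_loss_db

# A single router across the house versus a mesh node in the same room.
print(round(received_dbm(20, distance_m=15, walls=3), 1))  # about -70 dBm
print(round(received_dbm(20, distance_m=4, walls=0), 1))   # about -38 dBm
```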
