
Privacy

EU’s Digital Services Act May Be a Model for the United States

The Digital Services Act imposes transparency requirements and other accountability measures for tech platforms.


Photo of Mathias Vermeulen, public policy director at the AWO Agency, obtained from Flickr.

September 16, 2022 – The European Union’s Digital Services Act, particularly its data-sharing requirements, may become the model for future American tech policy, said Mathias Vermeulen, public policy director at the AWO Agency, at a German Marshall Fund web panel Monday.

Now in the final stages of becoming law, the DSA aims to create a safer internet by introducing transparency requirements and other accountability measures for covered platforms. Of note to the German Marshall Fund panelists was the DSA’s provision that, when cleared by regulators, “very large online platforms” – e.g., Facebook and Twitter – must provide data to third-party researchers for the purpose of ensuring DSA compliance.

In addition, the EU’s voluntary Code of Practice on Disinformation was unveiled in June; it commits signatory platforms to combat disinformation by introducing bot-elimination schemes, demonetizing sources of alleged misinformation, and labeling political advertisements, among other measures. Signatories of the Code of Practice – including American tech giants Google Search, LinkedIn, Meta, Microsoft Bing, and Twitter – also agreed to proactively share data with researchers.

Vermeulen said that he expects the EU will soon draft new legislation to address the privacy concerns raised by the Digital Services Act’s data-sharing requirements.

The risks of large-scale data sharing

To protect user privacy, the DSA requires data handed over to researchers to be anonymized. Many experts believe, however, that “anonymous” data is generally traceable back to its source. Even the EU’s recommendations on data-anonymization best practices acknowledge the inherent privacy risks:

“Data controllers should consider that an anonymised dataset can still present residual risks to data subjects. Indeed, on the one hand, anonymisation and re-identification are active fields of research and new discoveries are regularly published, and on the other hand even anonymised data, like statistics, may be used to enrich existing profiles of individuals, thus creating new data protection issues.”
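That risk is easy to make concrete: stripping names from a dataset does little if quasi-identifiers such as ZIP code, birth year, and sex can be joined against a public record. The toy sketch below illustrates such a linkage attack; every name, value, and field in it is invented for illustration.

```python
# Toy linkage attack: re-identify an "anonymised" record by joining its
# quasi-identifiers (ZIP code, birth year, sex) against a public roll.
# All names, values, and fields here are invented for illustration.
anonymised_health = [
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "flu"},
]
public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "birth_year": 1965, "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

for record in anonymised_health:
    for voter in public_voter_roll:
        # A unique match on all quasi-identifiers re-attaches a name
        # to the supposedly anonymous record.
        if all(record[key] == voter[key] for key in QUASI_IDENTIFIERS):
            print(f"Re-identified {voter['name']}: {record['diagnosis']}")
```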

An essay from the Brookings Institution – generally supportive of the DSA’s data-sharing provisions – argues that many private researchers lack the experience necessary to securely store sensitive data, recommending that the European Commission establish or subsidize secure centralized databases.

Artificial Intelligence

Rep. Suzan DelBene: Want Protection From AI? The First Step Is a National Privacy Law

A national privacy standard would ensure a baseline set of protections and would restrict companies from storing and selling personal data.


The author of this Expert Opinion is Suzan DelBene, U.S. Representative from Washington.

In the six months since a new chatbot confessed its love for a reporter before taking a darker turn, the world has woken up to how artificial intelligence can dramatically change our lives and how it can go awry. AI is quickly being integrated into nearly every aspect of our economy and daily lives. However, in our nation’s capital, laws aren’t keeping up with the rapid evolution of technology.

Policymakers have many decisions to make around artificial intelligence, including how it can be used in sensitive areas such as financial markets, health care, and national security. They will need to decide intellectual property rights around AI-created content. There will also need to be guardrails to prevent the dissemination of mis- and disinformation. But before we build the second and third stories of this regulatory house, we need to lay a strong foundation, and that foundation must be a national data privacy standard.

To understand this bedrock need, it’s important to look at how artificial intelligence was developed. AI needs an immense quantity of data. The generative language tool ChatGPT was trained on 45 terabytes of data, or the equivalent of over 200 days’ worth of HD video. That information may have included our posts on social media and online forums that have likely taught ChatGPT how we write and communicate with each other. That’s because this data is largely unprotected and widely available to third-party companies willing to pay for it. AI developers do not need to disclose where they get their input data from because the U.S. has no national privacy law.
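As a rough sanity check of that comparison – assuming HD video at about 20 megabits per second, a plausible 1080p bitrate that is our assumption rather than the article’s – the arithmetic works out:

```python
# Back-of-the-envelope check of "45 TB is over 200 days of HD video."
# The bitrate below is an assumption (roughly 1080p quality), not a
# figure from the article.
TRAINING_DATA_BYTES = 45e12        # 45 terabytes
HD_BITRATE_BITS_PER_SEC = 20e6     # 20 Mbit/s, assumed

seconds = TRAINING_DATA_BYTES * 8 / HD_BITRATE_BITS_PER_SEC
days = seconds / 86_400
print(f"~{days:.0f} days of HD video")  # ~208 days
```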

While data studies have existed for centuries and can have major benefits, they are generally centered on consent to use that information. Medical studies often use patient health data and outcomes, but that information requires the approval of the study participants in most cases. That’s because in the 1990s Congress gave health information a basic level of protection, but that law only covers data shared between patients and their health care providers. The same is not true for other health platforms, like fitness apps, or for most other data we generate today, including our conversations online and geolocation information.

Currently, the companies that collect our data are in control of it. Google for years scanned Gmail inboxes to sell users targeted ads, before abandoning the practice. Zoom recently had to update its data collection policy after it was accused of using customers’ audio and video to train its AI products. We’ve all downloaded an app on our phones and immediately accepted the terms and conditions without actually reading them. Companies can and often do change the terms regarding how much of our information they collect and how they use it. A national privacy standard would ensure a baseline set of protections no matter where someone lives in the U.S. and restrict companies from storing and selling our personal data.

Ensuring there’s transparency and accountability in what data goes into AI is also important for a quality, responsible product. If input data is biased, we’re going to get a biased outcome, or, better put, “garbage in, garbage out.” Facial recognition is one application of artificial intelligence. These systems have largely been trained on data from white people, which has led to clear biases when communities of color interact with the technology.

The United States must be a global leader on artificial intelligence policy, but other countries are not waiting while we sit still. The European Union has moved faster on AI regulation because its privacy law, the General Data Protection Regulation, took effect in 2018. The Chinese government has also moved quickly on AI, but in an alarmingly anti-democratic way. If we want a seat at the international table to set the long-term direction for AI that reflects our core American values, we must have our own national data privacy law to start.

The Biden administration has taken some encouraging steps to begin putting guardrails around AI, but it is constrained by Congress’ inaction. The White House recently announced voluntary artificial intelligence standards, which include a section on data privacy. Voluntary guidelines, however, don’t come with accountability, and the federal government can only enforce the rules on the books, which are woefully outdated.

That’s why Congress needs to step up and set the rules of the road. A strong national privacy standard must be uniform throughout the country, rather than the state-by-state patchwork we have now. It has to put people, not companies, back in control of their information. It must also be enforceable so that the government can hold bad actors accountable. These are the components of the legislation I have introduced over the past few Congresses and of the bipartisan proposal the Energy & Commerce Committee advanced last year.

As with all things in Congress, it comes down to a matter of priorities. With artificial intelligence expanding so fast, we can no longer wait to take up this issue. We were behind on technology policy already, but we fall further behind as other countries take the lead. We must act quickly and set a robust foundation. That has to include a strong, enforceable national privacy standard.

Congresswoman Suzan K. DelBene represents Washington’s 1st District in the United States House of Representatives. This piece was originally published in Newsweek, and is reprinted with permission. 

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

 


Robocall

FCC’s Proposed Rules on Robotexts Will Limit Wireless Providers’ Effectiveness: Industry

The rules would limit communications that help customers access emergency and government services, providers say.


Photo of Gregory Romano of AT&T

WASHINGTON, August 18, 2023 – Commenters argue that proposed Federal Communications Commission rules seeking to give voice consumers more control over robocalls and robotexts would have harmful consequences by limiting providers’ ability to communicate with their customers.

The FCC released a notice of proposed rulemaking in June that would strengthen consumers’ ability to revoke consent to receive robocalls and robotexts. It would ensure consumers can easily revoke consent, require that callers honor do-not-call requests within 24 hours, and give wireless consumers the option to stop robocalls and robotexts from their own wireless provider.

ACA International, a trade group for the debt collection industry, in conjunction with the Credit Union National Association, recommended that the FCC codify reasonable limits on the methods of revoking consent for robocalls and texts.

The proposed rules would “ensure that revocation of consent does not require the use of specific words or burdensome methods” and would codify a 2015 ruling that consumers who have provided consent may revoke it through any reasonable means. ACA International and CUNA asked the FCC to acknowledge the realities of revocation processes.

“Automated processes cannot be programmed to recognize a virtually infinite combination of words and phrases that could reasonably be interpreted as a clear expression of consumers’ desire to stop further communications,” the groups said. The FCC should specify “reasonable means that callers can prescribe, such as a limited set of keywords that are common synonyms of STOP, which is the universally recognized method to prevent further text messages.”
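To illustrate what a “limited set of keywords” might look like in practice, here is a minimal sketch of keyword-based opt-out detection; the keyword list is hypothetical and not taken from any filing.

```python
# Minimal sketch of keyword-based opt-out detection for inbound texts.
# The keyword set is hypothetical; STOP is the universal baseline, and
# a carrier would define its own limited list of synonyms.
STOP_KEYWORDS = {"stop", "end", "quit", "cancel", "unsubscribe"}

def is_opt_out(message: str) -> bool:
    # Treat a message as an opt-out only if it is exactly one keyword;
    # free-form sentences are what the commenters say automation
    # cannot reliably parse.
    return message.strip().lower() in STOP_KEYWORDS

assert is_opt_out("STOP")
assert is_opt_out("  Unsubscribe ")
assert not is_opt_out("please don't text me during work hours")
```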

Cable industry wants guidance on ‘reasonable methods’

Steven Morris, vice president at NCTA, the Internet and Television Association, said the FCC should provide additional guidance on what it defines as “reasonable methods” of revoking consent and allow callers 72 hours to process opt-out requests. NCTA also suggested that the FCC adopt its proposal to permit one-time texts seeking clarification on the scope of an opt-out request.

“The FCC’s proposal that consumers be able to revoke consent using ‘any telephone number or email address at which the consumer can reasonably expect to reach the caller’ would also be incredibly complex and likely impossible to effectively administer,” NCTA said. 

Courtney Tolerico, manager of regulatory affairs at wireless trade association CTIA, said in comments that the proposal severely limits providers’ ability to send important, service-related communications to subscribers and incentivizes providers to apply opt-outs unnecessarily broadly, further limiting these beneficial communications and “downgrading the wireless customer experience.”

CTIA claimed that “even if the FCC had such authority, doing so in the absence of demonstrated consumer harm would be arbitrary and capricious,” saying the agency has no reason to impose rules that would hamper wireless carriers’ ability to serve customers.

Verizon’s general counsel, Christopher Oatway, expressed the same sentiment, claiming that the FCC “provides no basis to conclude that wireless carriers are abusing their subscribers with unwanted calls or texts.” 

The proposal would “undermine the unique relationship between providers and their customer for wireless service, which today is crucial to Americans’ ability both to conduct their everyday lives as well as to access emergency services and government benefits,” said Verizon. It pointed to federal programs like Lifeline and the Affordable Connectivity Program that promote connectivity, claiming that its communications with its own customers educate them about federal benefit programs.

‘No incentive’ for abuse by wireless providers, says AT&T

Gregory Romano, vice president and deputy general counsel at AT&T, added that “there is no incentive for wireless providers to abuse the current wireless carrier exception,” referring to wireless carriers’ ability to contact their own customers. “The marketplace for consumer wireless service is highly competitive. Wireless providers do not want to annoy their customers with too many messages, or the provider is at risk of losing the customer to a competitor, which is clearly not in the provider’s interest.”

In June, commenters pushed back against proposed FCC rules that would require mobile wireless providers to ban marketers from contacting a consumer multiple times based on one consent, claiming the rules would harm legitimate communications.

The proposed rules are a response to the rising number of telemarketing calls and robocalls, stated the notice of proposed rulemaking.


FCC

FCC Proposed Rules Will Harm Legitimate Text Messages, Say Commenters

The rules would ban the practice of marketers purporting to have written consent for numerous parties to contact a consumer.

Published

on

Photo from robotext lawsuit

WASHINGTON, June 6, 2023 – Commenters claim that proposed Federal Communications Commission rules requiring mobile wireless providers to ban marketers from contacting a consumer multiple times based on one consent will harm legitimate communications.

The new rules would set additional protections: requiring terminating providers to block texts after notification from the FCC that the texts are illegal, extending the National Do-Not-Call Registry’s protections to text messages, and banning the practice of marketers purporting to have written consent for numerous parties to contact a consumer based on one consent. Comments on the proposal were due in May and reply comments on June 6.

“Robocall campaigns often rely on flimsy claims of consent where a consumer interested in job listings, a potential reward, or a mortgage quote, unknowingly and unwillingly ‘consents’ to telemarketing calls from dozens – or hundreds or thousands – of unaffiliated entities about anything and everything,” read the comments from USTelecom trade association.  

Wireless trade association CTIA warned that Medicaid text messages alerting customers to critical health updates may be blocked under the rules, despite the FCC’s acknowledgement that these texts are critical. Many providers are unbending in enforcing robotext policies that mandate agencies “satisfactorily demonstrate they receive prior express consent from enrollees to contact them.”

CTIA’s comments claimed that the proposed rules would “do little to enhance existing industry efforts to reduce text spam or protect consumers.” 

Competitive networks trade association INCOMPAS claimed that the current framework is not well suited to allow the industry to universally resolve text messaging issues. “In the absence of standardized, competitively neutral rules, the current dynamics create perverse incentives that allow gamesmanship and arbitrage schemes as well as fraudulent behaviors to thrive.” 

USTelecom commended the FCC for taking these steps but suggested that it ban the practice of treating a single consumer consent as grounds for delivering calls from multiple parties by issuing a decisive declaration rather than a rule change. Providing clear guidance would deprive aggressive telemarketers of the plausible deniability they rely on to put calls through, it said.

Without such a decisive declaration eliminating the perceived loopholes, the new language proposed in the notice is unnecessary and risks introducing new ambiguity, its comments read.

The Retail Industry Leaders Association claimed that the notice would “primarily and negatively impact those who send legitimate text message solicitations, not scam senders and bad actors.” The well-intentioned measures will sweep in legitimate text communications, it claimed, by reducing consumer control and making assumptions on their behalf. 

“Consumers use the DNC list to prevent unwanted telephone call solicitations. They do not expect that the DNC List will prevent normal and desired communications from legitimate businesses like RILA members,” it wrote.

In the event the FCC moves forward with the proposed rules, the RILA urged that the rules include “clear carve-outs or safe harbors” for legitimate solicitations. 

This comes as the FCC considers additional proposed rules that will strengthen consumer consent for robocalls and robotexts by allowing consumers to decide which robocalls and texts they wish to receive.

