
Privacy

Privacy Policy Customization Has Both Benefits and Drawbacks, Say PrivacyCon Participants


Screenshot of Federal Trade Commission PrivacyCon webcast

July 21, 2020 — Allowing users of online platforms to shape the use of their own private information can be a tricky practice, and not necessarily one that platforms are incentivized to employ, said participants in a Federal Trade Commission PrivacyCon webinar on Tuesday.

Speaking about the General Data Protection Regulation, a European Union law that allows users to decide how the data they give to websites is used, panelists said that such legislation is often difficult to employ and may come with adverse effects.

“On the one hand, consumers increasingly would like control over the data firms collect,” said Guy Aridor, an economics PhD candidate at Columbia University. “…On the other hand, firms are reliant on this data. There is a worry that this will impact their function.”

Aridor has done extensive research into the GDPR and recently published “The Effect of Privacy Regulation on the Data Industry: Empirical Evidence from GDPR.”

Garrett Johnson, who has also authored research into the consequences of the GDPR, said that customizable privacy policies could disincentivize competition.

“Our main research question is, can privacy policies hurt competition?” he said. “The GDPR is complex, but its many elements increase the logistical cost and legal risk of processing personal data. This will have important consequences for the web.”

Johnson added that websites must share what data they collect, but the data can be difficult to track.

“In order to provide these services, vendors have to share what the GDPR considers personal data,” he said. “As a result, they have faced scrutiny in three countries…[But] the GDPR is challenging to study because normally we can’t observe how they use data.”

Jeff Prince, chief economist at the Federal Communications Commission, said that the GDPR decreases the number of online vendors.

He said that research has shown a 15 percent reduction in vendor use post-GDPR.

Screenshot from FTC webcast

He also said that he and the agency were researching the value of users’ data.

“We are looking at how much privacy is worth around the world,” he said. “At a rough level, we can think about balancing privacy preferences for citizens with benefits for use of the data. One thing that has been emphasized is that it’s particularly difficult to measure the privacy preferences. That’s something we are trying to get at with this.”

In a companion webinar, Hana Habib, a PhD student at Carnegie Mellon University, said that her research found that a lack of cohesive privacy controls across platforms made privacy choices difficult from website to website.

“Our empirical analysis found that privacy choices were often provided in privacy policies,” she said. “The downside of that, other than consumers largely ignoring privacy policies, is that the headings under which choices are presented are inconsistent from policy to policy.”

When it comes to customizable privacy policies and individualized use of content, Prince said that measurements of choice can be difficult but useful.

“[We] did some measures for the value of privacy with regards to apps,” he said. “…This is one reason why quantification is valuable. A lot of times [the choice to surrender data] might not line up with what quantifiable metrics would be.”

Privacy controls are desired not only in the United States but across all the countries covered by his research, Prince continued.

“That was one of the big takeaways for me,” he said. “When we think about privacy policies and how people value privacy in a relative sense across countries and different types, there wasn’t that big of a difference across those countries.”

Privacy experts and users of platforms like Facebook and Google have often accused them of abusing user data while offering nothing in return. While Facebook and Google have both made public statements expressing their privacy practices and promising to take data collection practices seriously, some experts believe that companies are not sufficiently incentivized to make major changes.

Elijah Labby was a Reporter with Broadband Breakfast. He was born in Pittsburgh, Pennsylvania and now resides in Orlando, Florida. He studies political science at Seminole State College, and enjoys reading and writing fiction (but not for Broadband Breakfast).


Artificial Intelligence

Rep. Suzan DelBene: Want Protection From AI? The First Step Is a National Privacy Law

A national privacy standard would ensure a baseline set of protections and would restrict companies from storing and selling personal data.


The author of this Expert Opinion is Suzan DelBene, U.S. Representative from Washington.

In the six months since a new chatbot confessed its love for a reporter before taking a darker turn, the world has woken up to how artificial intelligence can dramatically change our lives and how it can go awry. AI is quickly being integrated into nearly every aspect of our economy and daily lives. However, in our nation’s capital, laws aren’t keeping up with the rapid evolution of technology.

Policymakers have many decisions to make around artificial intelligence, including how it can be used in sensitive areas such as financial markets, health care, and national security. They will need to decide intellectual property rights around AI-created content. There will also need to be guardrails to prevent the dissemination of mis- and disinformation. But before we build the second and third stories of this regulatory house, we need to lay a strong foundation, and that must center around a national data privacy standard.

To understand this bedrock need, it’s important to look at how artificial intelligence was developed. AI needs an immense quantity of data. The generative language tool ChatGPT was trained on 45 terabytes of data, or the equivalent of over 200 days’ worth of HD video. That information may have included our posts on social media and online forums that have likely taught ChatGPT how we write and communicate with each other. That’s because this data is largely unprotected and widely available to third-party companies willing to pay for it. AI developers do not need to disclose where they get their input data from because the U.S. has no national privacy law.

While data studies have existed for centuries and can have major benefits, they are often centered around consent to use that information. Medical studies often use patient health data and outcomes, but that information needs the approval of the study participants in most cases. That’s because in the 1990s Congress gave health information a basic level of protection, but that law only protects data shared between patients and their health care providers. The same is not true for other health platforms, like fitness apps, or most other data we generate today, including our conversations online and geolocation information.

Currently, the companies that collect our data are in control of it. Google for years scanned Gmail inboxes to sell users targeted ads, before abandoning the practice. Zoom recently had to update its data collection policy after it was accused of using customers’ audio and video to train its AI products. We’ve all downloaded an app on our phone and immediately accepted the terms and conditions window without actually reading it. Companies can and often do change the terms regarding how much of our information they collect and how they use it. A national privacy standard would ensure a baseline set of protections no matter where someone lives in the U.S. and restrict companies from storing and selling our personal data.

Ensuring there’s transparency and accountability in what data goes into AI is also important for a quality, responsible product. If input data is biased, we’re going to get a biased outcome, or, better put, ‘garbage in, garbage out.’ Facial recognition is one application of artificial intelligence. These systems have largely been trained by and with data from white people. That’s led to clear biases when communities of color interact with this technology.

The United States must be a global leader on artificial intelligence policy but other countries are not waiting as we sit still. The European Union has moved faster on AI regulations because it passed its privacy law in 2018. The Chinese government has also moved quickly on AI but in an alarmingly anti-democratic way. If we want a seat at the international table to set the long-term direction for AI that reflects our core American values, we must have our own national data privacy law to start.

The Biden administration has taken some encouraging steps to begin putting guardrails around AI but it is constrained by Congress’ inaction. The White House recently announced voluntary artificial intelligence standards, which include a section on data privacy. Voluntary guidelines don’t come with accountability and the federal government can only enforce the rules on the books, which are woefully outdated.

That’s why Congress needs to step up and set the rules of the road. A strong national privacy standard must be uniform throughout the country, rather than the state-by-state approach we have now. It must put people back in control of their information instead of companies. It must also be enforceable so that the government can hold bad actors accountable. These are the components of the legislation I have introduced over the past few Congresses and of the bipartisan proposal the Energy & Commerce Committee advanced last year.

As with all things in Congress, it comes down to a matter of priorities. With artificial intelligence expanding so fast, we can no longer wait to take up this issue. We were behind on technology policy already, but we fall further behind as other countries take the lead. We must act quickly and set a robust foundation. That has to include a strong, enforceable national privacy standard.

Congresswoman Suzan K. DelBene represents Washington’s 1st District in the United States House of Representatives. This piece was originally published in Newsweek, and is reprinted with permission. 

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

 


Robocall

FCC’s Proposed Rules on Robotexts Will Limit Wireless Providers’ Effectiveness: Industry

The rules would hamper providers’ ability to connect customers with emergency and government services, they say.


Photo of Gregory Romano of AT&T

WASHINGTON, August 18, 2023 – Commenters argue that proposed Federal Communications Commission rules seeking to give voice consumers more control over robocalls and robotexts would have harmful consequences by limiting providers’ ability to communicate with their customers.

The FCC released a notice of proposed rulemaking in June that would strengthen consumers’ ability to revoke consent to receive robocalls and robotexts. It would ensure consumers can easily revoke consent, require that callers honor do-not-call requests within 24 hours, and give wireless consumers the option to stop robocalls and robotexts from their own wireless provider.

ACA International, a trade group for the debt collection industry, in conjunction with the Credit Union National Association recommended that the FCC codify reasonable limits on the methods of revocation of consent for robocalls and texts.  

The proposed rules would “ensure that revocation of consent does not require the use of specific words or burdensome methods” and codify a 2015 ruling that consumers who have provided consent may revoke it through any reasonable means. ACA International and CUNA asked the FCC to acknowledge the realities of revocation processes.

“Automated processes cannot be programmed to recognize a virtually infinite combination of words and phrases that could reasonably be interpreted as a clear expression of consumers’ desire to stop further communications,” the groups said. The FCC should specify “reasonable means that callers can prescribe, such as a limited set of keywords that are common synonyms of STOP, which is the universally recognized method to prevent further text messages.”

Cable industry wants guidance on ‘reasonable methods’

Steven Morris, vice president at NCTA, the Internet and Television Association, said the FCC should provide additional guidance on what it defines as “reasonable methods” of revoking consent and should allow callers 72 hours to process opt-out requests. NCTA also suggested that the FCC adopt its proposal to permit one-time texts seeking clarification on the scope of an opt-out request.

“The FCC’s proposal that consumers be able to revoke consent using ‘any telephone number or email address at which the consumer can reasonably expect to reach the caller’ would also be incredibly complex and likely impossible to effectively administer,” NCTA said. 

Courtney Tolerico, CTIA’s manager of regulatory affairs, said in comments that the proposal would severely limit providers’ ability to send important, service-related communications to subscribers and would incentivize providers to apply opt-outs unnecessarily broadly, further limiting these beneficial communications and “downgrading the wireless customer experience.”

It claimed that “even if the FCC had such authority, doing so in the absence of demonstrated consumer harm would be arbitrary and capricious,” saying that the agency has no reason to enforce rules that would hamper wireless carriers’ ability to serve customers.

Verizon’s general counsel, Christopher Oatway, expressed the same sentiment, claiming that the FCC “provides no basis to conclude that wireless carriers are abusing their subscribers with unwanted calls or texts.” 

The proposal would “undermine the unique relationship between providers and their customers for wireless service, which today is crucial to Americans’ ability both to conduct their everyday lives as well as to access emergency services and government benefits,” said Verizon. It referred to federal programs like Lifeline and the Affordable Connectivity Program that promote connectivity, claiming that its communications with its own customers educate them about federal benefit programs.

‘No incentive’ for abuse by wireless providers, says AT&T

Gregory Romano, vice president and deputy general counsel at AT&T, added that “there is no incentive for wireless providers to abuse the current wireless carrier exception,” referring to wireless carriers’ ability to contact their own customers. “The marketplace for consumer wireless service is highly competitive. Wireless providers do not want to annoy their customers with too many messages, or the provider is at risk of losing the customer to a competitor, which is clearly not in the provider’s interest.”

In June, commenters pushed back against FCC proposed rules that would require mobile wireless providers to ban marketers from contacting a consumer multiple times based on one consent, claiming it will harm legitimate communications. 

The proposed rules respond to the rising number of telemarketing calls and robocalls, stated the notice of proposed rulemaking.


FCC

FCC Proposed Rules Will Harm Legitimate Text Messages, Say Commenters

The rules would ban the practice of marketers purporting to have written consent for numerous parties to contact a consumer.


Photo from robotext lawsuit

WASHINGTON, June 6, 2023 – Commenters claim that the Federal Communications Commission’s proposed rules that would require mobile wireless providers to ban marketers from contacting a consumer multiple times based on one consent will harm legitimate communications. 

The new rules would set additional protections: they would require the terminating provider to block texts after notification from the FCC that the texts are illegal, extend the National Do-Not-Call Registry’s protections to text messages, and ban the practice of marketers purporting to have written consent for numerous parties to contact a consumer based on one consent. Comments on the proposal were due in May and reply comments on June 6.

“Robocall campaigns often rely on flimsy claims of consent where a consumer interested in job listings, a potential reward, or a mortgage quote, unknowingly and unwillingly ‘consents’ to telemarketing calls from dozens – or hundreds or thousands – of unaffiliated entities about anything and everything,” read the comments from USTelecom trade association.  

Wireless trade association CTIA warned that Medicaid text messages alerting customers to critical health updates may be blocked under the rules, despite the FCC’s acknowledgement that these texts are critical. Many providers strictly enforce robotext policies that mandate agencies “satisfactorily demonstrate they receive prior express consent from enrollees to contact them.”

CTIA’s comments claimed that the proposed rules would “do little to enhance existing industry efforts to reduce text spam or protect consumers.” 

Competitive networks trade association INCOMPAS claimed that the current framework is not well suited to allow the industry to universally resolve text messaging issues. “In the absence of standardized, competitively neutral rules, the current dynamics create perverse incentives that allow gamesmanship and arbitrage schemes as well as fraudulent behaviors to thrive.” 

USTelecom commended the FCC for taking these steps and suggested that it expressly ban the practice of obtaining single consumer consent as grounds for delivering calls to multiple receivers by issuing a decisive declaration rather than a rule change. Providing clear guidance will deprive aggressive telemarketers of the plausible deniability they rely on to put calls through, it said. 

The new language proposed in the notice is unnecessary and risks introducing new ambiguity, because a rule change would not eliminate perceived loopholes the way a decisive declaration would, read its comments.

The Retail Industry Leaders Association claimed that the notice would “primarily and negatively impact those who send legitimate text message solicitations, not scam senders and bad actors.” The well-intentioned measures would sweep in legitimate text communications, it claimed, by reducing consumer control and making assumptions on consumers’ behalf.

“Consumers use the DNC list to prevent unwanted telephone call solicitations. They do not expect that the DNC List will prevent normal and desired communications from legitimate businesses like RILA members,” it wrote.

In the event the FCC moves forward with the proposed rules, the RILA urged that the rules include “clear carve-outs or safe harbors” for legitimate solicitations. 

This comes as the FCC considers additional proposed rules that would strengthen consumer consent for robocalls and robotexts by allowing consumers to decide which robocalls and texts they wish to receive.

