Privacy

Businesses Should Prepare for More State-Specific Privacy Laws, Attorneys Say

“The privacy landscape in the U.S. is likely to become more complicated before it gets any easier.”


Photos of Joan Stewart, Kathleen Scott and Duane Pozza courtesy of Wiley

WASHINGTON, January 13, 2023 — In the absence of overarching federal legislation, several states are passing or considering their own privacy laws, creating an increasingly disparate legal landscape that may be difficult for national companies to navigate.

“I think the privacy landscape in the U.S. is likely to become more complicated before it gets any easier,” said Joan Stewart, an attorney specializing in privacy, data governance and regulatory compliance, at a webcast hosted by Wiley on Thursday.

New privacy laws in California and Virginia took effect on Jan. 1, and Colorado and Connecticut have privacy laws set to become effective in July. Utah’s privacy law will go into effect at the end of December.

 “We expect to see additional states actively considering both omnibus and targeted privacy laws this year,” Stewart said. “So we encourage businesses to focus now on creating universal privacy programs that can adapt to these new laws in the future.”

Although the various state laws have plenty of overlap, there are also several significant outliers, said Kathleen Scott, a privacy and cybersecurity attorney.

States take different approaches to imposing privacy requirements

For example, the new California Privacy Rights Act — which amends and strengthens California’s existing digital privacy law, already considered the strongest in the country — requires that businesses use specific words to describe the categories of personally identifying information being collected.

“These words are unique to California; they come from the statute, and they don’t always make perfect sense outside of that context,” Scott said.

Another area of difference is the consumer’s right to appeal privacy-related decisions. Virginia, Colorado and Connecticut require businesses to offer a process through which they explain to consumers why a specific request was denied.

While implementing a universal standard would make compliance easier for businesses, Scott noted that “processing appeals can be pretty resource intensive, so there may be important reasons not to extend those outlier requirements more broadly to other states.”

Generally speaking, the state privacy laws apply to for-profit businesses and make an exception for nonprofits. However, Colorado’s law applies to for-profit and nonprofit entities that meet certain thresholds, and the Virginia and Connecticut laws carve out select nonprofits as exempt instead of having a blanket exemption.

Other state-to-state differences include specific notices, link requirements and opt-in versus opt-out policies. Even key definitions, such as what qualifies as “sensitive data,” vary from state to state.

Two of the state privacy laws taking effect in 2023 authorize the development of new rules, making it likely that additional requirements are on the horizon.

California will not begin civil and administrative enforcement of the California Privacy Rights Act until July. In the meantime, the state’s new privacy agency is charged with developing rules for its implementation, including specific directives for required notices, automated decision-making and other issues.

“The California rulemaking has been particularly complicated… and the outcome is going to have significant impacts on business practices,” said Duane Pozza, an attorney specializing in privacy, emerging technology and financial practices.

The state’s attorney general is arguing that existing rules require a global opt-out mechanism, but the new law establishes this as optional, Pozza explained. The currently proposed rules would again require a global opt-out.

Colorado’s attorney general is undertaking a similar rulemaking process, revising a previously released draft of the rules in preparation for a February hearing.

Several additional states are expected to propose broad or targeted privacy laws during the coming legislative cycle, according to data published Thursday by the Computer and Communications Industry Association. In addition to comprehensive consumer data privacy legislation, several measures address the collection of biometric information and children’s online safety, the CCIA found.

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.


Artificial Intelligence

Rep. Suzan DelBene: Want Protection From AI? The First Step Is a National Privacy Law

A national privacy standard would ensure a baseline set of protections and would restrict companies from storing and selling personal data.


The author of this Expert Opinion is Suzan DelBene, U.S. Representative from Washington

In the six months since a new chatbot confessed its love for a reporter before taking a darker turn, the world has woken up to how artificial intelligence can dramatically change our lives and how it can go awry. AI is quickly being integrated into nearly every aspect of our economy and daily lives. However, in our nation’s capital, laws aren’t keeping up with the rapid evolution of technology.

Policymakers have many decisions to make around artificial intelligence, such as how it can be used in sensitive areas like financial markets, health care, and national security. They will need to decide intellectual property rights around AI-created content. There will also need to be guardrails to prevent the dissemination of mis- and disinformation. But before we build the second and third stories of this regulatory house, we need to lay a strong foundation, and that must center on a national data privacy standard.

To understand this bedrock need, it’s important to look at how artificial intelligence was developed. AI needs an immense quantity of data. The generative language tool ChatGPT was trained on 45 terabytes of data, or the equivalent of over 200 days’ worth of HD video. That information may have included our posts on social media and online forums that have likely taught ChatGPT how we write and communicate with each other. That’s because this data is largely unprotected and widely available to third-party companies willing to pay for it. AI developers do not need to disclose where they get their input data from because the U.S. has no national privacy law.

While data studies have existed for centuries and can have major benefits, they are often centered around consent to use that information. Medical studies often use patient health data and outcomes, but in most cases that information requires the approval of the study participants. That’s because in the 1990s Congress gave health information a basic level of protection, but that law only protects data shared between patients and their health care providers. The same is not true for other health platforms, like fitness apps, or most other data we generate today, including our conversations online and geolocation information.

Currently, the companies that collect our data are in control of it. Google for years scanned Gmail inboxes to sell users targeted ads, before abandoning the practice. Zoom recently had to update its data collection policy after it was accused of using customers’ audio and video to train its AI products. We’ve all downloaded an app on our phone and immediately accepted the terms and conditions window without actually reading it. Companies can and often do change the terms regarding how much of our information they collect and how they use it. A national privacy standard would ensure a baseline set of protections no matter where someone lives in the U.S. and restrict companies from storing and selling our personal data.

Ensuring there’s transparency and accountability in what data goes into AI is also important for a quality and responsible product. If input data is biased, we’re going to get a biased outcome, or, better put, ‘garbage in, garbage out.’ Facial recognition is one application of artificial intelligence. These systems have largely been trained by and with data from white people. That’s led to clear biases when communities of color interact with this technology.

The United States must be a global leader on artificial intelligence policy but other countries are not waiting as we sit still. The European Union has moved faster on AI regulations because it passed its privacy law in 2018. The Chinese government has also moved quickly on AI but in an alarmingly anti-democratic way. If we want a seat at the international table to set the long-term direction for AI that reflects our core American values, we must have our own national data privacy law to start.

The Biden administration has taken some encouraging steps to begin putting guardrails around AI but it is constrained by Congress’ inaction. The White House recently announced voluntary artificial intelligence standards, which include a section on data privacy. Voluntary guidelines don’t come with accountability and the federal government can only enforce the rules on the books, which are woefully outdated.

That’s why Congress needs to step up and set the rules of the road. A strong national privacy standard must be uniform throughout the country, rather than the state-by-state approach we have now. It has to put people back in control of their information instead of companies. It must also be enforceable so that the government can hold bad actors accountable. These are the components of the legislation I have introduced over the past few Congresses and the bipartisan proposal the Energy & Commerce Committee advanced last year.

As with all things in Congress, it comes down to a matter of priorities. With artificial intelligence expanding so fast, we can no longer wait to take up this issue. We were behind on technology policy already, but we fall further behind as other countries take the lead. We must act quickly and set a robust foundation. That has to include a strong, enforceable national privacy standard.

Congresswoman Suzan K. DelBene represents Washington’s 1st District in the United States House of Representatives. This piece was originally published in Newsweek, and is reprinted with permission. 

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.


Robocall

FCC’s Proposed Rules on Robotexts Will Limit Wireless Providers’ Effectiveness: Industry

The rules would hamper communications that help customers access emergency and government services, providers say.


Photo of Gregory Romano of AT&T

WASHINGTON, August 18, 2023 – Commenters argue that proposed Federal Communications Commission rules seeking to give voice consumers more control over robocalls and robotexts would have harmful consequences by limiting providers’ ability to communicate with their customers.

The FCC released a notice of proposed rulemaking in June that would strengthen consumers’ ability to revoke consent to receive robocalls and robotexts. It would ensure consumers can easily revoke consent, require that callers honor do-not-call requests within 24 hours, and give wireless consumers the option to stop robocalls and robotexts from their own wireless provider.

ACA International, a trade group for the debt collection industry, in conjunction with the Credit Union National Association, recommended that the FCC codify reasonable limits on the methods of revoking consent for robocalls and texts.

The proposed rules, as currently written, would “ensure that revocation of consent does not require the use of specific words or burdensome methods” and codify a 2015 ruling that consumers who have provided consent may revoke it through any reasonable means. ACA International and CUNA asked the FCC to acknowledge the realities of revocation processes.

“Automated processes cannot be programmed to recognize a virtually infinite combination of words and phrases that could reasonably be interpreted as a clear expression of consumers’ desire to stop further communications,” they said. The FCC should specify “reasonable means that callers can prescribe, such as a limited set of keywords that are common synonyms of STOP, which is the universally recognized method to prevent further text messages.”

Cable industry wants guidance on ‘reasonable methods’

Steven Morris, vice president at NCTA, the Internet and Television Association, agreed that the FCC should provide additional guidance on what it defines as “reasonable methods” of revoking consent and suggested allowing callers 72 hours to process opt-out requests. NCTA also urged the FCC to adopt its proposal to permit one-time texts seeking clarification on the scope of an opt-out request.

“The FCC’s proposal that consumers be able to revoke consent using ‘any telephone number or email address at which the consumer can reasonably expect to reach the caller’ would also be incredibly complex and likely impossible to effectively administer,” NCTA said. 

Wireless trade association CTIA’s manager of regulatory affairs, Courtney Tolerico, said in comments that the proposal would severely limit providers’ ability to send important, service-related communications to subscribers and would incentivize providers to apply opt-outs unnecessarily broadly, further limiting these beneficial communications and “downgrading the wireless customer experience.”

CTIA claimed that “even if the FCC had such authority, doing so in the absence of demonstrated consumer harm would be arbitrary and capricious,” saying that the agency has no reason to adopt rules that would hamper wireless carriers’ ability to serve customers.

Verizon’s general counsel, Christopher Oatway, expressed the same sentiment, claiming that the FCC “provides no basis to conclude that wireless carriers are abusing their subscribers with unwanted calls or texts.” 

The proposal would “undermine the unique relationship between providers and their customers for wireless service, which today is crucial to Americans’ ability both to conduct their everyday lives as well as to access emergency services and government benefits,” said Verizon. It referred to federal connectivity programs like Lifeline and the Affordable Connectivity Program, claiming that its communications with its own customers educate them about federal benefit programs.

‘No incentive’ for abuse by wireless providers, says AT&T

Gregory Romano, vice president and deputy general counsel at AT&T, added that “there is no incentive for wireless providers to abuse the current wireless carrier exception,” referring to wireless carriers’ ability to contact their own customers. “The marketplace for consumer wireless service is highly competitive. Wireless providers do not want to annoy their customers with too many messages, or the provider is at risk of losing the customer to a competitor, which is clearly not in the provider’s interest.”

In June, commenters pushed back against proposed FCC rules that would require mobile wireless providers to ban marketers from contacting a consumer multiple times based on one consent, claiming the rules would harm legitimate communications.

The proposed rules respond to the rising number of telemarketing calls and robocalls, stated the notice of proposed rulemaking.


FCC

FCC Proposed Rules Will Harm Legitimate Text Messages, Say Commenters

The rules would ban the practice of marketers purporting to have written consent for numerous parties to contact a consumer.


Photo from robotext lawsuit

WASHINGTON, June 6, 2023 – Commenters claim that proposed Federal Communications Commission rules, which would require mobile wireless providers to ban marketers from contacting a consumer multiple times based on one consent, will harm legitimate communications.

The proposed rules would set additional protections: requiring a terminating provider to block texts after notification from the FCC that the texts are illegal, extending the National Do-Not-Call Registry’s protections to text messages, and banning the practice of marketers purporting to have written consent for numerous parties to contact a consumer based on one consent. Comments on the proposal were due in May and reply comments on June 6.

“Robocall campaigns often rely on flimsy claims of consent where a consumer interested in job listings, a potential reward, or a mortgage quote, unknowingly and unwillingly ‘consents’ to telemarketing calls from dozens – or hundreds or thousands – of unaffiliated entities about anything and everything,” read the comments from USTelecom trade association.  

Wireless trade association CTIA warned that Medicaid text messages alerting customers to critical health updates may be blocked under the rules, despite the FCC’s acknowledgement that these texts are critical. Many providers are unbending in enforcing robotext policies that mandate agencies “satisfactorily demonstrate they receive prior express consent from enrollees to contact them.”

CTIA’s comments claimed that the proposed rules would “do little to enhance existing industry efforts to reduce text spam or protect consumers.” 

Competitive networks trade association INCOMPAS claimed that the current framework is not well suited to allow the industry to universally resolve text messaging issues. “In the absence of standardized, competitively neutral rules, the current dynamics create perverse incentives that allow gamesmanship and arbitrage schemes as well as fraudulent behaviors to thrive.” 

USTelecom commended the FCC for taking these steps and suggested that it expressly ban the practice of treating a single consumer consent as grounds for calls from multiple entities, doing so through a decisive declaration rather than a rule change. Providing clear guidance would deprive aggressive telemarketers of the plausible deniability they rely on to put calls through, it said.

The new language proposed in the notice is unnecessary and risks introducing new ambiguity; a decisive declaration would instead eliminate the perceived loopholes, its comments read.

The Retail Industry Leaders Association claimed that the notice would “primarily and negatively impact those who send legitimate text message solicitations, not scam senders and bad actors.” The well-intentioned measures will sweep in legitimate text communications, it claimed, by reducing consumer control and making assumptions on their behalf. 

“Consumers use the DNC list to prevent unwanted telephone call solicitations. They do not expect that the DNC List will prevent normal and desired communications from legitimate businesses like RILA members,” it wrote.

In the event the FCC moves forward with the proposed rules, the RILA urged that the rules include “clear carve-outs or safe harbors” for legitimate solicitations. 

This comes as the FCC considers additional proposed rules that will strengthen consumer consent for robocalls and robotexts by allowing consumers to decide which robocalls and texts they wish to receive.

