

Comparing Privacy Policies for Wearable Fitness Trackers: Apple, Fitbit, Xiaomi and Under Armour



Photo by Kārlis Dambrāns used with permission

WASHINGTON, September 6, 2019 – Wearable fitness trackers are yet another avenue for the collection of personal data. Even so, the market for wearables is booming, because people want to optimize their fitness routines by self-monitoring their vital signs.

Aside from personal use, fitness tracker data is valuable to health insurance companies as a more efficient way to analyze a person’s fitness patterns. John Hancock, for instance, offers the Apple Watch at a steep discount when paired with an insurance plan.

As these devices can track intricate details such as heart rate, calorie consumption and GPS coordinates, transparency about these practices is a must. Amid the diverse fitness tracker market, Apple and Fitbit are considered leading manufacturers. In addition to spearheading innovation in the fitness tech industry, Apple and Fitbit have been reported to have some of the most consumer-friendly policies.

Their competitors, on the other hand, still have some way to go before reaching a comparable level of security. Xiaomi and Under Armour are two fitness tracker manufacturers whose privacy and security practices have shown notable shortcomings.

Describing these four companies in this manner, however, oversimplifies their approaches. It is important to examine each company’s individual approach to privacy and determine its benefits and gaps.

Apple


Apple has one of the most robust and easy-to-read privacy policies among major tech firms. Praised for the transparency of its computer and mobile devices, the Cupertino-based company puts the same effort into its wearables.

On its website, Apple states that privacy is a “fundamental human right.” It then goes into detail about the various ways in which a user’s privacy is protected. Concerns that Apple touches upon include encryption, third-party usage of app data and how users can modify and/or delete their data.

With the advent of the EU’s General Data Protection Regulation, multinational companies needed to update their policies to comply with international standards.

Last year, Apple launched a privacy portal allowing users to obtain a copy of the personal data associated with their account. This includes Apple ID info, App Store activity and data stored in iCloud. Additionally, the online portal has a page allowing users to correct their data and deactivate or delete their account.

As a result, the iPhone and Apple Watch have overlapping data protections. Apple Watch’s terms and conditions underscore the ability to control data via the privacy portal. This data is shareable with Apple’s affiliates and may be combined with other personal information the company has obtained, as well as with de-identified data.

Apple Watch terms note that photo and location data are shared, though Apple does not guarantee that this data is accurate. Users are also informed that Apple can limit Watch use without notice, and are advised to consult a physician before starting a fitness program with the Watch.

Fitbit


Fitbit is Apple’s like-minded competitor, fully disclosing what its devices do and who can access the information. Much of its privacy approach mirrors Apple’s; however, Fitbit also maintains a separate privacy policy for children.

From the start, Fitbit discloses that it does not produce medical devices and that its trackers are not meant to replace consultation with a doctor. Fitbit’s premium services also incorporate data from non-paying users.

In the summary of its privacy policy, Fitbit states that transparency “is the key to any healthy relationship.” Following Apple’s protocol, Fitbit takes personal data such as height, weight and sleep patterns to give the user an accurate and relevant portrayal of their fitness regime.

“We may personalize exercise and activity goals,” Fitbit writes, based on past activity data and goals a user has previously set.

Under Fitbit’s child policy, parents have control over their child’s data until they’re 13 years of age. Persons under 13 are not permitted to create accounts without parental consent, and parents must consent to the use of their child’s data in accordance with Fitbit’s Privacy Policy for Children’s Accounts.

A separate link outlines these policies in detail. If parents have reason to believe that their child’s data was submitted without consent, they can contact Fitbit to request the removal of that data.

Xiaomi


At a glance, China-based Xiaomi seems to have only minor privacy shortcomings, and it sells fitness trackers for much less than the mainstream manufacturers. However, security issues and limited disclosure give Xiaomi lower credibility than its competitors.

Xiaomi specifically states that neither it nor its suppliers and distributors make any specific promises about the service. Xiaomi claims it is committed to upholding privacy policies worldwide; however, it explicitly states that any disputes over the company’s terms and conditions “will be litigated exclusively” in Chinese courts, where the user and Xiaomi “consent to personal jurisdiction.”

Another concerning aspect of the privacy agreement is that Xiaomi doesn’t clarify what happens to personal information in the event of a merger or acquisition; the company states only that users will be notified.

Furthermore, Xiaomi fitness trackers have been found vulnerable to Bluetooth MAC address surveillance, a flaw that’s not uncommon among trackers. According to a study by Open Effect, when a fitness tracker’s MAC address doesn’t change, its wearer becomes easier to monitor through location-sharing services.
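To illustrate why a static MAC address matters, consider this hypothetical sketch (the device addresses and locations are invented for illustration, and this is not code from the Open Effect study): if a tracker broadcasts the same address everywhere, anyone logging nearby Bluetooth advertisements can link every sighting of that device across locations, while a device that randomizes its address fragments into unlinkable identities.

```python
from collections import defaultdict

# Hypothetical log of Bluetooth advertisements observed at different places.
# A tracker with a static MAC address is trivially linkable across all of them.
sightings = [
    ("gym",    "C8:0F:10:AA:BB:CC"),
    ("office", "C8:0F:10:AA:BB:CC"),
    ("cafe",   "C8:0F:10:AA:BB:CC"),
    ("cafe",   "5E:21:9D:44:F0:01"),  # a device using a randomized address
    ("gym",    "7A:03:6B:12:9C:D2"),  # the same device, under a new random address
]

# Group observed locations by advertised MAC address.
seen_at = defaultdict(set)
for location, mac in sightings:
    seen_at[mac].add(location)

# The static-MAC tracker links its wearer to every place it was observed;
# the randomizing device cannot be followed from one sighting to the next.
for mac, places in sorted(seen_at.items()):
    print(mac, sorted(places))
```

The static address ties one device to the gym, the office and the cafe, which is exactly the movement profile the Open Effect study warned about.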

Overall, the vague and somewhat intimidating nature of Xiaomi’s terms and conditions undermines the company’s transparency.

Under Armour

Under Armour faced significant scrutiny after its subsidiary MyFitnessPal was compromised in a data breach involving 150 million accounts. Even so, the company’s terms and conditions still leave room for improvement. The amount of data it collects and the default sharing options are particularly concerning.

Moreover, Under Armour’s online privacy policy hasn’t been updated in over a year. Its terms outline the different ways the company collects, uses, discloses and processes personal data, but it is unclear to what extent UA shares personal data with other companies.

Beyond UA’s expansive yet vague privacy approach, the company can track a user’s location even when its app isn’t running, and sharing preferences such as Activity Stats, Community Social Data and Lookup Information are public by default. Setting an entire account to private would prevent even those on the user’s friend list from finding them.

California residents have some leeway with UA’s privacy agreement due to the state’s legal framework. Once a year, they may request a list of the personal data the company disclosed to third parties for direct marketing purposes. However, UA claims that, per the California Civil Code, it does not share personal data for this purpose.

As with most multinational companies, users residing in the EU have the right to request deletion of their account.

Although UA does provide ways for users to opt out of public settings, the fact that these are the defaults is alarming. UA also suffers from a lengthy and difficult-to-understand terms and conditions agreement. This suggests that privacy is not at the top of UA’s priorities, which is perplexing given the aftermath of its data breach.


Private Sector Falling Behind on Information Sharing During Cyberattacks, Says Comcast Rep

Comcast’s Noopur Davis says cyber attackers share information better than the private sector.



Noopur Davis, Chief Product and Information Officer at Comcast Cable.

ASPEN, Colorado, August 23 — In the wake of an influx of ransomware attacks on critical infrastructure and cyberattacks on private carriers, entities across the technology industry are reevaluating their strategies and how they share information to prevent such acts.

T-Mobile announced on August 15 that as many as 50 million consumers had their private data compromised during a data breach. Days later, on August 17, as part of Technology Policy Institute’s 2021 Aspen Forum, Noopur Davis, Chief Product and Information Officer at Comcast Cable, sat down for a fireside chat to discuss what the industry was doing to address this event and events like it.


When Davis was asked how she felt about the current state of cybersecurity, she said it was okay, but that the telecom community at large would have to do more.

She referenced the “mean time of comfort” – that is, the average duration between the time a service becomes connected to the internet and when it is targeted by bad actors. While in the early days of the internet cybersecurity experts could expect significant mean times of comfort, she stated that this is no longer the case.

“The second you connect [to the internet] you are attacked,” she said.

As soon as a successful breach is recognized, Davis explained, the targeted companies begin to reevaluate their “TTPs” – tactics, techniques, and procedures.

Information sharing is crucial

Though one company may find a remedy to their breach, other companies may remain vulnerable. To combat this, Davis said that it is critical for companies to share information quickly with their counterparts, but she indicated that this is a race that the private sector is currently losing.

“[Attackers] share information better than [the private industry does].”

She went further, revealing that there is now a sophisticated market for malware as a service, where various platforms publish reviews for their products and services and even offer tech support to those struggling to get the most out of their purchases.

Growing market for hacking tools

She pointed to the Colonial Pipeline attack as an example where hackers did not even create the malware themselves – they simply purchased it from a provider online. She explained that this marketplace has significantly lowered the barriers to entry and deskilled the activity for would-be attackers, and that theoretically anyone could engage in such nefarious acts today.

Though Davis favored collaboration between companies to address these attacks, she made clear that this would not mean responses and capabilities become standardized: every company would maintain its own unique strategies to ensure that its services and data remain uncompromised.



Associations Press FCC to Keep Robocall Extension for Facilities-Based Carriers

Organizations say the preponderance of illegal calls doesn’t come from facilities-based providers.



Acting FCC Chairwoman Jessica Rosenworcel

August 11, 2021 – In submissions to the Federal Communications Commission on Monday, associations representing smaller telecom providers asked the agency to keep an extension allowing facilities-based carriers more time to comply with new robocall rules.

In May, the FCC voted to move up by a year, from 2023 to 2022, the deadline for small carriers to comply with the STIR/SHAKEN regime, which requires telephone service providers to put in place measures – including analytics services to vet calls – to drastically reduce the frequency of scam calls, illegal robocalls, and ID spoofing that misleads Americans into believing a call is legitimate. Large carriers, however, had a deadline of June 30 this year.

But in submissions to the FCC this week, the Competitive Carriers Association, NTCA, and USTelecom said the preponderance of illegal robocalls comes from smaller providers – those with fewer than 100,000 lines – that don’t operate their own networks. Because of that, they argued, facilities-based carriers should keep the additional year to comply with the rules, reportedly a highly technical and complex endeavor.

To appreciate the effort: providers must tag or label all calls on their networks, using analytics tools, to verify that the calls are legitimate. All illegitimate calls must be tagged as potential spam or blocked completely. Even so, “false positives” can occur, and failure to comply with the rules could result in hefty penalties.

‘Good faith’ actors shouldn’t be penalized

“Commenters recognize as well that care must be taken to correctly identify this group of small providers in a surgical and precise manner that does not sweep in innocent actors and compel them to adopt this standard on a timeframe they had neither anticipated nor budgeted for,” the NTCA said in its submission to the FCC on Monday.

“A more targeted and effective way of capturing the parties that prompted these proposals can be found in the record – specifically, the Commission should require operators that are not ‘facilities-based’ voice providers…to adopt STIR/SHAKEN on a more accelerated timeframe,” the NTCA continued.

Burden of proof on non-facilities providers to show need for extension

USTelecom, however, added that the non-facilities-based providers – which generally originate calls over the public internet – should be able to request the full two-year extension, but they must show why they need it.

“It’s also critical that they are required to explain in detail and specificity why their robocall mitigation plans are sufficient to protect consumers and other voice service providers from illegal and unwanted robocalls,” USTelecom said in its Monday submission.

“Such a requirement would offer the right balance between affording non-facilities-based small providers the opportunity for the full extension if truly needed, but without creating an opportunity for the small VoIP providers responsible for illegal robocalls to abuse the process in order to continue to send unsigned illegal traffic downstream to the detriment of other providers and consumers,” USTelecom added.



FCC Proposes Measures to Limit Unwanted Access to Numbers, Protect Against Foreign Entities

The agency proposed rules for public comment that would further restrict illegal use of numbering resources.



Acting FCC Chairwoman Jessica Rosenworcel

August 9, 2021 – The Federal Communications Commission is seeking comment on proposed rules that it expects will reduce access to phone numbers by those running illegal robocalls and to further enhance public safety by adding transparency measures on foreign providers.

At the August Open Meeting on Thursday, the FCC said it is proposing to require voice-over-internet protocol providers to abide by the new robocalling regime, which requires voice service providers to put in place measures to limit spam calls and ID masking or face severe penalties.

To block calls, voice providers must first gather analytics on a call’s origins and essentially tag its authenticity before it’s routed.

As of the June 30 deadline, all major U.S. phone companies use a caller ID authentication system to label calls as part of the regime, known as STIR/SHAKEN. This allows end-to-end phone calls to be verified with a now-common digital tool. AT&T said in June that it is now labeling over one billion calls a month.

Last week’s meeting also yielded other proposals to safeguard limited numbering resources, “protect against national security risks…and further promote public safety” by adding a layer of executive branch oversight to vet those outside the United States seeking access to numbering resources. That includes requiring applicants to disclose foreign ownership information.

This would add to the federal government’s efforts to protect national security from foreign entities perceived as threats to the country, including recently introduced legislation that would prevent the FCC from approving companies with ties to the Chinese Communist Party.


