

Comparing Privacy Policies for Wearable Fitness Trackers: Apple, Fitbit, Xiaomi and Under Armour



Photo by Kārlis Dambrāns used with permission

WASHINGTON, September 6, 2019 – Wearable fitness trackers are yet another means of collecting personal data. Even so, the market for wearables is booming, because people want to optimize their fitness routines through self-monitoring of vital signs.

Aside from personal use, fitness tracker data is valuable to health insurance companies, as it offers a more efficient way to analyze a person’s fitness patterns. John Hancock, for instance, offers the Apple Watch at a steep discount when paired with an insurance plan.

As these devices can track intricate details such as heart rate, calorie consumption and GPS coordinates, transparency about these practices is a must. Amid the diverse fitness tracker market, Apple and Fitbit are considered leading manufacturers. In addition to spearheading innovation in the fitness tech industry, Apple and Fitbit have been reported to have some of the most consumer-friendly policies.

Their competitors, on the other hand, still have some way to go before they reach a comparable level of security. Xiaomi and Under Armour are two fitness tracker manufacturers that have had shortcomings in their privacy and security practices.

Describing these four companies in such broad strokes oversimplifies their approaches, however. It is important to look at each company’s privacy practices individually and determine what benefits and gaps they have.


Apple

Apple has one of the most robust and easy-to-read privacy policies among major tech firms. Praised for the transparency of its computer and mobile devices, the Cupertino-based company puts the same effort into its wearables.

On its website, Apple states that privacy is a “fundamental human right.” It then goes into detail about the various ways in which a user’s privacy is protected. Concerns that Apple touches upon include encryption, third-party usage of app data and how users can modify and/or delete their data.

With the advent of the EU’s General Data Protection Regulation, multinational companies needed to update their policies to comply with international standards.

Last year, Apple launched a privacy portal allowing users to obtain a copy of the personal data associated with their account. This includes Apple ID info, App Store activity and data stored in iCloud. Additionally, the online portal has a page allowing users to correct their data and deactivate or delete their account.

For these reasons, the iPhone and Apple Watch have overlapping data protections. Apple Watch’s terms and conditions underscore the ability to control data via the privacy portal. This data is shareable with Apple’s affiliates and is combined with other personal information the company has obtained, as well as with de-identified data.

Apple Watch terms note that photo and location data is shared, though it does not guarantee that data is accurate. Users are also informed that Apple can limit Watch use without any notice, and are advised to consult with a physician before starting a fitness program with the Watch.


Fitbit

Fitbit is Apple’s like-minded competitor, fully disclosing what its devices do and who can access the information. Much of its privacy approach mirrors Apple’s; however, Fitbit also has a separate privacy policy for children.

From the start, Fitbit discloses that it does not produce medical devices and that its trackers are not meant to replace doctor consultations. Fitbit’s premium services also incorporate data from non-paying users.

In the summary of its privacy policy, Fitbit states that transparency “is the key to any healthy relationship.” Following Apple’s protocol, Fitbit takes personal data such as height, weight and sleep patterns to give the user an accurate and relevant portrayal of their fitness regime.

Fitbit writes that it “may personalize exercise and activity goals” based on past activity data and goals a user has previously set.

Under Fitbit’s child policy, parents have control over their child’s data until they’re 13 years of age. Persons under 13 are not permitted to create accounts without parental consent, and parents must consent to the use of their child’s data in accordance with Fitbit’s Privacy Policy for Children’s Accounts.

A separate link outlines these policies in detail. If parents have reason to believe that their child’s data was submitted without consent, they can contact Fitbit to request the removal of that data.


Xiaomi

At a glance, this China-based tech company seems to have only minor privacy discrepancies, and it sells fitness trackers at much lower prices than the mainstream manufacturers. However, security issues and limited disclosure give Xiaomi lower credibility than its competitors.

Xiaomi specifically states that neither it nor its suppliers and distributors make any specific promises about the service. Xiaomi claims it is committed to upholding privacy policies worldwide; however, it explicitly states that any disputes over the company’s terms and conditions “will be litigated exclusively” in Chinese courts, where the user and Xiaomi “consent to personal jurisdiction.”

Another concerning factor of the privacy agreement is that Xiaomi doesn’t clarify what happens to personal information in the event of a merger or acquisition. The company only states that users will be notified.

Furthermore, Xiaomi fitness trackers have been found vulnerable to Bluetooth MAC address surveillance, a flaw that’s not uncommon among trackers. According to a study by Open Effect, when a fitness tracker’s MAC address doesn’t change, it becomes easier for third parties to monitor the wearer’s movements.
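The tracking risk the Open Effect study describes can be illustrated with a minimal sketch (the MAC addresses, locations, and sightings below are all invented for illustration, not real scanner data): a device that broadcasts the same Bluetooth MAC address everywhere lets independent observation points be stitched into a movement profile, while a randomized address breaks that linkage.

```python
from collections import defaultdict

# Simulated Bluetooth advertisement sightings as (mac, location) pairs.
# A tracker with a static MAC is trivially linkable across scanners;
# one that randomizes its MAC per session is not.
sightings = [
    ("AA:BB:CC:DD:EE:FF", "coffee shop"),
    ("AA:BB:CC:DD:EE:FF", "gym"),
    ("AA:BB:CC:DD:EE:FF", "office"),
    ("7C:2F:80:11:22:33", "gym"),  # randomized address, seen only once
]

def build_movement_profiles(sightings):
    """Group sighting locations by MAC address."""
    profiles = defaultdict(list)
    for mac, location in sightings:
        profiles[mac].append(location)
    return dict(profiles)

profiles = build_movement_profiles(sightings)
# The static MAC yields a multi-location movement profile;
# the randomized address reveals nothing beyond a single sighting.
```

This is why periodic MAC address randomization, which many modern devices support, is considered a basic privacy protection for wearables.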

Overall, the vague and somewhat intimidating nature of Xiaomi’s terms and conditions undermines the company’s transparency.

Under Armour

Under Armour faced significant scrutiny after its subsidiary MyFitnessPal was compromised in a data breach involving 150 million accounts. Yet the company’s terms and conditions still leave room for improvement. The amount of data it collects and the default sharing options available are particularly concerning.

Moreover, Under Armour’s online privacy policy hasn’t been updated in over a year. Its terms outline the different ways it collects, uses, discloses and processes personal data, but it is unclear to what extent UA shares personal data with other companies.

Beyond UA’s extensive yet vague privacy approach, UA can track a user’s location even when its app isn’t running. Sharing preferences such as Activity Stats, Community Social Data and Lookup Information are also set to public by default, and setting an entire account to private mode would prevent even those on the user’s friend list from finding them.

California residents have some additional rights under UA’s privacy agreement thanks to the state’s legal framework. Once a year, they may request a list of the personal data the company disclosed to third parties for direct marketing purposes. However, UA claims that it does not share personal data for this purpose, citing the California Civil Code.

As with most multinational companies, users residing in the EU have the right to request deletion of their account.

Although UA does provide ways for users to opt out of public settings, the fact that these are the default is alarming. UA also suffers from a lengthy and difficult-to-understand terms and conditions agreement. This suggests that privacy is not at the top of UA’s priorities, which is perplexing given the aftermath of its data breach.


Federal Communications Commissioner Starks Seeks to Encourage Democratic Principles Online

The commissioner noted the peril democracy and citizen privacy find themselves in around the world.



Federal Communications Commissioner Geoffrey Starks

WASHINGTON, January 14, 2021 – Speaking at an event hosted by Bridge for Innovation on Tuesday, Federal Communications Commissioner Geoffrey Starks said the private sector must lead in the fight to promote democracy and digital privacy rights online.

With increasing challenges to democracy around the world, citizen surveillance efforts by several international governments, and domestic concerns over privacy on social media platforms, Starks said private sector entities should work to set standards that promote democratic principles and privacy for citizens.

Just this month, Facebook won a lawsuit over third-party access to users’ personal data by companies such as Cambridge Analytica, the British political consulting firm made famous when it was investigated in connection with alleged Russian interference and collusion in the 2016 United States presidential election.

Starks also emphasized that international diplomatic and regulatory bodies play a key role in upholding these norms.

He stated that China is looking to step up its role in these international bodies in attempts to influence policy to gain greater control over its citizens’ political activities and limit their privacy rights online.

At the beginning of November, President Joe Biden’s administration announced an initiative with several international allies to share information on the surveillance programs of authoritarian regimes, with a key focus on the actions of the Chinese government.

Additionally, Biden said he would take action to limit U.S. exports to China of technology that China uses for surveillance efforts.

U.S. technologies are on record being used in China for citizen surveillance, military modernization and persecution of Muslim Uyghurs in Xinjiang.

Looking to domestic broadband expansion efforts following the enactment of the bipartisan Infrastructure Investment and Jobs Act, Starks said the FCC will soon be collecting and posting pricing information from internet service providers which participate in the Affordable Connectivity Program.



Congress Must Avoid ‘Overly Prescriptive’ Incident Reporting To Avoid Missing Larger Cyberattacks

Too many reports could burden federal officials, said the executive director of the Alliance for Digital Innovation.



Rep. Debbie Schultz, D-Florida

WASHINGTON, January 11, 2022 — The executive director of an organization that pushes information technology reform in government testified Tuesday before the House Oversight Committee that any incident reporting requirements Congress is considering should not burden officials so much that they end up missing more serious cybersecurity breaches.

Ross Nodurft of the Alliance for Digital Innovation told lawmakers studying the reform of the Federal Information Security Management Act, a 2002 law which implements an information security and protection program, that the amended legislation should consider keeping Congress abreast of incidents, but should be mindful of how it defines a security problem.

“As Congress considers defining major incidents or codifying vulnerability response policies, any legislation should be mindful of the dynamic nature of responding to cybersecurity challenges facing government networks,” Nodurft said. “If Congress is overly prescriptive in its definition of an incident, it runs the risk of receiving so many notifications that the incidents which are truly severe are missed or effectively drowned out due to the frequency of reporting,” he said in prepared remarks.

The comments come on the heels of a year of major cybersecurity attacks, including those on software company SolarWinds and oil transport company Colonial Pipeline, which prompted a Senate hearing on the matter. The House Oversight Committee released details of its investigation into some of the breaches in November.

The comments also come after lawmakers proposed new reporting requirements on companies. Those proposed laws would make it mandatory that small and large companies report incidents to the government so they can best prepare a response to protect Americans.

In July, Sens. Mark Warner, D-Virginia, Marco Rubio, R-Florida, and Susan Collins, R-Maine, introduced the Cyber Incident Notification Act of 2021, which requires federal and private sector cybersecurity intrusions to be reported to the government within 24 hours.

Cyber incident reporting was recently left out of a Senate bipartisan version of the National Defense Authorization Act.

Lead cybersecurity officials in government have been calling for mandatory breach reporting to government. Brandon Wales, executive director of the Cybersecurity and Infrastructure Security Agency, told the same Oversight committee in November that Congress should force companies to share that kind of information. Last summer, a Department of Justice official said he supports mandatory breach reporting.

In October, Secretary of State Antony Blinken announced the department intends to create a new cyber bureau to help tackle the growing challenge of cyber warfare.

Agency roles should be clarified

Rep. Debbie Schultz, D-Florida, talked about the varied organizations and institutions in her state that have been affected by cyberattacks and threats, including the Miami-based software company Kaseya, which experienced a major ransomware attack.

Schultz stated that there are two entities that are critical to federal cybersecurity: the Cybersecurity and Infrastructure Security Agency and the Office of the National Cyber Director.

Grant Schneider, senior director of cybersecurity services at Venable, said that the Office of the National Cyber Director acts as a conductor in the framework of FISMA. These organizations work with others, such as the National Institute of Standards and Technology and the Office of Management and Budget.

With so many organizations involved, Nodurft explained how important it is for roles within them to be defined, and for agencies to know where to turn to report cyberattacks. To that end, he continued, agencies that “are proactively trying to mitigate their cyber risks” need clear reporting channels and clear areas of jurisdiction for various issues.

According to Nodurft, these defined roles would “make it much easier for [agencies] to work together, to build a broader defensive structure.”



FCC Narrows Small Provider Group for Accelerated Robocall Compliance Timeline

Providers that are not facilities-based will need to meet their robocall obligations by June 2022.



FCC Chairwoman Jessica Rosenworcel

WASHINGTON, December 14, 2021 – The Federal Communications Commission said Friday it will provide facilities-based voice service providers a full two-year extension for complying with robocall regulations, while moving up the deadline for certain small operators to comply.

The agency originally ruled earlier this year that all small voice service providers with 100,000 or fewer subscribers must comply by June 2022 with the STIR/SHAKEN regulations, a regime that requires operators to digitally validate the authenticity of a phone number, giving consumers certainty that the number matches that of the supposed caller. The June 2022 date was moved up earlier this year from an original June 2023 deadline. The regime has been in place for large carriers since June of this year.
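The validation idea behind STIR/SHAKEN can be sketched loosely as follows. This is a simplified, hypothetical illustration using a shared HMAC key; the real regime uses certificate-based, ES256-signed PASSporT tokens carried in SIP Identity headers, and the key name and phone numbers here are invented.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared key standing in for the originating carrier's
# signing credential (real STIR/SHAKEN uses per-carrier certificates).
CARRIER_KEY = b"originating-carrier-secret"

def sign_call(orig_number, dest_number, attestation="A"):
    """Originating carrier attests the caller is entitled to the number."""
    claims = json.dumps(
        {"orig": orig_number, "dest": dest_number, "attest": attestation},
        sort_keys=True,
    ).encode()
    sig = hmac.new(CARRIER_KEY, claims, hashlib.sha256).digest()
    return claims, base64.b64encode(sig).decode()

def verify_call(claims, sig):
    """Terminating carrier checks the signature before trusting caller ID."""
    expected = hmac.new(CARRIER_KEY, claims, hashlib.sha256).digest()
    return hmac.compare_digest(base64.b64encode(expected).decode(), sig)

claims, sig = sign_call("+12025550100", "+12025550199")
assert verify_call(claims, sig)  # untampered call verifies
# A spoofed originating number no longer matches the signature:
assert not verify_call(claims.replace(b"0100", b"0666"), sig)
```

The expense carriers cite comes largely from deploying and managing this signing infrastructure across their call-routing equipment, which is why smaller operators sought extra time.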

But after reviewing further evidence, the agency on Friday argued that a smaller “subset” of affected carriers that don’t have networks “are originating an increasing quantity of illegal robocalls.”

As a result, the FCC requires those non-facilities-based providers to continue to work toward the June 2022 deadline to comply with the regime, which operators have said is a highly technical and expensive endeavor. By narrowing the group, the FCC effectively allowed facilities-based operators to have the full compliance extension, until June 2023.

Friday’s decision follows submissions to the agency by facilities-based carriers who argued they should be granted a full extension to June 2023 precisely because the preponderance of illegal spam calls doesn’t originate from them.

The Competitive Carriers Association, NTCA and USTelecom argued that facilities-based providers shouldn’t be penalized for calls that largely don’t traverse their networks.

The NTCA said in an August submission that “care must be taken to correctly identify this group of small providers in a surgical and precise manner that does not sweep in innocent actors and compel them to adopt this standard on a timeframe they had neither anticipated nor budgeted for.”

They also argued that the burden of proof is on the non-facilities-based carriers to show why they need additional time.


