The Current Security State Of Top Fitness Mobile Apps

In January 2018, news broke that Strava, a social fitness-tracking app, had exposed the locations of US military bases around the globe. After the media reports, the company actively worked to improve its privacy and security oversights by restricting who could view user data and adding an opt-out option for its heat map. Those were all good remedies for minimizing future damage, but they are bandages covering up mistakes. Mobile app security should be an active solution that prevents problems, not just fixes them.

In light of this incident, we wanted to take a look at the current security state of the top fitness apps.

The fitness app market has around 260 million users as of 2018 and is expected to surpass 400 million users worldwide by 2020, with 30 million coming from the US alone, according to data from Statista. Most fitness apps, like many other apps, collect a variety of personal data.

Some data may not seem as critical as credit card numbers or passwords. However, any user data can be maliciously leveraged to harm individuals and even society. It can be used to reveal military bases around the world, to map your everyday activities so someone can stalk your daily routine, or to sneak into your photo albums and invade your privacy. Such information could potentially be put to many different uses, some of which could cause more harm than stolen credit card information.

To better understand the overall state of fitness app security, we analyzed the top 10 fitness apps on the market to see what problems they have under the hood. From this internal analysis, we found 10 common security issues. All of the apps we analyzed had critical and medium security vulnerabilities. Moreover, every one of them could be decompiled, which opens the door to subsequent hacking damage. All of the apps shared a critical File Input/Output vulnerability and a medium vulnerability of uncontrolled network behavior. We also found that all of the apps had issues related to the OWASP Mobile Top 10 Risks.

Below are definitions of each vulnerability, along with guidelines that developers can follow during the development phase. We firmly believe that security should be considered from the start of the development process and continuously updated to keep pace with rapidly changing security threats.

File Input/Output

File I/O (input/output) is an essential programming function that allows data to be transferred to or from the app's file system. When files go through I/O operations, they are mapped into data streams, which give the various inputs and outputs a uniform interface. If this part is not secured, I/O functions can be used to inject malicious code to gain read or write access to resources that should not be readily accessible, such as user permissions and file structures.

If you don't protect your file I/O operations, potentially any attacker can send code or files in and out of your application. When file I/O is not correctly protected, hackers can easily bypass file-name validation by manipulating variables that reference files with “dot-dot-slash (../)” sequences, gaining access to files, directories, and even app source code. To avoid this, properly canonicalize and validate any given file path within your application.
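The canonicalize-and-validate step can be sketched in plain Java; the base directory and class name here are illustrative, not taken from any of the apps tested:

```java
import java.io.File;
import java.io.IOException;

public class SafeFileAccess {
    // Hypothetical app-private base directory.
    private static final String BASE_DIR = "/data/data/com.example.app/files";

    /**
     * Returns true only if the requested path, after canonicalization,
     * still resolves inside the app's private directory. getCanonicalPath()
     * collapses "../" sequences, so traversal attempts fail the check.
     */
    public static boolean isPathSafe(String requestedName) {
        try {
            File base = new File(BASE_DIR);
            File requested = new File(base, requestedName);
            String canonical = requested.getCanonicalPath();
            return canonical.startsWith(base.getCanonicalPath() + File.separator);
        } catch (IOException e) {
            return false; // paths that cannot be canonicalized are rejected
        }
    }
}
```

With this guard, `isPathSafe("notes.txt")` passes while `isPathSafe("../../etc/passwd")` is rejected, because the canonical form of the latter escapes the base directory.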

Developers should store only the data they need in local files and encrypt it.


Intent

An intent is a messaging mechanism that allows apps and components to communicate with one another by passing messages; it specifies a procedure to call and the arguments to use. Applications declare in a manifest file which intents each component services, as well as permissions at the application and component level. If app intents are not secured, these same intents can easily be called to view other instances of intents hidden within the mobile application. These hidden intents could contain cached data, which may include sensitive user information like passwords and credentials.

If there are insecure permissions in your AndroidManifest.xml file, attackers could call these intents manually and display them on the mobile device. The best action for developers is to limit permissions or make intents private altogether by setting android:exported=”false”. If an intent is needed, developers can change it back to ”true” and restrict access to the provider by setting permission attributes.

Developers should set intent exports to “false” by default until they are actually needed.
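In the manifest, the restriction looks like the sketch below; the component names and the custom permission are hypothetical placeholders:

```xml
<!-- Private by default: no other app can reach this provider. -->
<provider
    android:name=".UserDataProvider"
    android:authorities="com.example.app.provider"
    android:exported="false" />

<!-- If a component must be exported, guard it with a permission. -->
<activity
    android:name=".ShareActivity"
    android:exported="true"
    android:permission="com.example.app.permission.SHARE" />
```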


URL Scheme

URL schemes are intents that allow applications to communicate with servers and web pages from inside an app. They generate an intent object based on the URL and then try to start activities with that intent. A typical browser handles this in three steps: parsing the URL, filtering the intent, and launching the activity. Parsing the URL generates the intent object from the intent scheme within the app. Next, the browser filters the intent object; the filter is a protective layer meant to stop intent-based attacks, but most filters are insufficient or, more often, nonexistent. Lastly, the browser launches the activity with the filtered intent.

Unfortunately, the Android API documentation does not explain URL schemes in much detail, so they are not widely known or appropriately secured by app developers. Insecure intent scheme URLs give malicious web pages a chance to conduct intent-based attacks against apps with virtually no protection. These attacks can lead to SQL injection against web server communication or authentication bypass through modified server response values. Browsers have taken measures to reduce the risks, but these measures are not enough, and app developers cannot rely solely on browser developers for security.

Developers must implement their own security provisions for URL functions instead of relying on browsers.
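One such provision is an allowlist check on any URL before it is turned into an intent. A minimal sketch in plain Java, with an assumed scheme and host allowlist:

```java
import java.net.URI;
import java.util.Set;

public class UrlSchemeValidator {
    // Illustrative allowlists: only HTTPS links to our own backend pass.
    private static final Set<String> ALLOWED_SCHEMES = Set.of("https");
    private static final Set<String> ALLOWED_HOSTS = Set.of("api.example.com");

    /** Rejects malformed URLs and any scheme/host not explicitly allowed. */
    public static boolean isAllowed(String url) {
        try {
            URI uri = new URI(url);
            return uri.getScheme() != null
                    && ALLOWED_SCHEMES.contains(uri.getScheme().toLowerCase())
                    && uri.getHost() != null
                    && ALLOWED_HOSTS.contains(uri.getHost().toLowerCase());
        } catch (Exception e) {
            return false; // unparseable input is treated as hostile
        }
    }
}
```

An `intent://` or `javascript:` URL handed to this check is rejected before any activity is launched.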


Log

Developers often use log classes to store a history of events or transactions for later review, statistics, or debugging. Depending on the nature of the application, reviewing log files may be performed manually or automated with a tool that culls logs for significant events or trending information. Log files are invaluable to developers for improving app performance and usability, but if not appropriately secured, log classes can be accessed by any other application with the READ_LOGS permission enabled.

With improper protections, developers give attackers the ability to use a standard command to view all the logs on the device. Hackers can quickly write a malicious app and read the logs with elevated privileges on any rooted device, or inject code and other commands into the log file to take advantage of a vulnerability in the log-processing utility.

Developers need to disable log reading or restrict it to only the classes that need access.
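Beyond restricting permissions, it helps to assume logs will leak and scrub sensitive values before they are written. A minimal redaction sketch (the key names in the pattern are illustrative; on Android this would typically also be gated on a debug-build flag):

```java
import java.util.regex.Pattern;

public class SafeLogger {
    // Hypothetical key/value pairs that must never reach the log.
    private static final Pattern SENSITIVE = Pattern.compile(
            "(password|token|secret)=\\S+", Pattern.CASE_INSENSITIVE);

    /** Replaces sensitive values with a placeholder before logging. */
    public static String redact(String message) {
        return SENSITIVE.matcher(message).replaceAll("$1=[REDACTED]");
    }
}
```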


Reflection

Reflection is a programming technique provided in most APIs that lets developers treat class definitions as objects. It gives the programmer a way to view and modify class members such as constructors, methods, interfaces, and fields at runtime, and it is often used in common development tools like class browsers and GUI designers. Reflection is a powerful technique that enables applications to perform operations that would otherwise be impossible.

However, reflection is a relatively advanced feature and should be used only by developers who have a firm grasp of the fundamentals of the language. If it is not correctly implemented, a malicious user can manipulate objects at runtime, injecting damaging code into the application. Attackers may be able to view or modify which classes are used, or even run arbitrary code, bypassing security controls altogether.

Developers must only implement reflection when they fully grasp the language; otherwise, it’s an unneeded risk.
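When reflection is unavoidable, one common defensive pattern is to restrict it to an explicit allowlist of class names rather than accepting arbitrary input. A sketch, with an illustrative allowlist:

```java
import java.util.Set;

public class SafeReflection {
    // Illustrative allowlist: the only classes we will instantiate reflectively.
    private static final Set<String> ALLOWED = Set.of(
            "java.util.ArrayList",
            "java.util.HashMap");

    /** Loads a class only if its fully qualified name is on the allowlist. */
    public static Object newInstance(String className) {
        if (!ALLOWED.contains(className)) {
            throw new SecurityException("Class not allowed: " + className);
        }
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A request for an unexpected class such as `java.lang.Runtime` fails fast instead of handing the attacker a foothold.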


SDK

An SDK is a kit that contains a set of tools, libraries, relevant documents, sample code, processes, and guides: pretty much anything a developer may need to create apps on a specific platform. The mobile app world moves fast, so developers rely on SDKs to quickly beef up performance and other aspects of their application so they can gain and keep users. SDKs are often made by third-party developers and come with pre-built functionality for the fastest possible implementation.

But, like most third-party solutions, they can carry security issues that developers need to look at very carefully. If an SDK is compromised or has malicious code embedded in it, it can affect every app that uses it. So, a best practice when adding SDKs to your app is to choose sparingly. Adding extra, often unneeded SDKs to your system can lead to unnecessary security risks and exposure of users' sensitive data.

Developers should only use necessary SDKs that improve the function of the application.

Deprecated Objects

Deprecated objects are program functions that are still supported but obsolete, intended to be removed once a new version is produced; often, however, they are left in place. A deprecated object is one that programmers are discouraged from using, typically because it is or will become irrelevant. Leaving these functions in place is dangerous to the app's overall security, and most often a better alternative exists and should be implemented.

These old, unneeded functions can create security risks as well as take up extra code space. The presence of an obsolete function often signals to an attacker that the surrounding code has been neglected and may be forgotten, leaving it open to exploitation. If an application uses deprecated or obsolete functions, the probability is higher that security problems are lurking nearby.

Developers’ best course of action is to remove any and all unused and outdated code.

Here are the common problems under the OWASP Mobile Top 10 guidelines.

M2 ‑ Insecure Data Storage

“This new category is a combination of M2 and M4 from Mobile Top Ten 2014. It covers insecure data storage and unintended data leakage.” – Mobile Top 10 2016-M2-Insecure Data Storage

Local data

Local data is very much what it sounds like: data that is stored locally by the application. This makes internal storage an excellent place for data that users don't need to access directly. The system provides a directory on the file system for each app where you can organize any files your app needs. Files saved to internal storage are private by default, but developers often assume that users or malicious parties will not access a mobile device's file system and leave local data unprotected.

External data

Just like local data, external data is data that can be stored outside the application. Virtually every device supports external storage to give users the ability to move data off their devices. Most often, external storage is used to make data accessible to other apps or to preserve it if the user uninstalls the app, as with photos or downloaded files. But files saved to external storage are world-readable and can be modified by the user when USB storage is enabled to transfer files to a computer, making them a prime target for attacks.

To protect against insecure data storage vulnerabilities, here are some useful rules of thumb. This list is built on the guidelines of OWASP by... oh wait, it's from us! Sorry, that's our only boast in this post.

  • Do not store credentials on the phone's file system. Require users to identify themselves with a standard login each time the application is opened, and ensure that appropriate session timeouts are in place.
  • Be particular about the cryptography being implemented, and use solutions that avoid leaking the binary signatures often found in encryption libraries.
  • Avoid using hardcoded encryption or decryption keys.
  • Add another layer of encryption beyond the default encryption methods provided by the operating system. 

Developers need to consider precisely what files are used and accessible on their apps.
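The "extra layer of encryption" advice can be sketched with the standard javax.crypto API. This is a minimal AES-GCM example; on a real Android app the key would be generated in and fetched from the Android Keystore rather than held in memory as it is here:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;

public class LocalDataCrypto {
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Encrypts with AES-GCM; the random IV is prepended to the ciphertext. */
    public static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] ct = c.doFinal(plaintext);
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Splits off the IV and decrypts; tampering fails the GCM tag check. */
    public static byte[] decrypt(SecretKey key, byte[] blob) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(GCM_TAG_BITS, Arrays.copyOfRange(blob, 0, IV_BYTES)));
            return c.doFinal(Arrays.copyOfRange(blob, IV_BYTES, blob.length));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

GCM is chosen here because it authenticates as well as encrypts, so a tampered local file fails to decrypt instead of silently yielding garbage.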

M8 – Code Tampering

“This category covers binary patching, local resource modification, method hooking, method swizzling, and dynamic memory modification.” – Mobile Top 10 2016-M8-Code Tampering

Code tampering comprises modifications to code within an application that were not intended to be changed. Once an application is installed on a mobile device, all of its code and data resources are housed on the device. Mobile app code runs in the device's environment, so it is no longer entirely under the developer's control, meaning mobile code is inherently vulnerable to attack and code tampering. Attackers can use these weaknesses to cause damage ranging from directly modifying the source code, to altering the contents of app memory, to changing or removing the application's APIs, to manipulating the data and resources held within the app.

A good practice is to check for test-keys. Android test-keys are public knowledge, which means anyone could replace or hijack your app if you don't have protection. Look at build.prop to see if it includes test-keys, which denotes a developer build or unofficial ROM. Also, implement a unique set of release-keys that only you have access to. OWASP suggests implementing a real-time monitoring solution that detects any additions or changes to your code. Make sure your solution can actively look for jailbroken or rooted devices and allows for remote shutdown or deletion; this will make prevention more streamlined.

Developers can implement monitoring solutions to stop rooted-device attacks on their applications.
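The test-keys check described above boils down to inspecting the build tags. On Android the tags string comes from android.os.Build.TAGS; in this sketch it is passed in as a parameter so the logic stays platform-independent:

```java
public class BuildIntegrity {
    /**
     * Returns true if the build tags indicate a test-signed or developer
     * build, i.e. a ROM anyone could have signed with the public test-keys.
     */
    public static boolean isTestBuild(String buildTags) {
        return buildTags != null && buildTags.contains("test-keys");
    }
}
```

An app might refuse to store credentials, or trigger its monitoring solution, when this check fires.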

M9 – Reverse Engineering

“This category includes analysis of the final core binary to determine its source code, libraries, algorithms, and other assets.” – Mobile Top 10 2016-M9-Reverse Engineering

Reverse engineering is the act of analyzing an application's binary code and deconstructing it for malicious purposes. Almost all mobile code is subject to reverse engineering, and mobile apps are no exception. Commonly used languages such as Java, .NET languages, Objective-C, and Swift carry the highest risk of reverse engineering because of their support for dynamic introspection, or reflection, discussed above. Coupled with the many free binary injection tools that give insight into the internal operations of an application, this makes reverse engineering quite easy.

Because of the inherent nature of code, and because most current languages use large amounts of metadata, apps are increasingly prone to attack. Metadata is an excellent tool for developers to debug and improve their apps, but it is also a fantastic tool for hackers trying to understand how an application works. The best step to minimize reverse engineering, once again per OWASP, is obfuscation. For the best protection, OWASP advises developers to pick solutions with these abilities:

  • Narrow down what methods/code segments to obfuscate;
  • Tune the degree of obfuscation to balance performance impact;
  • Withstand de-obfuscation from tools like IDA Pro and Hopper;
  • Obfuscate string tables as well as methods.

Developers must incorporate obfuscation into their security protocols to prevent reverse engineering.
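On Android, those abilities map naturally onto a ProGuard/R8 configuration. A sketch with illustrative class names and dictionary file (R8 obfuscates everything not explicitly kept):

```
# Keep only the public entry point readable; everything else gets renamed.
-keep public class com.example.app.MainActivity

# Flatten the package hierarchy to make decompiled output harder to follow.
-repackageclasses ''

# Draw obfuscated names from a custom dictionary instead of a, b, c...
-obfuscationdictionary dictionary.txt

# Keep the mapping file so your own crash reports stay readable.
-printmapping mapping.txt
```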

These are the vulnerabilities shared across all of the fitness applications we tested, but they are not the only issues. Other notable security risks shared by some of these applications were Exported Providers and M6 – Insecure Authorization. Fitness applications seem to have garnered limited security and protection, perhaps because developers and users alike feel the information stored in these apps is not as valuable. But as fitness apps become more popular and interconnected, the data they collect will gain more value.

“Fitness-tracking companies can pinpoint where people live, how often they sleep, and even when they are engaged in sexual activity based on data collected,” said Mark Weinstein, a privacy advocate and chief executive officer of social network MeWe in an article from the New York Post.

Fitness app security needs to be taken seriously by developers and users alike. When developing an app of any kind, development teams need to keep security at the front of their minds and integrate protections at the earliest possible moment. We know that development timelines for apps are accelerated, but developers still need to offer a layer of protection for their users, especially when that data can be so valuable. If developers learn from the mistakes described above, they will have secured their applications tenfold.

Because when 75% of all mobile apps fail basic security testing, something needs to change. So, as your users enjoy that peaceful jog around the neighborhood, tracking their best time, think about who else might find that data valuable.

*A version of this post first appeared on June 4, 2018 in Security Boulevard.

