Another privacy problem with an app

February 3, 2014

Mobile apps are all too often the weak link in privacy protections.  This has been well recognised by regulators; it was the subject of a communiqué known as the Warsaw Declaration on the “appification” of society.  In Track Star, Slate reports on iBeacon being used with third-party apps to track users.  The beauty of the article is that, using the popular app Shopkick, it demonstrates how intrusive the data collection process is and how misleading and, effectively, useless the privacy policies are.  The problems the article identifies regarding privacy policies would probably not be compliant with the Australian Privacy Principles.  In Australia the issue would be that most app developers, especially the start-ups, aren’t covered, because they don’t gross more than $3 million per year.  That is a huge problem, because apps rely on data.  And they are prone to providing data to other apps, compounding the danger of interferences with privacy.  Apps are also notoriously lax on security.

The article provides:

Last year, Apple quietly introduced iBeacon into location services on iOS 7. It’s a technology that can track your position and movements in places like stores and restaurants. It functions kind of like GPS but uses more energy-efficient Bluetooth communication. When you install a third-party app that uses iBeacon, a destination (like a store or a stadium) can know when you enter, where you go, what you look at, and when you leave.
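As a technical aside: an iBeacon itself transmits nothing about you. It simply broadcasts an identifier over Bluetooth, and the app listening for it does the tracking. Apple's published beacon format packs a 16-byte proximity UUID plus two 16-bit numbers, the "major" and "minor" (in retail, typically a store number and a position within it), into the advertisement, followed by a calibrated signal-strength byte used to estimate distance. A minimal sketch of a parser for that payload, with illustrative example values:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific payload of an iBeacon advertisement.

    Layout (per Apple's published format): 2-byte company ID (0x004C,
    little-endian), 1-byte type (0x02), 1-byte length (0x15), 16-byte
    proximity UUID, 2-byte major and 2-byte minor (both big-endian), and
    a signed 1-byte calibrated TX power (measured RSSI at 1 metre).
    """
    if len(mfg_data) != 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon frame
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    tx_power = struct.unpack("b", mfg_data[24:25])[0]
    return {"uuid": str(proximity_uuid), "major": major,
            "minor": minor, "tx_power": tx_power}

# Example frame: a made-up chain UUID, store #7 (major), aisle #42 (minor)
frame = (b"\x4c\x00\x02\x15"
         + uuid.UUID("f7826da6-4fa2-4e98-8024-bc5b71e0893e").bytes
         + b"\x00\x07" + b"\x00\x2a" + b"\xc5")
print(parse_ibeacon(frame))
```

The asymmetry is the point: the beacon is a dumb transmitter, so all of the "when you enter, where you go, when you leave" intelligence lives in the app that recognises the UUID.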

Apps that have access to this kind of tracking data, combined with other personal information, are powerful monitoring tools. Few of us would allow the police or government to track us at this level. What could an app offer that would make us hand over all this data? Discounts, perhaps?

Each person is likely to have a different answer. In every case, though, the data is valuable and there are real risks to sharing it. Choosing to share should be based on a clear understanding of how our data will be collected, used, and passed around. Unfortunately, privacy policies—in this domain and many others—are written to obfuscate and sometimes they are outright misleading.

Take as an example Shopkick, one of the most popular apps starting to use iBeacon. It has more than 6 million users, and, based on its high ratings in the App Store, a lot of them are fans. I’m not singling it out—its policies are fairly typical of what you would see from other apps in this category.

What if the third party tells insurance companies how much time you spend shopping in a marijuana dispensary?

Shopkick’s big draw is that you are rewarded with “kicks” for all kinds of actions, like checking in at a store or scanning an item. Get enough kicks, and you can exchange them for gift cards or other rewards. To qualify for these rewards, you are required to give Shopkick your cellphone number, ZIP code, email, and access to your phone’s microphone (more on that shortly). And if you are using Shopkick at a store with iBeacon, it also knows where you are in a store, where you linger, and what products you are interested in.

Are the perks Shopkick offers worth the privacy trade-off? To answer that, we need to know what information the app collects, which is described in its privacy policy. That includes what it calls “Non-Personally Identifiable Information,” or data that can’t be associated with you by name. This “Non-Personally Identifiable Information” includes things like your birthday, gender, all the location data it gets from GPS and iBeacon, and the commercials you watch on TV.

Did you catch that last one? How can it know what commercials you watch on TV? It accesses your phone’s microphone and listens. From the privacy policy:

[T]he shopkick application may ask you to open the app while you are watching TV, and then we may record or analyze the audio signal from the television set via the shopkick app and your cell phone’s microphone, to determine the commercial, and/or program, including the date and/or time)

For users to qualify for rewards—the main motivation for using Shopkick—they are required to grant mic access to the app.

Shopkick also collects “Personally Identifiable” information, like “your name, mobile phone number, other phone numbers, email address, home address.” You may provide some of it, but it can also come from stores and white pages providers. If you have associated the app with a loyalty card, or if the information’s in an online database, Shopkick may be able to access it even if you don’t explicitly choose to share it.

In other words, knowing exactly who you are is important to Shopkick’s honchos, and they will seek that information out.

Once Shopkick has your data, what does it do with it? More importantly, who does it share it with? Shopkick’s policy says, “We may also share this Non-Personally Identifiable Information with our Affiliated Partners.” It’s not clear what an “Affiliated Partner” is, though.

Does the privacy policy tell us enough to make an informed decision about whether to use the app and hand over our data? Not really. It is vague and misleading (and also typical) on several important points.

First, it is hard to tell what data is being collected. Even though Shopkick’s privacy policy provides an extensive list of data it collects, a lot is left unsaid. For example, does Shopkick access your microphone and listen in any time it’s open? Is it monitoring what you say about a product when you look at it or try it on? Does it record that audio? Is that ever shared?

I reached out to Shopkick and a representative told me that, while the microphone is on when you have the app open, it does not listen to human voices. She explained that the TV commercial initiative was old and that the microphone is currently used only to decode inaudible audio signals that are part of its in-store communication setup. The fact that it doesn’t listen to you talk is good, but the language of the privacy policy seems broad enough to cover audio eavesdropping. The fact that there is ambiguity about what Shopkick does and what it could do in the future is both problematic and common among privacy policies.
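As an aside, Shopkick has not published how its inaudible signalling works, so the sketch below is only the general, well-known technique: a speaker in the store plays a tone near the top of the audible range, and the app checks the microphone input for energy at that frequency. The Goertzel algorithm does this check for a single frequency cheaply. All frequencies and thresholds here are illustrative assumptions:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Relative power of one frequency in a block of samples (Goertzel)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest analysis bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Synthesise 50 ms of an 18,500 Hz tone at a 44.1 kHz sample rate ...
rate = 44_100
tone = [math.sin(2 * math.pi * 18_500 * i / rate) for i in range(2205)]

# ... and compare its power against a frequency nothing was sent on.
present = goertzel_power(tone, rate, 18_500)
absent = goertzel_power(tone, rate, 12_000)
print(present > 100 * absent)  # the 18.5 kHz bin dominates
```

A real deployment would encode a store or aisle identifier by keying several such tones on and off, but the detection primitive is the same.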

Second, when an app or website shares or sells your data, that data enters the hands of another company whose privacy policies we don’t know. Those third parties might be stores that can make us useful offers based on our data. But what if the third party tells insurance companies how much time you spend shopping in your local tobacco shop, liquor store, or marijuana dispensary? Since there are no clear restrictions on the companies that receive our data, we are left to wonder how they use it.

Finally, most privacy policies are extremely misleading about “non-personally identifiable information.” Bits of data that feel anonymous are anything but when taken together. The Electronic Frontier Foundation provides an excellent overview of academic research that shows these traits can uniquely identify a vast majority of the U.S. population. For example, the combination of ZIP code, birthdate, and gender—all “non-personal information” that Shopkick and many other services collect—is unique for about 87 percent of U.S. residents. That means if an app has this data, it usually has enough that you can be individually identified.
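A back-of-envelope calculation shows why those three fields are so identifying. The figures below are rough assumptions, not census data: roughly 42,000 US ZIP codes and about 80 years of plausible birthdates. Real populations cluster into a subset of the combinations, which is why the measured figure is 87 percent rather than 100:

```python
# Back-of-envelope: why ZIP + birthdate + gender is so identifying.
# Rough assumed figures: ~42,000 US ZIP codes, ~80 years of plausible
# birthdates, US population ~316 million (circa 2014).
zip_codes = 42_000
birthdates = 80 * 365
genders = 2

combinations = zip_codes * birthdates * genders
population = 316_000_000

people_per_combination = population / combinations
print(f"{combinations:,} possible combinations")
print(f"{people_per_combination:.2f} people per combination on average")
```

With billions of possible combinations and only hundreds of millions of people, the average cell holds well under one person, so a combination that is occupied at all usually points to exactly one individual.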

The vast majority of companies creating these apps are not malicious. They are trying to provide a service that people find valuable, and there is nothing inherently wrong with profiting from that. Our data helps them provide these services to us, and it helps them make money.

Right now, users are deciding to share their data based on an analysis of the benefits. But that decision should come from an analysis of the risks, too. That requires transparent and informative privacy policies, but the state of privacy policies today is quite the opposite. They are opaque and misleading. Before granting companies access to such critical components of our lives, we should know exactly what they will collect and how well it will be protected. Without that knowledge, we are forced to operate entirely on trust.

On a similar privacy theme, but a different issue, the receiving device defeating the privacy promises of apps, is Privacy Apps Like Snapchat Make a Promise They Can’t Keep.  It provides:

In the wake of the Snowden disclosures, more and more apps are making a promise that people want desperately to believe: You can still control emails, texts, photos, and videos even after you’ve sent them to other people.

We want the digital world to be like the physical world that we learned first. Just as we can show someone a photograph or a page of our diary and then take it back, we sometimes want to send someone an email that they can read only once—while we hold it open for them, as it were.

Every time a politician is embarrassed by text messages he never meant to be made public, every time a high official is brought down by emails that unexpectedly come to light, the demand for apps that can guarantee our safe passage goes up a bit. The makers of apps like Confide (“confidential messages that self-destruct”), Snapchat (“after a snap has been opened … it is deleted from the device’s storage”), and many others understand this demand very well, and they capitalize on it. It’s why such apps tend to attract a disproportionate amount of attention when they launch.

But what makes the promise so dangerous is that it is false. Not just false in practice, but in principle, for reasons that won’t change even as technology improves.

The problem isn’t that the NSA can defeat any app’s security. (Sometimes it can, but not always.) Nor is it that the makers of the apps are untrustworthy and build in “back doors” that would allow them, or those with whom they cooperate, to listen in when they want to. These are legitimate concerns, but they are not the real issue.

The real issue is that this promise depends on something that cannot be depended on: a sender somehow having control over the device on which a message is received. I will explain this technically, but it may be best grasped by analogy: “I’m going to whisper a secret to you, but it will self-destruct in 10 seconds and you won’t have it after that.”

That, in essence, is the marketing pitch for these apps.

Of course, your cellphone and your laptop are not your mind. But they are yours, and that turns out to be the important thing. When someone sends you a message from one of these apps, and the receiving app—the app that is running on your device—wants the photo or text or whatever was sent to self-destruct, here’s what it does: It sends an electronic request to your device, saying “Please, I humbly ask that you delete that thing over there.” That’s it.

The app is now completely at the mercy of your device. If your phone decides not to obey the request, then the photo will stay. Similarly, an app that promises (as Confide does) to let the sender know if someone attempts to take a screenshot relies completely on the receiving device obeying the request “Please let me know if the user tries to take a screenshot.”

The device is always free to lie to the app. Your phone could claim to have deleted a file successfully when it actually didn’t delete the file at all; it could claim that it has temporarily turned off screenshot capability when in fact it is recording everything displayed on the screen to a video file for later review.

There’s simply no way for an app to know.

This is because apps by themselves don’t have any ability to write images to the screen, or turn the microphone on and off, or delete files, or do any other device-level tasks. Apps rely on the phone’s underlying operating system—the core software the phone ships with, such as Google’s Android or Apple’s iOS—for those things, and apps must have faith that the phone (or tablet, laptop, etc.) performs the tasks as requested.
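The trust boundary the article describes can be sketched in a few lines. The classes below are a toy model, not any real messaging API: the sender's "delete" is only a request, and a device that ignores it can answer exactly like one that complies:

```python
# Toy model of the trust boundary: the sending app can only *ask* the
# receiving device to delete, and both devices below answer identically.

class HonestDevice:
    def __init__(self):
        self.storage = {}

    def receive(self, msg_id, content):
        self.storage[msg_id] = content

    def handle_delete_request(self, msg_id):
        self.storage.pop(msg_id, None)
        return "deleted"            # truthful: the message is gone

class LyingDevice(HonestDevice):
    """A 'save everything' device: archives first, then claims success."""
    def __init__(self):
        super().__init__()
        self.archive = {}

    def handle_delete_request(self, msg_id):
        self.archive[msg_id] = self.storage[msg_id]
        return "deleted"            # same reply, nothing actually lost

for device in (HonestDevice(), LyingDevice()):
    device.receive("m1", "self-destructing photo")
    reply = device.handle_delete_request("m1")
    print(type(device).__name__, "->", reply)  # indistinguishable replies
```

Since the sending app sees only the reply, not the device's internal state, no protocol built on these requests can tell the two devices apart; that is the whole of the article's argument in code form.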

With more and more apps coming out that make promises based on that faith, we are staring at an arms race: It is just a matter of time before some manufacturer realizes that security for the sender and security for the receiver are two different things and offers a smartphone with a “save everything” mode, in which every pixel displayed to the screen and every piece of information that flies through the device’s memory is logged for a short period, and the owner is given a chance to review the log and preserve anything she wants.

This feature would be especially easy for Android-based devices to add, because the operating system is already open source—the software code is already published and documented and available for anyone to customize when building a new device. (This is no doubt why the new Blackphone decided to base its operating system on Android, and if its manufacturers are serious that the phone “prioritizes the user’s privacy and control,” then I would expect them to be the first ones off the starting block in this particular arms race.)

In the meantime, everyone should understand these apps for what they are: tools to help your friends avoid accidentally saving private messages from you. When you send messages to people you have no reason to trust, you have no reason to trust their devices either.
