Privacy, password protections and compliance
June 11, 2015
Password protection is critically important both for users of online accounts and for the organisations that operate them. An organisation has a responsibility to maintain a sufficiently rigorous password system to withstand opportunistic attacks; one option is two-factor authentication. The Privacy Act does not specify the nature of the password protections that must be in place. However, if the overseas experience is any guide, once a security breach is investigated the ensuing review of security often highlights flaws in access controls, commonly archaic, ineffective or poorly maintained password protection systems.
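By way of illustration, the most common consumer form of two-factor authentication is the time-based one-time password (TOTP) generated by authenticator apps. What follows is a minimal sketch, using only the Python standard library, of how an RFC 6238 code is derived; the base32 secret shown in the usage comment is hypothetical.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive an RFC 6238 time-based one-time password (TOTP)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Moving factor: number of 30-second intervals since the Unix epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last byte, and mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret, as issued by a site when 2FA is enabled:
# print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret held by both the site and the user's device, a stolen password alone is not enough to log in.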
A very interesting report released early this month by TeleSign (found here), based on a consumer survey, highlights concerns about the level and quality of online security. The key findings are:
- 80% worry about online security.
- 45% are extremely or very concerned about their accounts being hacked in the past year.
- 40% experienced a security incident, with 70% of those changing their passwords in response.
- 30% are confident that their passwords will protect the security of their online accounts.
- Users have an average of 24 online accounts but use only six unique passwords to protect them.
- 73% of accounts use duplicate passwords.
- 47% are using a password that hasn’t been changed in five or more years.
- 77% have a password that is one year or older.
- 72% would welcome advice on how to protect the security of their online accounts.
- 68% say they want online companies to provide an extra layer of security, such as two-factor authentication, to protect their personal information.
- 61% have not enabled two-factor authentication for any of their online accounts.
- Among those who don’t use two-factor authentication, 56% are unfamiliar with it, 29% don’t know how to turn it on, and 29% said none of their online accounts offer it.
- 54% say they would be more willing to use 2FA if companies guaranteed mobile phone numbers would only be used for account security and never for marketing.
- Of those who use two-factor authentication, 61% did so because the site required it.
- 47% of Millennials (18 to 34 years of age) are extremely or very concerned about hacking, and 81% would like advice on how to protect their accounts.
- Millennials have more online accounts than others but use fewer passwords, with 32% using only 1–3 passwords for all of their accounts vs. 17% of those 35 and older.
- Millennials more often turn on two-factor authentication: 61% vs. 49% of those 35 and older.
Data breaches caused by inadequate or easily breached password protections are ubiquitous. That is a problem both for the consumers, whose default may be rudimentary if not farcical passwords, and for the organisations that have to deal with the consequences of a breach. Often the password protection system is flawed, not properly patched and easy to bypass (see the Adobe breach in 2013).
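The Adobe incident is usually cited as a storage failure: passwords were reversibly encrypted rather than hashed, so a single key unlocked them all. Best practice is to store only a salted, deliberately slow one-way hash, so that a stolen database cannot simply be decrypted. Below is a minimal sketch using Python's standard-library scrypt function; the helper names and cost parameters are illustrative, not a prescription.

```python
import hashlib
import hmac
import os

# Illustrative cost parameters: n (CPU/memory cost), r (block size),
# p (parallelism). Tune these to current hardware guidance.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password):
    """Return (salt, digest); the plaintext password is never stored."""
    salt = os.urandom(16)  # unique per user, defeats precomputed tables
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                               **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, stored)
```

Because the hash is one-way and salted per user, an attacker who steals the database must attack each password individually at the full cost of the key-derivation function.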
Given that the level of compliance in Australia is generally poor, enforcement action by the Privacy Commissioner may cause organisations real problems on this issue. At a minimum the reputational damage can be significant. The Commissioner, like most regulators, has produced guidance on securing personal information (see here), and password protection features in the processes, software and hardware used to secure personal information. In the cyber environment, password protection is part of the minimum requirement. Australian Privacy Principle 11 provides:
11.1 If an APP entity holds personal information, the entity must take such steps as are reasonable in the circumstances to protect the information:
a. from misuse, interference and loss; and
b. from unauthorised access, modification or disclosure.
The guidelines to APP 11 are very broad and general but relevantly provide:
11.8 Reasonable steps should include, where relevant, taking steps and implementing strategies in relation to the following:
- governance, culture and training
- internal practices, procedures and systems
- ICT security
- access security
- data breaches
- physical security
- standards.
The one imponderable is the will, desire or ability of the Privacy Commissioner to take strong action using powers acquired as recently as 15 months ago.
That privacy protection is important to consumers should be clear. A recent study by academics at the University of Pennsylvania and the University of New Hampshire highlights this in a report titled The Tradeoff Fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation. Notwithstanding the slightly volcanic (for academics) prose, the report is a sober and thoughtful analysis of surveys which highlight what has consistently been evident in past surveys: consumers value their privacy and do not see it as a commodity to be traded off. The report also highlights the lack of information made available to consumers about what will be done with their information. The report has been covered in the TechCrunch piece The Online Privacy Lie Is Unraveling, another flourish by a sub-editor. The story provides:
A new report into U.S. consumers’ attitude to the collection of personal data has highlighted the disconnect with commercial claims that web users are happy to trade privacy in exchange for ‘benefits’ like discounts. On the contrary, it asserts that a large majority of web users are not at all happy, but rather feel powerless to stop their data being harvested and used by marketers.
The report authors argue it’s this sense of resignation that is resulting in data tradeoffs taking place — rather than consumers performing careful cost-benefit analysis to weigh up the pros and cons of giving up their data (as marketers try to claim). They also found that where consumers were most informed about marketing practices they were also more likely to be resigned to not being able to do anything to prevent their data being harvested.
“Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. Our study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened,” the authors write.
“By misrepresenting the American people and championing the tradeoff argument, marketers give policymakers false justifications for allowing the collection and use of all kinds of consumer data often in ways that the public find objectionable. Moreover, the futility we found, combined with a broad public fear about what companies can do with the data, portends serious difficulties not just for individuals but also — over time — for the institution of consumer commerce.”
“It is not difficult to predict widespread social tensions, and concerns about democratic access to the marketplace, if Americans continue to be resigned to a lack of control over how, when, and what marketers learn about them,” they add.
The report, entitled The Tradeoff Fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation, is authored by three academics from the University of Pennsylvania, and is based on a representative national cell phone and wireline phone survey of more than 1,500 Americans age 18 and older who use the internet or email “at least occasionally”.
Key findings on American consumers include that —
- 91% disagree (77% of them strongly) that “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing”
- 71% disagree (53% of them strongly) that “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
- 55% disagree (38% of them strongly) that “It’s okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
The authors go on to note that “only about 4% agree or agree strongly” with all three of the above propositions. And even with a broader definition of “a belief in tradeoffs” they found just a fifth (21%) were comfortably accepting of the idea. So the survey found very much a minority of consumers are happy with current data tradeoffs.
The report also flags up that large numbers (often a majority) of U.S. consumers are unaware of how their purchase and usage data can be sold on or shared with third parties without their permission or knowledge — in many instances falsely believing they have greater data protection rights than they are in fact afforded by law.
Examples the report notes include —
- 49% of American adults who use the Internet believe (incorrectly) that by law a supermarket must obtain a person’s permission before selling information about that person’s food purchases to other companies.
- 69% do not know that a pharmacy does not legally need a person’s permission to sell information about the over-the-counter drugs that person buys.
- 65% do not know that the statement “When a website has a privacy policy, it means the site will not share my information with other websites and companies without my permission” is false.
- 55% do not know it is legal for an online store to charge different people different prices at the same time of day.
- 62% do not know that price-comparison sites like Expedia or Orbitz are not legally required to include the lowest travel prices.
Data-mining in the spotlight
One thing is clear: the great lie about online privacy is unraveling. The obfuscated commercial collection of vast amounts of personal data in exchange for ‘free’ services is gradually being revealed for what it is: a heist of unprecedented scale. Behind the bland, intellectually dishonest facade that claims there’s ‘nothing to see here’, a gigantic data-mining apparatus has been manoeuvred into place, atop vast mountains of stolen personal data.
Stolen because it has never been made clear to consumers what is being taken, and how that information is being used. How can you consent to something you don’t know or understand? Informed consent requires transparency and an ability to control what happens. Both of which are systematically undermined by companies whose business models require that vast amounts of personal data be shoveled ceaselessly into their engines.
This is why regulators are increasingly focusing attention on the likes of Google and Facebook. And why companies with different business models, such as hardware maker Apple, are joining the chorus of condemnation. Cloud-based technology companies large and small have exploited and encouraged consumer ignorance, concealing their data-mining algorithms and processes inside proprietary black boxes labeled ‘commercially confidential’. The larger entities spend big on pumping out a steady stream of marketing misdirection — distracting their users with shiny new things, or proffering up hollow reassurances about how they don’t sell your personal data.
Make no mistake: this is equivocation. Google sells access to its surveillance intelligence on who users are via its ad-targeting apparatus — so it doesn’t need to sell actual data. Its intelligence on web users’ habits and routines and likes and dislikes is far more lucrative than handing over the digits of anyone’s phone number. (The company is also moving in the direction of becoming an online marketplace in its own right — by adding a buy button directly to mobile search results. So it’s intending to capture, process and convert more transactions itself — directly choreographing users’ commercial activity.)
These platforms also work to instill a feeling of impotence in users in various subtle ways, burying privacy settings within labyrinthine submenus and technical information in unreadable terms and conditions. Doing everything they can to fog rather than fess up to the reality of the gigantic tradeoff lurking in the background. Yet slowly but surely this sophisticated surveillance apparatus is being dragged into the light.
And as more questions are asked the discrepancy between the claim that there’s ‘nothing to see here’ vs the reality of sleepless surveillance apparatus peering over your shoulder, logging your pulse rate, reading your messages, noting what you look at, for how long and what you do next — and doing so to optimize the lifting of money out of your wallet — then the true consumer cost of ‘free’ becomes more visible than it has ever been.
The tradeoff lie is unraveling, as the scale and implications of the data heist are starting to be processed. One clear tipping point here is NSA whistleblower Edward Snowden who, two years ago, risked life and liberty to reveal how the U.S. government (and many other governments) were involved in a massive, illegal logging of citizens’ digital communications. The documents he released also showed how commercial technology platforms had been appropriated and drawn into this secretive state surveillance complex. Once governments were implicated, it was only a matter of time before the big Internet platforms, with their mirror data-capturing apparatus, would face questions.
Snowden’s revelations have had various reforming political implications for surveillance in the U.S. and Europe. Tech companies have also been forced to take public stances — either to loudly defend user privacy, or be implicated by silence and inaction.
Another catalyst for increasing privacy concerns is the Internet of Things. A physical network of connected objects blinking and pinging notifications is itself a partial reveal of the extent of the digital surveillance apparatus that has been developed behind commercially closed doors. Modern consumer electronics are hermetically sealed black boxes engineered to conceal complexity. But the complexities of hooking all these ‘smart’ sensornet objects together, and placing so many data-sucking tentacles on display, in increasingly personal places (the home, the body) — starts to make surveillance infrastructure and its implications uncomfortably visible.
Plus this time it’s manifestly personal. It’s in your home and on your person — which adds to a growing feeling of being creeped out and spied upon. And as more and more studies highlight consumer concern about how personal data is being harvested and processed, regulators are also taking notice — and turning up the heat.
One response to growing consumer concerns about personal data came this week with Google launching a centralized dashboard for users to access (some) privacy settings. It’s far from perfect, and contains plentiful misdirection about the company’s motives, but it’s telling that this ad-fueled behemoth feels the need to be more pro-active in its presentation of its attitude and approach to user privacy.
Radical transparency
The Tradeoff report authors include a section at the end with suggestions for improving transparency around marketing processes, calling for “initiatives that will give members of the public the right and ability to learn what companies know about them, how they profile them, and what data lead to what personalized offers” — and for getting consumers “excited about using that right and ability”.
Among their suggestions to boost transparency and corporate openness are —
- Public interest organizations and government agencies developing clear definitions of transparency that reflect consumer concerns, and then systematically calling out companies regarding how well or badly they are doing based on these values, in order to help consumers ‘vote with their wallets’
- Activities to “dissect and report on the implications of privacy policies” — perhaps aided by crowdsourced initiatives — so that complex legalese is interpreted and its implications explained for a consumer audience, again allowing for good practice to be praised (and vice versa)
- Advocating for consumers to gain access to the personal profiles companies create on them in order for them to understand how their data is being used
“As long as the algorithms companies implement to analyze and predict the future behaviors of individuals are hidden from public view, the potential for unwanted marketer exploitation of individuals’ data remains high. We therefore ought to consider it an individual’s right to access the profiles and scores companies use to create every personalized message and discount the individual receives,” the report adds.
“Companies will push back that giving out this information will expose trade secrets. We argue there are ways to carry this out while keeping their trade secrets intact.”
They’re not the only ones calling for algorithms to be pulled into view either — back in April the French Senate backed calls for Google to reveal the workings of its search ranking algorithms. In that instance the focus is commercial competition to ensure a level playing field, rather than user privacy per se, but it’s clear that more questions are being asked about the power of proprietary algorithms and the hidden hierarchies they create.
Startups should absolutely see the debunking of the myth that consumers are happy to trade privacy for free services as a fresh opportunity for disruption — to build services that stand out because they aren’t predicated on the assumption that consumers can and should be tricked into handing over data and having their privacy undermined on the sly.
Services that stand upon a futureproofed foundation where operational transparency inculcates user trust — setting these businesses up for bona fide data exchanges, rather than shadowy tradeoffs.
Meanwhile a spat has broken out between the New York Times, with What Tim Cook overlooks in his defence of privacy, and Tim Cook of Apple, over the latter’s attack, in a recent speech, on the practice of all too many companies of lulling users (or misinforming them) into complacency about their personal information, which is then used for corporate benefit. Apple has done more than many in supporting privacy-enhancing features, in particular encryption which defies governmental interference. But, as the article points out, Apple does not exactly have clean hands when it comes to using personal information. It knows the value of personal information and has used that data for its commercial benefit. True. But the NY Times is incorrect in then suggesting, by hinting rather than stating, that even with the bad practices (which the author concedes could be better) we have:
“But it would be insane to argue that we haven’t seen benefits in return for this data. Anyone who uses devices like the ones Apple makes can see that ad-driven businesses like Google, Facebook and Twitter have improved people’s lives in major ways.”
All of this presupposes that there is no other way. It is flawed analysis. It is also quite wrong. Getting proper consents and protections in place is not the mark of the Luddite. The article provides:
Tim Cook, Apple’s chief executive, delivered a speech last week that raised some eyebrows in the technology industry.
“I’m speaking to you from Silicon Valley, where some of the most prominent and successful companies have built their businesses by lulling their customers into complacency about their personal information,” said Cook, who was being honoured by the Electronic Privacy Information Center, a privacy watchdog group.
His blistering defense of privacy, which he and other Apple executives repeated at the company’s developer conference this week, was notable. It isn’t every day that you hear a tech executive admit there is an opaque trade-off at the heart of his industry.
We users give digital giants access to our most private information, and they shower us with technology we can’t do without. It is an arrangement baked into every decision made in every boardroom in Silicon Valley, and it is a bargain that many of us are uneasy about. Now, finally, here was the leader of the world’s most powerful company asking whether that deal is worth the trouble.
But while Cook raised awareness for digital privacy, his speech glossed over two main issues. For one, he neglected to mention that Apple also collects a great deal of data about how we use technology. While it has more protections for that data than many rivals, the company plainly states in its privacy policy that it does use private data in many ways, including to build and market its own products, and to build its own advertising network.
Apple’s most profitable devices sit at the center of a tech ecosystem teeming with businesses that collect our data — and if those social networks, search engines and other free apps didn’t exist, Apple’s products would be far less useful.
Cook also failed to fairly explore the substantial benefits that free, ad-supported services have brought to consumers worldwide. Many hundreds of millions of people now have access to more information and communication technologies because of such services.
An Apple spokeswoman declined to comment.
The fact that Apple goes out of its way to include free services like Google search in its iPhones and iPads suggests that it agrees with the rest of the tech industry — and many users — that ad-supported services can, on balance, be good for the world. The question to ask is not whether we should ever use those free services, but rather whether, when we do use them, we are given enough information and disclosure to be able to make those decisions rationally.
“There are timeless principles around fair dealings with consumers,” said Nuala O’Connor, the president and chief executive of the Center for Democracy and Technology, a tech-focused US think tank. “And the first and main thing is, does the customer know what’s happening to them?”
She argued that if companies were transparent and honest about how they use people’s data, customers could freely weigh the benefits and costs of online services.
In his speech to EPIC, Cook offered a much starker, and less practical, view of privacy. “We don’t want your data,” he said. “We don’t think you should ever have to trade it for a service that you think is free but actually comes at a very high cost.”
That bold pronouncement got me wondering whether Cook uses a different iPhone from every iPhone that Apple has ever sold me. On my iPhone, Google is right there in the search bar, by default. Microsoft’s Bing is built into Siri, and Facebook and Twitter beckon me from the sharing menu.
If Apple really didn’t think that its customers should trade their data for free services, you’d guess that it would build its own ad-free search engine for its devices. But Apple does not do so. Instead, it sells off the search bar to ad-supported search companies.
Analysts at Goldman Sachs say that Apple’s current deal with Google is worth billions; at least indirectly, then, Apple benefits financially from Google’s ad-gotten gains.
And that’s not all. When I go to Apple’s App Store, I’m presented with a bevy of free apps that are supported either in whole or in part by ads. This vibrant marketplace works in Apple’s favour — the more free apps there are, the more useful the iPhone becomes.
That dynamic explains why, in 2010, Apple created iAd, its own advertising network meant to foster the ad-supported app marketplace. iAd lets marketers target users of Apple’s devices based on their purchases from the iTunes store, purchases that Apple of course tracks by default. (You can use an ad-supported search engine like Google to find instructions for opting out.)
There’s nothing terribly wrong with any of this. Yes, there are downsides to the ad-supported tech industry, and, yes, privacy advocates and tech insiders like Cook should continue to push the entire industry to more stringently protect our data. But it would be insane to argue that we haven’t seen benefits in return for this data. Anyone who uses devices like the ones Apple makes can see that ad-driven businesses like Google, Facebook and Twitter have improved people’s lives in major ways.
Among other things, ad-supported services give us instant access to more information than was ever stored in the entirety of the world’s libraries just a few decades ago. They also create systems that allow for instant communication and organisation between more than 1 billion people. They are helping to provide life-changing miniature computers, also known as phones, to people in developing nations for about $US50 each. They’ve given us artificial-intelligence supercomputers that can instantly translate languages or recognise speech. And, indirectly, they’re creating upcoming wonders like self-driving cars and balloons that wire up the globe with internet access.
Cook is fond of arguing that “when an online service is free, you’re not the customer; you’re the product.” That view is simplistic because it overlooks the economic logic of these services, especially the idea that many of them would never work without a business model like advertising.