Harvard Law Review articles on surveillance and privacy

December 6, 2013 |

The most recent Harvard Law Review (Volume 127, November 2013) has published replies to two excellent earlier papers, The Dangers of Surveillance, 126 Harv. L. Rev. 1934 (2013), and Toward a Positive Theory of Privacy Law, 126 Harv. L. Rev. 2010 (2013). Those papers were delivered at a symposium on privacy law held earlier this year, and all of the papers delivered at that symposium were excellent. The regulatory structure of US privacy law differs from Australia's, and there is a constitutional overlay there, with the Fourth and Fourteenth Amendments serving as touchstones for some privacy jurisprudence (usually the most high-profile cases), which is absent in Australia. Even so, there is sufficient conceptual similarity for Australian practitioners of privacy law to benefit from reviewing these papers. Technology moves apace around the world, and the law in every jurisdiction is (sometimes) trying to catch up and grapple with the right balance on a range of issues, including freedom of expression and law enforcement.

The Dangers of Surveillance

The 32-page article is found here (in PDF format). The synopsis provides:

From the Fourth Amendment to George Orwell’s Nineteen Eighty-Four, and from the Electronic Communications Privacy Act to films like Minority Report and The Lives of Others, our law and culture are full of warnings about state scrutiny of our lives. These warnings are commonplace, but they are rarely very specific. Other than the vague threat of an Orwellian dystopia, as a society we don’t really know why surveillance is bad and why we should be wary of it. To the extent that the answer has something to do with “privacy,” we lack an understanding of what “privacy” means in this context and why it matters. We’ve been able to live with this state of affairs largely because the threat of constant surveillance has been relegated to the realms of science fiction and failed totalitarian states.

But these warnings are no longer science fiction. The digital technologies that have revolutionized our daily lives have also created minutely detailed records of those lives. In an age of terror, our government has shown a keen willingness to acquire this data and use it for unknown purposes. We know that governments have been buying and borrowing private-sector databases, and we recently learned that the National Security Agency (NSA) has been building a massive data and supercomputing center in Utah, apparently with the goal of intercepting and storing much of the world’s Internet communications for decryption and analysis.

Although we have laws that protect us against government surveillance, secret government programs cannot be challenged until they are discovered. And even when they are, our law of surveillance provides only minimal protections. Courts frequently dismiss challenges to such programs for lack of standing, under the theory that mere surveillance creates no harms. The Supreme Court recently reversed the only major case to hold to the contrary, in Clapper v. Amnesty International USA, finding that the respondents’ claim that their communications were likely being monitored was “too speculative.”

But the important point is that our society lacks an understanding of why (and when) government surveillance is harmful. Existing attempts to identify the dangers of surveillance are often unconvincing, and they generally fail to speak in terms that are likely to influence the law. In this Article, I try to explain the harms of government surveillance. Drawing on law, history, literature, and the work of scholars in the emerging interdisciplinary field of “surveillance studies,” I offer an account of what those harms are and why they matter. I will move beyond the vagueness of current theories of surveillance to articulate a more coherent understanding and a more workable approach.

At the level of theory, I will explain why and when surveillance is particularly dangerous and when it is not. First, surveillance is harmful because it can chill the exercise of our civil liberties. With respect to civil liberties, consider surveillance of people when they are thinking, reading, and communicating with others in order to make up their minds about political and social issues. Such intellectual surveillance is especially dangerous because it can cause people not to experiment with new, controversial, or deviant ideas. To protect our intellectual freedom to think without state oversight or interference, we need what I have elsewhere called “intellectual privacy.” A second special harm that surveillance poses is its effect on the power dynamic between the watcher and the watched. This disparity creates the risk of a variety of harms, such as discrimination, coercion, and the threat of selective enforcement, where critics of the government can be prosecuted or blackmailed for wrongdoing unrelated to the purpose of the surveillance.

At a practical level, I propose a set of four principles that should guide the future development of surveillance law, allowing for a more appropriate balance between the costs and benefits of government surveillance. First, we must recognize that surveillance transcends the public/private divide. Public and private surveillance are simply related parts of the same problem, rather than wholly discrete. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers. Second, we must recognize that secret surveillance is illegitimate and prohibit the creation of any domestic-surveillance programs whose existence is secret. Third, we should recognize that total surveillance is illegitimate and reject the idea that it is acceptable for the government to record all Internet activity without authorization. Government surveillance of the Internet is a power with the potential for massive abuse. Like its precursor of telephone wiretapping, it must be subjected to meaningful judicial process before it is authorized. We should carefully scrutinize any surveillance that threatens our intellectual privacy. Fourth, we must recognize that surveillance is harmful. Surveillance menaces intellectual privacy and increases the risk of blackmail, coercion, and discrimination; accordingly, we must recognize surveillance as a harm in constitutional standing doctrine. Explaining the harms of surveillance in a doctrinally sensitive way is essential if we want to avoid sacrificing our vital civil liberties.

I develop this argument in four steps. In Part I, I show the scope of the problem of modern “surveillance societies,” in which individuals are increasingly monitored by an overlapping and entangled assemblage of government and corporate watchers. I then develop an account of why this kind of watching is problematic. Part II shows how surveillance menaces our intellectual privacy and threatens the development of individual beliefs in ways that are inconsistent with the basic commitments of democratic societies. Part III explores how surveillance distorts the power relationships between the watcher and the watched, enhancing the watcher’s ability to blackmail, coerce, and discriminate against the people under its scrutiny. Part IV explores the four principles that I argue should guide the development of surveillance law, to protect us from the substantial harms of surveillance.

The response, Addressing the Harm of Total Surveillance: A Reply to Professor Neil Richards (found here), provides, absent italics:

The ethos of our age is “the more data, the better.” In nearly every sector of our society, information technologies identify, track, analyze, and classify individuals by collecting and aggregating data. Law enforcement agencies, industry, employers, hospitals, transportation providers, Silicon Valley, and individuals are all engaged in the pervasive collection and analysis of data that ranges from the mundane to the deeply personal. Rather than being silos, these data gathering and surveillance systems are linked, shared, and integrated. Whether referred to as coveillance, sousveillance, bureaucratic surveillance, “surveillance-industrial complex,” “panvasive searches,” or business intelligence, total-information awareness is the objective.

Consider Virtual Alabama. Google has built a customized database for Alabama’s Department of Homeland Security that combines three-dimensional satellite/aerial imagery of the state with geospatial analytics that reveal relationships, trends, and patterns in incoming data. Virtual Alabama can “track moving objects, monitor sensors, and overlay near-real time data sets.” Alabama will continue to add inputs, but the system already aggregates data from traffic cameras, real-time private and public video streams, GPS location data for police cruisers, building schematics, sex offenders’ addresses, and land-ownership records. The state’s 1,500 public schools plan to link their video cameras into the system, providing live streaming 24 hours a day, 7 days a week. Virtual Alabama is also encouraging contributions from government agencies in exchange for access to the system. The stated goal of the program is to map all available data in the state.

Virtual Alabama is part of a broader surveillance system sponsored by federal, state, and local governments and their private partners. In the wake of the 9/11 attacks, Congress adopted a number of innovations to break down ossified bureaucratic structures that previously impeded intelligence efforts to identify future threats. Among these innovations was the creation of the new Department of Homeland Security.  Amidst these efforts, the United States rejected proposals to establish an intelligence agency akin to Britain’s MI5, which is devoted to domestic intelligence and surveillance, due to bureaucratic infighting and fear of a civil liberties firestorm.  But what it eschewed formally, it pursued in fact.

Since 9/11, a surveillance state has been in development, accomplished in part by a network of fusion centers through which government agents and private-sector representatives “collect and share” information and intelligence. State- and locality-run fusion centers get most of their funding from federal grants. Their stated goal is to detect and prevent “all hazards, all crimes, all threats.” At the Washington Joint Analytical Center, for instance, analysts from the Department of Homeland Security, the FBI, state police, and Boeing generate and analyze “criminal and anti-terrorism intelligence.”

Congressional panels, journalists, and citizens have been told that fusion centers raise few privacy concerns and that their information gathering is focused and valuable. Contrary to these assurances, critics have argued that fusion centers erode civil liberties without concomitant gains for security. A recent Congressional report backs these concerns, demonstrating that fusion centers have amounted to a waste of resources.

Fusion centers cast a wide and indiscriminate net. Data-mining tools analyze a broad array of personal data culled from public- and private-sector databases, the Internet, and public and private video cameras. Fusion centers access specially designed data-broker databases containing dossiers on hundreds of millions of individuals, including their Social Security numbers, property records, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, social network activity, and drug- and food-store records.  Some gather biometric data and utilize facial-recognition software.  On-the-ground surveillance is collected, analyzed, and shared as well. For example, the San Diego fusion center purchased tiny cameras for law enforcement to attach to their shirt buttons, hats, and water bottles. Through the federal government’s “Information Sharing Environment,”  information and intelligence is distributed to public entities, including state, local, and federal agencies, and private owners of “critical infrastructure,” such as transportation, medical, and telecommunications infrastructure.

The scope of surveillance capacities continues to grow. Fusion centers and projects like Virtual Alabama may already have access to broadband providers’ deep packet inspection (DPI) technologies, which store and examine consumers’ online activities and communications. This would provide government and private collaborators with a window into online activities, which could then be exploited using data-mining and statistical-analysis tools capable of revealing more about us and our lives than we are willing to share with even intimate family members. More unsettling still is the potential combination of surveillance technologies with neuroanalytics to reveal, predict, and manipulate instinctual behavioral patterns of which we are not even aware.

There can be no doubt that advanced surveillance technologies such as these raise serious privacy concerns. In his article, Professor Neil Richards offers a framework to “explain why and when surveillance is particularly dangerous and when it is not.” Richards contends that surveillance of intellectual activities is particularly harmful because it can undermine intellectual experimentation, which the First Amendment places at the heart of political freedom. Richards also raises concerns about governmental surveillance of benign activities because it gives undue power to governmental actors to unfairly classify, abuse, and manipulate those who are being watched; but it is clear that his driving concern is with intellectual privacy. We think that this focus is too narrow.

According to Richards, due to intellectual records’ relationship to First Amendment values, “surveillance of intellectual records — Internet search histories, email, web traffic, or telephone communications — is particularly harmful.” Richards argues that governmental surveillance seeking access to intellectual records should therefore be subjected to a high threshold of demonstrated need and suspicion before it is allowed by law. He argues also that individuals ought to be able to challenge in court “surveillance of intellectual activities.” Richards further proposes that “a reasonable fear of government surveillance that affects the subject’s intellectual activities (reading, thinking, and communicating) should be recognized as a harm sufficient to prove an injury in fact under standing doctrine.”

Richards is right to call for the protection of “intellectual privacy.” Reflecting his concerns, the U.S. Senate’s Permanent Subcommittee on Investigations recently reported internal Department of Homeland Security warnings about agents routinely using fusion centers to collect intelligence on “First Amendment-protected activities lacking a nexus to violence or criminality,” including those of religious and political groups. One fusion center instructed law enforcement to collect information on supporters of third-party candidates, including the public movements of cars with bumper stickers supporting Ron Paul and Bob Barr. Expressing the impact of this sort of surveillance on intellectual privacy, one political activist explained that he feared being pulled over by a police officer because of political views expressed by his bumper sticker. Although much fusion center surveillance remains hidden, Richards’s concerns are valid and pressing; in the present, as in the past, there can be no doubt that surveillance systems interfere with expressive activities.

Although Richards aptly captures the dangers to intellectual freedom posed by technologically enhanced surveillance, we fear his policy prescriptions are both too narrow and too broad because they focus on “intellectual activities” as a necessary trigger and metric for judicial scrutiny of surveillance technologies. Our concerns run parallel to arguments we have made elsewhere against the so-called “mosaic theory” of quantitative privacy advanced by the D.C. Circuit and four Justices of the Supreme Court in United States v. Jones. Our argument there supports our objection here: by focusing too much on what information is gathered rather than how it is gathered, efforts to protect reasonable expectations of privacy threatened by new and developing surveillance technologies will disserve the legitimate interests of both information aggregators and their subjects.

One reason we are troubled by Richards’s focus on “intellectual activities” as the primary trigger for regulating surveillance technology is that it dooms us to contests over which kinds of conduct, experiences, and spaces implicate intellectual engagement and which do not. Is someone’s participation in a message board devoted to video games sufficiently intellectual to warrant protection? What about a telephone company’s records showing that someone made twenty phone calls in ten minutes’ time to a particular number without anyone picking up? Would we consider the route someone took going to the library an intellectual activity? Is it the form of the activity or what is being accomplished that matters most?

Setting aside obvious practical concerns, the process of determining which things are intellectual necessarily raises the specter of oppression. Courts and legislators would be required to select among competing conceptions of the good life, marking some “intellectual” activities as worthy of protection, while denying that protection to other “non-intellectual” activities. Inevitable contests over the content and scope of “intellectual privacy” will be, by their nature, subject to the whims and emergencies of the hour. In the face of terrorist threats, decisionmakers will surely promote a narrow definition of “intellectual privacy,” one that is capable of licensing programs like Virtual Alabama and fusion centers. Historically, decisionmakers have limited civil liberties in times of crisis and reversed course in times of peace, but the post-9/11 period shows no sign of the pendulum’s swinging back. Given the nature of political and judicial decisionmaking in our state of perpetually heightened security, protection, even of “intellectual privacy,” is most likely to be denied to the very outsiders, fringe thinkers, and social experimenters whom Richards is most concerned with protecting.

Richards might argue that his account of “intellectual privacy” and his definition of “intellectual activities” are sufficiently capacious to obviate these concerns. Yet this very capaciousness proves our point. Whether “intellectual privacy” and “intellectual activities” will be read narrowly or broadly, and for that matter, what might constitute a narrow or broad reading, inevitably will be contested just as hotly as the borders of inclusion and exclusion. To draw a loose parallel, the debates among legal positivists and natural law theorists did not abate when Hart expanded the descriptive scope of positivism or when Dworkin did the same for naturalism. To the contrary, they simply expanded the number of battlefronts so that we now see bloody contests within both camps as well as between them.

The Supreme Court has acknowledged the weight of these sorts of concerns in the context of Fourth Amendment debates. For example, in Kyllo v. United States, the Court was invited to limit Fourth Amendment protection to activities in the home that can be regarded as “intimate.” Writing for the Court, Justice Scalia demurred precisely because he thought the Court had neither the qualifications nor the authority to determine what is and is not “intimate.” He therefore focused on the invasiveness of the technology itself — a heat detection device — and its potential to render a wide range of activities in the home, whether “intimate” or not, subject to government surveillance. By our lights, this is a wise path to follow. Although we find persuasive Richards’s description of the harms inflicted by totalizing surveillance on intellectual privacy, we are not persuaded that the law should use “intellectual activities” as a trigger for judicial scrutiny or as a special category for judicial treatment any more than the Court should use “intimacy” as a signal for Fourth Amendment regulation.

Rather than assigning primary importance to “intellectual activities” and presumably providing less protection against the acknowledged perils of broader types of surveillance, the law’s focus should be on the dangers of totalizing surveillance. Information privacy scholars and surveillance studies theorists alike have long adhered to this approach, and for good reason. Technologies like Virtual Alabama and the fusion-center network amass, link, analyze, and share mass quantities of information about individuals, much of which is quotidian. What is troubling about these technologies is not what information they gather, but rather the broad, indiscriminate, and continuous nature of the surveillance they facilitate. Video cameras may be trained on street corners, drugstore aisles, or a school’s bathroom entrances. The information they gather likely does not implicate intellectual activities. They nonetheless create and sustain the kind of surveillance state that is anathema to liberty and democratic culture. Fusion centers rely upon data-broker dossiers, much of which has nothing to do with intellectual endeavors. There is no doubt, however, that continuously streaming all of this information into the information-sharing environment facilitates the sort of broad and indiscriminate surveillance that is characteristic of a surveillance state.

In assessing the privacy interests threatened by such totalizing surveillance, we have in mind some of the lessons taught by Samuel Warren and Louis Brandeis in their foundational article The Right to Privacy. Of course, the surveillance technologies of their era could only record discrete slices of life. Nonetheless, Warren and Brandeis recognized that emerging surveillance capacities threatened individuals’ interests in being “let alone” in their “private life, habits, acts, and relations.” In Warren and Brandeis’s view, the watchful eye of “any other modern device for recording or reproducing scenes or sounds” interfered with the development of a person’s “inviolate personality.” In discussing a husband’s note to his son that he did not dine with his wife — a pedestrian communication by any measure — Warren and Brandeis explained that the privacy interest protected was “not the intellectual act of recording the fact that the husband did not dine with his wife,” but the unwanted observance of the “domestic occurrence” itself. Of course, these are precisely the concerns echoed by Justice Scalia on behalf of the Court in Kyllo.

The threat posed by contemporary surveillance technologies lies in how much and how often people are watched. Modern technologies allow observers to detect, gather, and aggregate mass quantities of data about mundane daily acts and habits as well as “intellectual” ones. The continuous and indiscriminate surveillance they accomplish is damaging because it violates reasonable expectations of quantitative privacy, by which we mean privacy interests in large aggregations of information that are independent from particular interests in constituent parts of that whole. To be sure, the harms that Richards links to intellectual privacy are very much at stake in recognizing a right to quantitative privacy. But rather than being a function of the kind of information gathered, we think that the true threats to projects of self-development and democratic culture lie in the capacity of new and developing technologies to facilitate a surveillance state.

In adopting this view, we ally ourselves in part with commitments to a quantitative account of Fourth Amendment privacy promoted by at least five Justices of the Supreme Court last Term in United States v. Jones. In Jones, police officers investigating drug trafficking in and around the District of Columbia attached a GPS-enabled tracking device to defendant Jones’s car. By monitoring his movements over the course of a month, investigators were able to document both the patterns and the particulars of his travel, which played a critical role in his ultimate conviction. Although the Court resolved Jones on the narrow grounds of physical trespass, five Justices wrote or joined concurring opinions showing sympathy for the proposition that citizens hold reasonable expectations of privacy in large quantities of data, even if they lack reasonable expectations of privacy in the constitutive parts of that whole. Thus, they would have held that Jones had a reasonable expectation in the aggregate of data documenting his public movements over the course of four weeks, even though he did not have any expectation of privacy in his public movements on any particular afternoon.

The account of quantitative privacy advanced by the Jones concurrences has much in common with the views promoted by Warren and Brandeis. Specifically, the concurring Justices in Jones expressed worry that by “making available at a relatively low cost such a substantial quantum of intimate information about any person whom the Government, in its unfettered discretion, chooses to track,” programs of broad and indiscriminate surveillance will “chill[] associational and expressive freedoms,” and “alter the relationship between citizen and government in a way that is inimical to a democratic society.” Their concerns are well-grounded in original understandings of the Fourth Amendment. As Professor William Stuntz has shown, the Fourth Amendment was drafted partly in reaction to eighteenth-century cases involving the British government’s use of general warrants to seize personal diaries and letters in support of seditious-libel prosecutions that were designed to suppress political thought. Despite these roots, quantitative privacy is just beginning to receive recognition because it is only now under threat of extinction by technologies like Virtual Alabama and fusion centers.

There are two ways we might seek to protect quantitative privacy in an age of expanding surveillance technology. One strategy would focus on the aggregations of information assembled with respect to a particular person. This “mosaic” approach presents serious practical concerns along the lines we described with regard to intellectual privacy. As Professor Orin Kerr asks, where would we draw the line between aggregations that are and are not too invasive? How would we treat discrete aggregations assembled by different actors if the sum of those wholes would cross the invasiveness threshold, wherever it is drawn? More importantly, we do not see how this approach could actually preserve reasonable expectations of quantitative privacy. The harm is done, after all, by being watched in a totalizing way — or by the awareness that one might be so watched. Limiting the scope of information dossiers does little to address those concerns. In light of these challenges, we have argued elsewhere for regulating the technologies themselves. Our arguments there strongly suggest that Richards’s goal of protecting intellectual privacy would also be better served by adopting a technology-centered approach.

Of course, none of this argument is intended to discount the benefits of surveillance to national security, criminal justice, emergency response, public administration, or medical care. As Richards observes, any account of surveillance’s privacy harms is often resisted on the grounds that some surveillance is essential for the public good. But there is a line between surveillance that is essential for the public good and invasive total-information awareness technologies, and that line is easy to cross if unattended. This leaves us with the question of how to protect society from the gradual acceptance and institutionalization of total-information awareness technologies. Richards supports allowing individuals to challenge surveillance of intellectual activities in court as a cognizable harm. Here again, we worry that his proposal is unlikely to preserve the fundamental interests at stake.

Richards proposes to grant individuals standing to challenge governmental surveillance. Putting concerns about the constitutionality of such a challenge aside, his proposal may raise practical problems. Granting individuals standing to challenge governmental surveillance of them would overwhelm the courts. There are not enough judicial resources to adjudicate three hundred million such suits, each of which could be renewed — almost as soon as it is resolved — on nothing more than suspicion of continued surveillance because the focus, under Richards’s approach, is on what information is being gathered. The possibility of a class action would not help matters because individual issues of harm attached to what particular information is gathered would predominate. Suits are also bound to be met with claims of national security interest, to which courts routinely show considerable deference. For example, in litigation involving police surveillance of protestors at the 2004 Republican National Convention, the Second Circuit refused to allow discovery of officers’ field reports, even in redacted form, because they would reveal information about undercover operations and thus potentially hinder future ones.

What is more, lawsuits designed to uncover surveillance of intellectual activities may be unable to identify the “intellectual records” gathered by government due to the way certain surveillance systems operate. Fusion centers, for instance, may access and analyze private and public databases and real-time video feeds without ever creating and storing records. Although fusion center surveillance of all individuals’ on and offline activities is continuous and totalizing, it does not necessarily produce records that could be packaged and produced as part of a discovery process. Ultimately, the vastness of contemporary governmental total-information awareness renders the judiciary incapable of reviewing the majority of situations on an individual basis. Furthermore, any individual cases that made it to judgment could do no more than chip away at discrete instances of governmental surveillance. Because they would focus on the intellectual privacy interests of specific litigants, these cases would not and could not challenge the system of totalizing surveillance as a whole.

Here again, we think that a technology-centered approach that seeks to protect quantitative privacy is far more promising. Not only would it avoid the constitutional and practical challenges of individual litigation based on the trigger and metric of intellectual privacy, a focus on the technology would also open the door to a wide range of alternative regulatory frameworks that could more efficiently and reliably strike a reasonable compromise between the legitimate interests of government and the privacy interests of citizens. For example, an independent board of experts, such as the Privacy and Civil Liberties Oversight Board (PCLOB), could perform an analysis of the privacy and civil liberties risks posed by surveillance technologies. PCLOB, now fully staffed, could mandate safeguards for the use of surveillance technologies that raise the specter of a surveillance state and make recommendations based on their privileged access to security analyses, piercing the veil of secrecy that Richards laments. Board members, vetted for top-secret national security clearances, could attain a comprehensive view of domestic surveillance technologies that would enable them to recommend procedural protections for quantitative privacy to prevent governmental abuse. Such procedural protections would by nature protect the intellectual privacy interests at the heart of Richards’s proposal without the drawbacks of using intellectual privacy as a trigger and metric of action.

Although we live in a world of total surveillance, we need not accept its dangers — at least not without a fight. As Richards rightly warns, unconstrained surveillance can be profoundly harmful to intellectual privacy. It would be wrong, however, to conflate symptom and cure. What is most concerning for us is the rapid adoption of technologies that increasingly facilitate persistent, continuous, and indiscriminate monitoring of our daily lives. Although harms to intellectual privacy are certainly central to our understanding of the interests at stake, it is this specter of a surveillance state that we think ought to be the center of judicial, legislative, and administrative solutions, not the particular intellectual privacy interests of individuals.

Toward a Positive Theory of Privacy Law

The entire 32-page article is found here.   The synopsis provides:

Privacy protections create winners and losers. So does the absence of privacy protections. The distributive implications of governmental decisions regarding privacy are often very significant, but they can be subtle too. Policy and academic debates over privacy rules tend not to emphasize the distributive dimensions of those rules, and many privacy advocates mistakenly believe that all consumers and voters win when privacy is enhanced. At the same time, privacy skeptics who do discuss privacy in distributive terms sometimes score cheap rhetorical points by suggesting that only those with shameful secrets to hide benefit from privacy protections. Neither approach is appealing, and privacy scholars ought to do better.

This Article reveals some of the subtleties of privacy regulation, with a particular focus on the distributive consequences of privacy rules. The Article suggests that understanding the identities of privacy law’s real winners and losers is indispensable both to clarifying existing debates in the scholarship and to helping predict which interests will prevail in the institutions that formulate privacy rules. Drawing on public choice theory and median voter models, I begin to construct a positive account of why U.S. privacy law looks the way it does. I also suggest that a key structural aspect of U.S. privacy law — its absence of a catch-all privacy provision nimble enough to confront new threats — affects the attitudes of American voters and the balance of power among interest groups. Along the way, I make several other subsidiary contributions: I show why criminal history registries are quite likely to become increasingly granular over time, I examine the relationship between data mining and personality-based discrimination, and I explain how the U.S. political system might be just as biased in favor of citizens who do not value privacy as it is biased in favor of highly educated and high-income citizens.

Part I assesses the distributive implications of two privacy controversies: the extent to which public figures should be protected from the nonconsensual disclosure of information concerning their everyday activities, and the extent to which the law should suppress criminal history information. In both instances the United States is far less protective of privacy interests than Europe is, and, as a result, the U.S. government has received criticism both at home and abroad. The Part shows that defensible distributive judgments undergird the American positions. The European approach to celebrity privacy is highly regressive and causes elites and nonelites to have differential access to information that is valuable to both groups. The U.S. attitude toward criminal history information may be defended on pragmatic grounds: in the absence of transparent criminal history information, individuals may try to use pernicious proxies for criminal history, like race and gender. Part I then shows how these distributive implications affect the politics of privacy. To use one example, California’s interest groups are pushing that state toward European-style regulation, and there is an apparent emerging trend toward ever-increasing granularity in criminal history disclosures.

Part II analyzes the emerging issue of Big Data and consumer privacy. It posits that firms rely on Big Data (data mining and analytics) to tease out the individual personality characteristics that will affect the firms’ strategies about how to price products and deliver services to particular consumers. We cannot anticipate how the law will respond to the challenges posed by Big Data without assessing who gains and who loses by the shift toward new forms of personality discrimination, so the Article analyzes the likely winners and losers among voters and industry groups. The analysis focuses on population segments characterized by high levels of extraversion and sophistication, whose preferences and propensities to influence political decisions may deviate from those of introverts and unsophisticated individuals in important ways.

Part III reaches across the Atlantic, using Europe’s quite different legal regime for governing Big Data as a way to test some of the hypotheses articulated in Part II. Although U.S. and European laws differ significantly, the attitudes of Americans and Europeans toward privacy seem rather similar. The Article therefore posits that different public choice dynamics, especially the strength of business interests committed to data mining in the United States, are a more likely cause of the observed legal differences. But this conclusion raises the question of why European business interests committed to data mining do not have similar sway. The Article hypothesizes that structural aspects of U.S. and European privacy laws substantially affect the contents of those laws. In Europe, open-ended, omnibus privacy laws permit regulators to intervene immediately to address new privacy challenges. The sectoral U.S. approach, which lacks an effective catch-all provision, renders American law both reactive and slow to react. As a result, by the time U.S. regulators seek to challenge an envelope-pushing practice, interest groups supporting the practice have developed, social norms have adjusted to the practice, and a great deal of the sensitive information at issue has already been disclosed by consumers.

Part IV examines the National Do Not Call Registry, a rare case in which U.S. regulators were able to combat a substantial privacy harm despite these structural and interest-group dynamics. The fact that the Registry took more than a decade to be implemented, despite its enormous popularity with voters, shows just how difficult regulating privacy can be, especially since many other privacy regulations will create a substantial number of losing consumers who are likely to buttress the interests of prospective loser firms in opposing the new regulation.

The response, by Anita Allen, titled Privacy Law: Positive Theory and Normative Practice (found here) provides (absent footnotes):

Professor Lior Strahilevitz’s article Toward a Positive Theory of Privacy Law urges novel positive approaches to privacy law scholarship. Positive theories of law employ empirical and analytical methods to describe what the law is, how it came to be, and what its consequences may be. Grounded in median voter models and public choice theory generally, Strahilevitz’s article illustrates positive analysis, illuminating distributive implications of privacy statutes and common law privacy doctrines for a range of groups, including political elites, racial minorities, criminal offenders, naïve and sophisticated consumers, data miners, and marketers. The overall goals of this insightful article are to clarify the distributive “winners and losers” of privacy law and to shed light on the predictability of who prevails in the institutions that formulate privacy rules in the United States and in Europe.

By contrast to Strahilevitz’s positive project, my recent work on privacy law has been normative in thrust. Specifically, I have explored the normative ethical value of privacy, evaluated the normative ethics of privacy laws, and pondered the extent of normative ethical obligations to protect one’s own and others’ privacy. Though a normativist, I welcome greater attention to positive theory. Positive theory and normative theory go hand-in-hand, in my view. Normative theories of law evaluate and commend laws by reference to values that the laws embody or promote. Information management policies reflected in law are subject to evaluation by economists as efficient or inefficient, but by ethicists as right or wrong, good or bad, virtuous or vicious, and just or unjust.

We cannot know if we are doing the right thing, if we do not know what we are doing and whom we are doing it to. My work often attends to the winners and losers of privacy rules and practices — whether corporations, women, the LGBT community, criminals locked in prisons, African Americans, or children. Whether privacy is a good thing for the people who have it is a question with a large empirical dimension. For the sake of rigor and completeness, normative ethical theorizing must attend to subtle concrete distributional effects of the sort Strahilevitz examined. Attending with special care to distributive implications serves the needs of ethics, as it serves the needs of other normative enterprises of perhaps more immediate concern to Strahilevitz — welfare-enhancing cost-benefit policy analysis and commercial advantage-seeking. Understanding those that Strahilevitz terms the “winners and losers” of privacy law bears on the choices that persons of conscience, character, and goodwill make respecting the frequency, content, and context of data acquisition, data disclosure, and data retention.

Yet the truth about distributional effects may be subtle, unobserved, and disbelieved. Presumed winners may be losers, and the presumed losers may be winners. Presumptions about winners and losers may be so fixed in prejudice that no one bothers to challenge philosophical assumptions with fresh analytics or factual pieties with rigorously derived empirical data. I applaud Professor Strahilevitz’s illustrations of new ways to think empirically about privacy laws’ distributive effects. Here, I briefly comment on his major arguments and examples. First, in Part I, I comment on his claims concerning the law of celebrity privacy, and I offer a challenge to his conception of winners and losers in that domain. Second, in Part II, I consider his argument that granular criminal-history disclosures may be the direction for the near future and may benefit African Americans more than criminal-history privacy. I suggest that privacy-reducing surveillance of African Americans may already be so extensive that African Americans would not view themselves as “winners” under a regime that placed detailed criminal-history data in the hands of employers. Third, in Part III, I address privacy concerns raised by Big Data, noting grounds for a concerned response to the data mining and consumer-profiling practices artfully described by Strahilevitz. Finally, in Part IV I respond to Strahilevitz’s celebratory response to the federal Do Not Call registry’s privacy implications with the observation that a benignly more paternalistic Do Not Call law could have made telephone customers even bigger winners. In sum, I embrace Strahilevitz’s call for nuanced positive theories of privacy law’s “winners and losers” but for a reason he does not highlight: better positive theory is critical also for better normative ethical theory. I reject his specific characterizations of “winners and losers” of the law of celebrity publicity and criminal-history disclosure, and I suggest policy directions for bigger wins for American shoppers and consumers.

I. Virtuous Inattention

Californians enacted an anti-paparazzi statute after the deaths of Princess Diana and Dodi Fayed,  which were initially attributed to their chauffeur’s attempt to evade encroaching paparazzi. The law forbade recording celebrities’ activities near their homes; more recent laws outlaw high-speed chases and intrusive photography. Describing California as an exception, Strahilevitz points out correctly that under U.S. law, readers and the media are generally permitted wide access to information about celebrities. The law of the United Kingdom and Continental Europe resembles California law. Celebrated public figures are often accorded the protection of privacy rights, including the fundamental rights set by the European Convention on Human Rights.  Why the antipopulism of California’s and Europe’s law? Strahilevitz’s answer is an observation about power and influence.

According to Strahilevitz, popular celebrity Californians (like former Governor and film star Arnold Schwarzenegger) swayed legislators and median voters. Wealthy and elite Californians thus “won” at the expense of the ordinary literate public with a taste for celebrity gossip. In a move from strictly positive theory toward normative reflection, Strahilevitz questions whether the law ought to make privacy winners of those who so often win, when it could distribute a win to less politically and economically powerful consumers (and the for-profit media interests that sell to them). He does not reach the deep ethical questions raised by his example, however. I suggest we ask whether the distribution of publication and readership rights to non-elites makes non-elites “winners” worthy of the name. Indeed, moral theorists might call for restraint in attention to others’ intimate lives. The individual readers who win the ability to access celebrities’ personal lives may lose from the point of view of perfectionist conceptions of virtue. A balance of inattention to others’ personal lives and attention to one’s own is arguably a moral virtue. Kantian-style conceptions of perfect and imperfect duties to the self include duties of self-improvement and self-respect. Feeding raw desires and fan obsessions at the expense of nontrivial activities has moral implications. Inattention to others’ personal lives may also be a qualitative benefit to civil society. Samuel Warren and Louis Brandeis made a point along these lines about the loss to civil society that comes from privacy invasions: the market for gossip represents a qualitative decline in cultural life, “a lowering of social standards and of morality.” Their prose was high-minded: “Triviality destroys at once robustness of thought and delicacy of feeling. No enthusiasm can flourish, no generous impulse can survive under its blighting influence.” It could be best to let celebrities have their privacy, since what conservative political theorist Robert George calls the “moral ecology” of our society may suffer if the populace grows coarsely inquisitive and celebrities are egregiously abused.

II. Transparency as Racism

The Supreme Court once blessed the notion that people have a strong privacy interest in their criminal histories, strong enough to defeat media efforts to obtain rap sheets prepared by the Justice Department. Common law courts have noted that criminal-history secrecy facilitates rehabilitation and reintegration. Against the grain of such thinking, Strahilevitz argues that granular criminal-history disclosures may make winners of African Americans without criminal backgrounds and losers of white ex-offenders. A policy of making publicly available detailed criminal-history information might be presumed to make African Americans losers because African Americans are disproportionately convicted of crimes. Although “bad information” may be discounted by time, criminal histories are a long-term burden affecting where ex-offenders can live and work. Strahilevitz argues that a policy of disclosure could benefit blacks — and the more granular the disclosures the better. Supplied with criminal histories, potential employers can distinguish serious offenders from those who have not offended at all or who have offended only trivially. More granularity can reveal that a felony was mere possession of marijuana rather than armed robbery, rape, or homicide. The ability to discern and discriminate removes any rational incentive for employers to use race as a proxy for criminality. Loss of privacy might confer on African Americans competitiveness in the market for jobs. The privacy losers, on a closer look, turn out to be the winners.

Assume with Strahilevitz that employers use white race as a proxy for honesty, reliability, and skill, resulting in squeaky-clean African Americans losing opportunities to whites who may harbor secret criminal histories. There is likely more than one way to address the problem of resource- and power-holders’ “rational” racial profiling. Before pursuing policies that decrease privacy on a premise of intractable black criminality, there should first come (1) attacks on the inequities that account for black criminality in the first place, (2) a solid understanding of how criminal-history disclosures impact rehabilitation and the reintegration of ex-offenders, and (3) clarity about the problem of numerous small privacy losses aggregating into an enormous surveillance and transparency burden for African Americans. The surveillance society is doubly such for low-income people living in high-crime communities and reliant on government benefits, services, and public and military employment. The state collects detailed information about individuals, families, living arrangements, health, and financial resources. Many African Americans are heavily supervised at work, watched in stores to deter shoplifting, and scrutinized and profiled when they drive their cars or walk outside their neighborhoods. African Americans might in important respects be better off in a society of trust and fairness than in a suspicious and biased society that arms the public with access to criminal histories. As Strahilevitz suggests, there might be a political backlash of sorts against the increasing granularity of criminal-history disclosures that offend the sensibilities of median voters. A public choice theory positive account of winners and losers might suggest to African American interest groups effective strategies for promoting privacy rules that make a net positive contribution to their constituents’ lives.

African Americans are not always better off with more information privacy, however. Not having certain information privacies benefits historically subordinated groups. A failed 2002 “Racial Privacy” referendum would have made losers of California’s racial minorities. The proponents of the referendum claimed that an amendment to the California constitution barring state racial data collection would have ushered in color-blind practices that would make winners of everyone. However, giving up so-called racial privacy helped minorities acquire access to health and education goods vitally needed by their communities. Moreover, racial privacy is an illusory concept. Race is a social construct with public historical, associational, and phenotypical dimensions. Race is “in the face,” and seeking to privatize it the way one privatizes the results of a blood test makes little practical sense. Giving up so-called racial privacy makes winners of African Americans, while the giving up of criminal-history privacy may not.

III. Big Data, Big Personality, and Consumer Privacy

Policymakers and privacy theorists need to understand the implications that Big Data has for information privacy. “Big Data” is a nickname for enterprises that collect, analyze, package, and sell data, even uninteresting-looking data, to reveal tastes, habits, personality, and market behavior. Big Data is challenging traditional privacies. Private sector surveillance is rampant, introducing research about personality assessment and classification into the legal literature. Increasingly, the personality and psychology of individual consumers are probed without their knowledge or consent.

Big Data, Strahilevitz observes, represents a shift from nondiscriminating, pooling equilibriums to controversial discriminating, separating equilibriums in marketing. Big Data is enabled by the promise of efficiencies that include the capacity cheaply to ascertain who is a suitable purchaser of goods and products, output maximization, and producer surplus. Strahilevitz focuses on what he calls the “secondary” rather than “primary” effects of information rules governing consumer retail transactions. Primary effects relate to how the collection, manipulation, and disclosure of information affect individuals whose data is collected and disclosed. Secondary effects are the consequences of data collection, manipulation, and disclosures, whether or not experienced as individual harm. Both primary and secondary effects of privacy laws have implications that positive theorists will want to describe and normative theorists will want to evaluate.

Big Data’s thirst for information and capacity to learn from it threatens privacy. Big Data information extractions are offensive to principled privacy lovers even when, as in the pharmacy data at issue in Sorrell v. IMS Health Inc., most sensitive personal information has been scrubbed using anonymization. Privacy advocates’ concerns include the re-identification of de-identified data and the loss of trust in confidential relationships. Ought we jump on the privacy bandwagon?

Strahilevitz answers with analysis and facts, not norms. He maintains that protecting privacy seems to thwart price and service discrimination that is consistent with consumer welfare. Without privacy, Amazon can tell you what you want before you know what you want. Products can be marketed to those likely to want them, and, if credit is extended, people can be relied upon to pay. Collecting consumer data and engaging in personality discrimination might make winners of certain shoppers no less than for-profit data miners. Data miners win if they can guide efficient marketing. Shoppers win if they are offered attractive discounts and premiums based on data demonstrating reliability and creditworthiness. (It turns out that buying felt pads to protect your furniture from scratches and dents predicts creditworthiness.)

The general public is not a clear winner of data accessibility and manipulation by Big Data, economically or otherwise. In theory, relying on information gleaned from data mining or consumer personality testing will lead to lower costs, and lower costs for business could mean lower prices for consumers. Yet data miners and retailers will not necessarily lower prices. When do powerful business interests pass on profits to consumers? When there is competition? We need to know a great deal about the industries in question to predict likely winners and losers.

Strahilevitz suggests an interesting political alliance between Big Data and sophisticated consumers. Sophisticated consumers are the wealthier, better-educated, voting consumers with excellent credit and wholesome habits who think they have nothing to lose from policies that put volumes of data into the hands of firms. According to Strahilevitz, a median voter model predicts that American law will systematically favor the interests of sophisticated consumers, which are congruent with those of data miners, since sophisticated consumers are on the whole more politically engaged people who pay attention to legislative policy proposals and vote their interests.

The Lisbon Treaty may widen the divide between U.S. and EU approaches to data mining. The treaty protects all data as a matter of fundamental right. Legislative lobbying by Big Data in the U.S. is not impeded by doctrines of fundamental right. My observation is in line with Strahilevitz’s that the presence of a tradition of powerful industry lobbying in the U.S. predicts fewer restrictions on Big Data. He argues that a lack of such a tradition may help explain why, even though EU and U.S. persons have similar privacy tastes, EU law is significantly more prohibitive.

Power and interest group dynamics may also explain why Big Data and major firms have been successful fighting consumer information privacy claims in the U.S. courts interpreting commercial free speech doctrines. Few relationships are as surrounded by traditions of confidentiality and privacy as the physician-patient relationship. In Sorrell, consistent with Strahilevitz’s positive theory, the Supreme Court nonetheless struck down a state law limiting data miners’ access to confidential physician prescription information, on the ground that singling out data miners with a disabling law violated their commercial free speech rights.

However, the precise nature of median voter, power, and interest group dynamics is not always easy to discern in interactions among Congress, the federal courts, federal agency privacy regulators, the big business sector, voters, and consumers. Consider the following examples. A common contrast between EU and U.S. privacy law is that our sectoral laws typically permit consumers to consent to disclosures of personal information by default, simply by not affirmatively “opting out”. The U.S. “opt-out” bias seems to favor data sharing–hungry American businesses, since consumers rarely bother to affirmatively opt out. In the late 1990s when Federal Communications Commission (FCC) regulators attempted to impose a stricter “opt-in” consent requirement for the disclosure of sensitive customer proprietary network information (CPNI), the telecom firm U.S. West, Inc. took them to court. U.S. West prevailed in the Tenth Circuit Court of Appeals with the argument that the FCC’s preferred opt-in consent requirement violated “the First Amendment by restricting its ability to engage in commercial speech with customers” and raised “serious Fifth Amendment Takings Clause concerns because CPNI represents valuable property that belongs to the carriers and the regulations greatly diminish its value.” Positive theory could potentially explain why a federal court interpreted the Constitution so as to make the telecom industry the owners of CPNI, defined as “information of, and relating to . . . customers,” and why the court refused to allow federal regulators to act aggressively and beneficently as guardians of consumer privacy. Yet the contours of an explanation in terms of power dynamics and median voter alliances here are far from obvious. In the CPNI case, the Tenth Circuit sided with industry against the government; but in the Do Not Call registry case, the Tenth Circuit sided with the government against industry. The Court upheld the right of the FTC to create the Do Not Call registry, over objections from the telemarketing industry that Congress had not authorized the FTC to act and that such a move would deliver a profitable blow to a productive industry that was also a major employer.

IV. Hold the Calls, Forget the Notices!

Finally, Strahilevitz touches on design mechanisms in the enactment of the federal Do Not Call rules. Do Not Call rules (Rules) enforced by the Federal Trade Commission (FTC) make losers of commercial telemarketers but winners of telephone consumers, both consumers annoyed by unsolicited phone calls (they can easily opt out) and consumers who enjoy calls (they need do nothing). Under the Rules, consumers who choose to place their numbers on a Do Not Call registry maintained by the FTC are entitled to a reduction in nonpolitical, noncharity calls by businesses with whom they have no preexisting relationship. The Rules pass a cost-benefit test: they are significantly welfare-enhancing at a low cost. Assuming that welfare implications are relevant to the desirability of privacy protections, we have normative grounds for praising the Rules.

Professor Strahilevitz’s positive analysis of winners and losers should be expanded to include all of them: telephone users, people who live with them who also suffer the distraction of unwanted calls, charities, politicians, prior businesses patronized, and telemarketers. Welfare improvements were realized with the Rules, but I argue elsewhere that a more paternalistic policy would have been more welfare-enhancing. Policymakers with a broader understanding of the public’s privacy interests might have banned most telemarketing calls, doing away with the need for an opt-in registry and imposing beneficial privacy at home on phone customers.

The general consensus among privacy scholars is that the Do Not Call registry law was a good privacy law at the time it was enacted. I surmise that most would agree with Strahilevitz that the telemarketing Rules were welfare enhancing. But not all of the privacy law innovations of the 2000s have been met with a similar appreciation. Many privacy scholars and officials bluntly denigrate the Gramm-Leach-Bliley (GLB) financial privacy law as a foolish law that “only lawyers could love”. GLB was not designed to be a robust privacy law. GLB was Title V of the Financial Services Modernization Act of 1999, demolishing walls between insurance, investment, and commercial banking. GLB is not stupid relative to its actual purpose of giving consumers some control over who has access to sensitive financial transactions and related personal information. What subjects the law to ridicule is that it requires written notices few read or act on. The notices offer an opportunity for opting out of certain third-party disclosures of some personal information. So few understand the opportunity and take time to exploit it that the notices reduce to useless formalism.

If Congress or agency regulators wanted seriously to limit access by financial institutions to consumer data, a flat ban on sharing even with consent would have been enacted. One has to assume that Congress and regulators made self-conscious policy choices to allow firms access to sensitive information about consumers, for the good of those firms, the economy, consumers, and/or the nation. A full positive analysis of the design mechanism and the distributive implications of the policy implemented via GLB would be useful; as matters stand, consumers do not benefit from the notices and firms waste money producing them. GLB regulations require privacy notices, but it bears emphasis that GLB also requires data safeguards and penalizes pretexting. Whatever the critique of the notices requirement, the security safeguards and antipretexting rules require their own separate positive assessments.

V. Conclusion

The central observation of Lior Strahilevitz’s paper is sound: privacy rules have distributive implications. With careful empirical investigation and analysis we can better ascertain the true “winners and losers” of our privacy laws. But how should we really understand “winning” and “losing”? The winners and losers in the thin distributional senses at play in Strahilevitz’s article may not be winners and losers from ethical points of view he does not broach. Is it enough that a distribution furthers wealth maximization, or equalizes social power? Should we strive to enact policies that defy power and influence; that look to fundamental human rights rather than preferences and desires? There is plenty of work for philosophers in sorting through and interpreting the distributive implications of privacy rules. The question of individual responsibility in all of this — what ought I do in light of positive distributive consequences — is one of those calling out for further inquiry.
