For almost a year, negotiators have tried to agree on a mechanism that would allow consumers to let a website know that they don’t want to be tracked while they are surfing the Web.
So far, they can’t even agree whether “Do Not Track” should mean “don’t collect information about me” or “don’t send me behaviorally targeted ads.”
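Mechanically, the "Do Not Track" signal the negotiators are arguing over is trivial: a browser with DNT enabled attaches a one-character HTTP header to each request. What a site should *do* upon receiving it is precisely the unresolved question. A minimal sketch (the function name is illustrative, not part of any standard):

```python
def tracking_allowed(headers):
    """Return False if the client has expressed a Do Not Track preference.

    Under the W3C Tracking Preference Expression draft, "DNT: 1" means
    the user objects to tracking, "DNT: 0" means the user consents, and
    an absent header means no preference was expressed. How a server
    should honor the signal - stop collecting, or just stop targeting -
    is exactly what the negotiators cannot agree on.
    """
    return headers.get("DNT") != "1"

# Illustrative requests:
print(tracking_allowed({"DNT": "1"}))   # False - user opted out
print(tracking_allowed({"DNT": "0"}))   # True  - user consented
print(tracking_allowed({}))             # True  - no preference expressed
```

The simplicity of the plumbing is the point: the hard part was never the header, but the value judgment behind it.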
The negotiating stakeholders seem just about ready to pull up their stakes and start whacking each other with them. We had been warned that this might happen, and why.
Last year, in an essay titled “To Track or ‘Do Not Track,’” Omer Tene and Jules Polonetsky of the Future of Privacy Forum argued that we haven’t been addressing the real crux of this issue: the underlying value judgment about tracking.
“It may be premature to debate technical standardization of DNT mechanisms,” wrote Tene and Polonetsky, “before making this value judgment.” The issue, they added, “is not whether analytics, measurement, or third-party cookie sharing constitute ‘tracking,’ but rather it is whether those activities carry an important social value that we wish to promote, or are negative and thus better be ‘killed softly.’”
As is the case with most ethical dilemmas, the answer is “both.” Or, rather, “it depends.”
The answer to what Tene and Polonetsky correctly identify as a philosophical quandary is not unitary. It is a bundle or a bouquet of answers, all of them context-specific.
“WHAT’S THE HARM?”
We can’t decide whether “tracking” is a positive or a negative without first asking a series of questions. Tracking by whom? Of whom? For what purposes? On what devices? What information would be collected in the process? Who else could get access to the collected data? How soon (if ever) would the information be deleted? What would happen if the collector were bought or went bankrupt?
The contexts in which tracking is currently being addressed don’t reflect the true magnitude and variety of the value questions related to tracking.
The same technologies allow publishers and advertisers to track consumers, governments to track terrorists (or dissidents), researchers to track subjects, doctors to track patients, parents to track children, insurance companies to track drivers, et cetera.
We need a broad debate around the benefits and drawbacks of these new technologies, but the arguments get muddied when devoid of context. That’s what happens when some people ask “What’s the harm? What’s so bad about getting more-relevant ads?” in response to others who view data analytics as “our generation’s civil rights issue,” or worry about tracking as a tool of oppression in repressive states.
Some of us may be willing to put up with massive data collection and analytics for improvements in education or health care, but not for “personalized” advertising. Some of us may be OK with adults being tracked by advertisers, but reject similar tracking of kids. Some of us may be willing to have our supermarket-buying practices tracked, but not our reading habits.
Some of us may be more worried about being tracked by the government than about being tracked by Google or data brokers (although the reality is that the government has access to vast amounts of information collected by private parties that are tracking us now; the surveillance doesn’t split cleanly along a public/private divide).
What about being tracked by a political campaign? Or by members of one’s own family? The New York Times recently ran an article about parents using GPS tracking devices to keep an eye on their children, children using such devices to track elderly parents with Alzheimer’s, and, of course, spouses tracking each other when suspecting infidelity.
Each of those practices involves different value judgments and requires the balancing of different rights and needs. As Tene and Polonetsky correctly noted, “the value judgment is not for engineers or lawyers to make. … It is not a technical or legal question; it is a social, economic, even philosophical quandary.”
We need to stop thinking of tracking as a “consumer” issue, to be addressed by the Federal Trade Commission, industry self-regulation, or inherent marketplace forces. It isn’t just a law enforcement or national security issue, either. It is a cluster of ethical dilemmas that now touches many facets of our lives, one that doesn’t lend itself to simple solutions yet must be addressed.
[Irina Raicu. Photo courtesy Markkula Center for Applied Ethics]
This cluster also prompts a deeper question about the effects of pervasive surveillance on the individual. In a recent article titled “What Privacy Is For,” Georgetown Law professor Julie Cohen warns about the dangers of a society in which “surveillance is not heavy-handed; it is ordinary, and its ordinariness lends it extraordinary power.” Cohen worries about “citizens who are subject to pervasively distributed surveillance and modulation by powerful commercial and political interests,” and argues that a “society that permits the unchecked ascendancy of surveillance infrastructures … cannot hope to maintain a vibrant tradition of cultural and technical innovation.”
As residents of Silicon Valley, which prides itself on being a fulcrum of creativity and innovation, we should take such warnings seriously. Emitting a “Don’t Track Me, Bro” signal into the ether, hoping that someone listens, is not nearly enough.
Irina Raicu J.D. ’09 is the Internet ethics program manager at the Markkula Center for Applied Ethics at Santa Clara University. This piece is adapted from one that she wrote for WSJ Marketwatch.