I’ve said it once, I’ll say it again: the FBI doesn’t get encryption

James Baker, Alvaro Bedoya, and David Garrow

At last Friday’s Color of Surveillance conference, FBI General Counsel James A. Baker called for checks on government power to prevent the kinds of abuses discussed during the conference’s historically focused panels, notably COINTELPRO, surveillance of the civil rights and anti-war movements, and the intimidation of activist leaders — including Martin Luther King, Jr. But he also asked whether we, as a society, are okay with the public security implications of unbreakable encryption. That question, and the assumptions behind it, exemplify why the FBI doesn’t get encryption.

The question assumes that there are objective, measurable costs to unbreakable encryption. More fundamentally, it also assumes that untraceable communication is new, when it is anything but: for most of human history, communication was either oral or written on artifacts that had to be physically transported. Encryption is better understood as a return to the pre-digital status quo, when records were the exception rather than the rule and the secrecy of correspondence was sacrosanct. Far from “going dark,” we are living in a golden age of surveillance, and we don’t need to go all the way back to the Church Committee to find evidence that institutional checks on the government’s power are insufficient. Yet last week Baker reiterated the bewildering claim that the FISA court is a meaningful check on NSA surveillance, when it serves as a rubber stamp at best.

The FBI argues certain investigations would be easier to conduct if the public communicated in the clear. The recent brouhaha over the San Bernardino shooter’s iPhone 5c provides one such example. But the argument falls apart on close examination.

The security community has said from the beginning that the FBI never needed Apple’s help unlocking the Farook phone. It was always about setting a legal precedent that the government can compel companies to break their own products. The Farook phone presented a unique opportunity as a test case. It was very unlikely to contain new useful information: it was a work phone, Farook had physically destroyed his two personal phones, and neither the 30-day iCloud backup nor the call/SMS metadata revealed any non-work activity associated with that device. Yet it was connected to an ongoing investigation, so the FBI could plausibly argue that the phone might contain crucial information, playing on the media’s and the public’s fear of terrorism and poor understanding of the underlying technical issues. The case’s conclusion supports this interpretation, as does the fanciful nature of some of the FBI’s arguments (I’m still waiting for an explanation of what a “dormant cyber pathogen” might be).

What seems likely is that the FBI’s leadership, which has been fighting a losing battle against encryption for decades, had been waiting for the right opportunity to push for a court ruling requiring a private company to hack its own product, and thought the Farook phone would get them what they wanted. They miscalculated, and found an out that would allow them to save face with respect to the general public. The experts of course know that the FBI has egg all over its face, but the FBI doesn’t care what experts think. It’s going for a political win, and we are clearly living in a post-empirical political environment.

The U.S. is currently governed by fear and emotion rather than facts. Terrorism and child abuse imagery are probably the most frequently invoked horsemen of the infocalypse these days. Senator Dianne Feinstein — whose anti-encryption draft bill was also leaked on Friday — has raised the bogeyman of a pedophile communicating with her grandchildren through their gaming consoles to justify banning encryption. But it is just that, a bogeyman: there are many more effective ways to protect children from predators (appropriate parental controls on devices used by children, and talking to kids about risks both online and in the real world, for example) that neither sacrifice the security and privacy of millions of individuals nor jeopardize society’s ability to evolve and change. She has similarly claimed that the terrorist cell behind the Paris and Brussels attacks used encryption, when in fact their op-sec seems to have relied on burner phones, insecure tools like Facebook, and face-to-face communication. European law enforcement’s tragic failure to prevent the recent attacks should be attributed to poor coordination between agencies (much as 9/11 was), not to encryption.

So these “costs of unbreakable encryption” are neither proven, nor unavoidable through other means, nor different from the world humanity was stuck with until the 1990s. They certainly aren’t worth the costs of mass surveillance, which are well known. Without going into the abuses by the Stasi, the KGB, or Sisi’s Egypt — parallels that are often rejected in the name of American exceptionalism — the mass surveillance revealed by Snowden in 2013 comes at significant, measurable costs to the American economy. Studies also support the notion that the chilling effect — self-censorship due to the perception of surveillance — is real.  As Alvaro Bedoya pointed out in his conversation with Baker last Friday, being a black civil rights activist was effectively against the law during the COINTELPRO era.

As Yochai Benkler laid out in a recent article, “the fact of the matter is that institutional systems are highly imperfect, no less so than technological systems, and only a combination of the two is likely to address the vulnerability of individuals to the diverse sources of power and coercion they face.” We already know that institutional checks on surveillance powers are insufficient, even in democracies, and yes, even in the United States. Unbreakable, ubiquitous encryption is the technical check on surveillance power. The FBI doesn’t need, and should not have, backdoor access to communications, warrant or not. It should find another way to do its job.

Internet Governance Forum 2015 in João Pessoa, Brazil

I had the opportunity to travel to João Pessoa, Brazil for the 2015 Internet Governance Forum, a UN-sponsored multistakeholder event focused on Internet governance. I moderated a workshop on “Benchmarking ICT companies on human rights.”

There has been growing interest over the past few years in civil society efforts to hold ICT companies accountable for their impact on human rights. All stakeholders, including companies, have an interest in setting clear industry standards on privacy and freedom of expression. To that end, more research and comparative data about different companies’ policies and practices can encourage companies to compete with one another on respect for users’ rights. Given the international scope and complexity of the sector, this task is more than any single organization can fully tackle on a global scale, and it is important to recognize the diversity of goals and perspectives represented by organizations working in this space. The purpose of the roundtable workshop was to bring together a geographically diverse range of NGOs and researchers to share experiences and perspectives on creating projects to rank or rate ICT companies, with the goal of producing a “how to” guide on launching such projects as well as a collaborative network of organizations and researchers. Company and government stakeholders also provided feedback on how such projects can most effectively influence corporate practice and government policy.

Talk at Going Public with Privacy roundtable

On May 28, 2014, I gave a talk about privacy at a roundtable of educators, media literacy experts, youth media professionals, and academics co-hosted by the National Association for Media Literacy Education (NAMLE) and the Annenberg Innovation Lab. The first of a planned series of regional roundtables, it was an opportunity for a select group of diverse participants to gain a clearer understanding of challenges and concerns related to online privacy as experienced by key stakeholders such as K-12 students, parents, teachers, youth media creators, and policy-makers. The accompanying slides (not shown in the video) are at NAMLE roundtable May 2014 – Privacy overview.

Can a Problem be its Own Solution? Privacy, Technology and the Limits of Civic Hacking

In October 2013, I participated in a hackathon sponsored by the Annenberg Innovation Lab (AIL) on the theme of privacy. Having spent most of my career on the edges of the tech/geek scene, I was excited to get my hands dirty and perhaps finally learn how to code something more complicated than the text on a website. Unfortunately, the hackathon was scheduled for the same weekend as USC parents’ weekend, and attendance was sparse. While the three of us who showed up were guaranteed to “win,” it also meant that three strangers from very different personal and academic backgrounds would have to quickly come up with a technological solution to a privacy-related problem. The first step was to identify and agree on a problem to solve.

The Problem
As anyone with a pulse (and an Internet connection) will have noticed, online privacy is a very hot topic these days. Between Snowden’s revelations about the NSA’s electronic surveillance programs, Target’s data security breach affecting millions of customers’ credit card information, and heartbreaking accounts of cyber-stalking and cyber-bullying driving teens to suicide, the public has every reason to be wary of the dark forces lurking behind that glowing screen, and “online privacy” has become the cri de guerre for civil libertarians, consumer watchdogs and concerned parents from sea to shining sea.

The term “online privacy” itself is a bit of a misnomer. Privacy can’t be segmented into online or offline any more than dating or shopping can. Rather, privacy is newly vulnerable to threats coming from the Internet. As my teammates and I quickly realized, these threats (both real and perceived) are far from applying uniformly to everyone who goes online, a reality that the dominant discourse often ignores in favor of one-size-fits-all advice like “change your passwords frequently,” “don’t open unsolicited attachments,” and “don’t do your online banking on an unsecured public WiFi connection.” All good advice, to be sure – but far from addressing the range of perceived threats to privacy that the Internet poses for individuals.

I’ve alluded to three broad categories of online threats to privacy: surveillance (in the etymological sense, “being watched from above”), economic theft (ranging from the theft of a single credit card number to the usurpation of an entire identity), and context collapse (the increasing difficulty of keeping our multiple social selves separate when social networking sites relentlessly drive us toward convergence). I felt strongly that the surveillance problem was not one that could (or even should) be tackled technologically. The theft problem had already been tackled technologically, mostly successfully, and what progress could be made at the margins was beyond the technical chops of our merry little band of would-be hackers. This left the third problem: context collapse. Specifically, how can users make the most of the Web (including social networking sites and their increasingly confusing privacy settings) to meet professional and personal needs while safeguarding their privacy from online threats?

The Solution
Solving the problem of the optimal web presence would require two prior diagnoses: the user’s current, actual web presence, and the optimal web presence – what the user would implement if only s/he knew how. Once we knew where a user was and where s/he wanted to go, I reasoned, it would be pretty easy to give them a prescription for getting from A to B. Out of this disease/treatment model came the product’s name: the Privacy Doctor.

The project mock-up that came out of the hackathon envisioned a BuzzFeed-quiz-style inventory of social media habits, perceived threats to privacy, and desired uses and gratifications of an online presence. In addition to helping people solve a real problem, the Privacy Doctor would also educate users about how the Internet works (both technically and socially) and empower them to make decisions that best suited their needs and tolerance for risk. I felt very strongly that the Privacy Doctor should not collect any information that wasn’t strictly needed to perform the diagnosis or generate a prescription, and that what information we did collect should not be retained, much less monetized. I was, of course, thinking like a social scientist well-versed in the ethics of academic survey research: only ask for information that you really need, and prioritize your subjects’ privacy above all other considerations. I quickly discovered, though, that the tech start-up culture (epicenter: Silicon Valley) has its own guiding principles: being cool and making money. These are not things that I have ever been particularly good at, much to my chagrin.
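To make the disease/treatment model a little more concrete, here is a minimal sketch, in Python, of how the quiz-to-diagnosis-to-prescription flow might look. It is purely illustrative: the questions, the risk levels, and the diagnose/prescribe helpers are hypothetical names I am using for this write-up, not code we actually produced at the hackathon.

```python
# Hypothetical sketch of the Privacy Doctor's quiz -> diagnosis -> prescription flow.
# Questions, risk levels, and thresholds are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Answer:
    question_id: str
    value: str  # e.g. "public", "yes", "no"


@dataclass
class Diagnosis:
    context_collapse_risk: str  # "low" | "medium" | "high"
    notes: list[str] = field(default_factory=list)


QUIZ = {
    "profile_visibility": "Who can see your main social media profile?",
    "audience_mixing": "Do coworkers and family see the same posts as close friends?",
    "tagging": "Can others tag you in posts or photos without your approval?",
}

RISKY_ANSWERS = {
    ("profile_visibility", "public"),
    ("audience_mixing", "yes"),
    ("tagging", "yes"),
}


def diagnose(answers: list[Answer]) -> Diagnosis:
    """Map quiz answers to a rough context-collapse risk level."""
    risky = sum(1 for a in answers if (a.question_id, a.value) in RISKY_ANSWERS)
    level = ["low", "medium", "high"][min(risky, 2)]
    return Diagnosis(context_collapse_risk=level)


def prescribe(diagnosis: Diagnosis) -> list[str]:
    """Turn a diagnosis into human-readable steps the user applies themselves."""
    steps = []
    if diagnosis.context_collapse_risk != "low":
        steps.append("Review who can see past posts and restrict the default audience.")
        steps.append("Turn on tag review so posts naming you need your approval.")
    steps.append("Separate professional and personal audiences with lists or circles.")
    return steps


if __name__ == "__main__":
    answers = [
        Answer("profile_visibility", "public"),
        Answer("audience_mixing", "yes"),
        Answer("tagging", "no"),
    ]
    for step in prescribe(diagnose(answers)):
        print("-", step)
```

The point of the sketch is that the prescription stays human-readable: the user applies it by hand, which is exactly the step my teammates later wanted to automate away.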

There are two main ways to make money from a web start-up: either sell a service from the beginning (Uber, Lyft, AirBnB), or hope to be acquired by an established player for an extravagant sum of money (WhatsApp, Instagram). The tech behemoths (Google, Facebook) make their money from selling information about their users to marketers – you and I aren’t the customers, we’re the product. I didn’t want to charge money for what I considered a public service, and the idea of selling user data to marketers made my hair stand on end, so I convinced my partners that we could eventually “sell” the Privacy Doctor to an established organization dedicated to online privacy.

I had dispatched the profit-motive monster fairly easily, but avoiding the pressure to be “cool” proved just as hard as it was in high school. Suggestions included wearable tech, a Google Glass tie-in, and – most worryingly – automating as much of the process as possible. “No one is going to take the time to manually answer questions about their online presence – why can’t that be automated?” the argument went. My gut told me that asking users to allow the Privacy Doctor to log in to their online accounts (Facebook, Google, Twitter, and the like) defeated the purpose of a media literacy project, but I was willing to go along with it – as long as users still had the option of manually completing a questionnaire. I ran into the same argument with respect to the “prescription.” “You’re making it too hard for people – if they wanted to manually change their own Facebook privacy settings they would have done so on their own. People want technology to make their lives easier, not give them more work to do! Since people will have already given the Privacy Doctor permission to log in to their accounts for the diagnosis, there’s no reason it can’t also change their privacy settings for them if they want it to.”

If they want it to.

No one in their right mind would want such a thing, I thought. But that was also the reaction to any number of technologically-fueled social changes that we now take for granted, from personal lifestyle blogs that more closely resemble the journals and diaries of yore, to the practice of “checking in” to a location via Facebook or Foursquare, then broadcasting your whereabouts to a group of “friends,” who may or may not care where you are and whom you may or may not want to run into “accidentally on purpose.” Maybe my critics were right, and there was in fact demand for the service that I had just derided as creepy and evil. Maybe I just didn’t get it. After all, I hear that in Silicon Valley a 29-year-old like myself is bordering on the ancient and out-of-touch. But my gut kept telling me that asking people to give up their online passwords was no way to “fix” online privacy. If I wasn’t going to win a normative argument about what technology should or shouldn’t do, I’d move the argument to the empirical battlefield and do some market research.

What The Data Told Me
From Friday, February 14th, to Monday, February 24th, 2014, I collected 103 responses to a 10-item survey administered through SurveyMonkey. I used a convenience sample obtained through emailing the graduate student distribution list, posting on Facebook, and posting on Reddit. Additionally, several respondents elected to share the survey link on their own Facebook feeds or to e-mail it to their contacts. The survey instrument is online at https://www.surveymonkey.com/s/C2MBKQR. It included questions about demographics (age, gender, education), technology use (level of tech savvy, social media use), and online threats to privacy, whether real or perceived. My real interest, however, was in the last question: Would you be interested in using a service that logs into your social media accounts for you in order to assess your privacy settings and change them for you so that they meet your needs better?
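For what it’s worth, analyzing that final question required nothing fancier than a tally. The sketch below shows the kind of count one might run on an exported spreadsheet of responses; the CSV filename, column header, and answer labels are hypothetical stand-ins, not the actual SurveyMonkey export format.

```python
# Hypothetical tally of the final survey question from a CSV export of responses.
# The filename and the column header are illustrative assumptions.

import csv
from collections import Counter

QUESTION = (
    "Would you be interested in using a service that logs into your "
    "social media accounts for you in order to assess your privacy settings "
    "and change them for you so that they meet your needs better?"
)


def tally(path: str) -> Counter:
    """Count answers to the final question, skipping blank responses."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = (row.get(QUESTION) or "").strip()
            if answer:
                counts[answer] += 1
    return counts


if __name__ == "__main__":
    counts = tally("survey_export.csv")  # hypothetical export filename
    total = sum(counts.values())
    for answer, n in counts.most_common():
        print(f"{answer}: {n} ({n / total:.0%})")
```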

The data was unambiguous: the Privacy Doctor was a bad idea. The answers to the final open-ended question drove the point home. The quotes below are representative of the responses I received:

  • “Creating such an app would be such a PERFECT way to get access to massive amounts of private data that it would have to be handled very carefully. It would have to be open source and run entirely on the users own machine, not one scrap of code running remotely or communicating with your server. If someone uses your app, the VERY FIRST thing you should do is inform them that they are gullible and do not have the capacity to make reasonable judgments about what is safe or not online then advise them to get rid of their Internet-connected devices entirely.”
  • “I feel that a service that ‘logs into’ your social media is dangerous, even if it is made by a reputable company.”

Armed with data – the social scientist’s weapon of choice – all I had left to do was to break the news to my teammates and my mentors. The Privacy Doctor was Dead On Arrival.