Talk at Going Public with Privacy roundtable

On May 28, 2014 I gave a talk about privacy at a roundtable of educators, media literacy experts, youth media professionals, and academics co-hosted by the National Association for Media Literacy Education (NAMLE) and the Annenberg Innovation Lab. The first of a planned series of regional roundtables, this was an opportunity for a select group of diverse participants to gain a clearer understanding of challenges and concerns related to online privacy as experienced by key stakeholders such as K-12 students, parents, teachers, youth media creators, and policy-makers. The accompanying slides (not shown in the video) are at NAMLE roundtable May 2014 – Privacy overview.

Can a Problem be its Own Solution? Privacy, Technology and the Limits of Civic Hacking

In October 2013, I participated in a hackathon sponsored by the Annenberg Innovation Lab (AIL) on the theme of privacy. Having spent most of my career on the edges of the tech/geek scene, I was excited to get my hands dirty and perhaps finally learn how to code something more complicated than the text on a website. Unfortunately, the hackathon was scheduled for the same weekend as USC parents’ weekend, and attendance was sparse. While the three of us who showed up were guaranteed to “win,” it also meant that three strangers from very different personal and academic backgrounds would have to quickly come up with a technological solution to a privacy-related problem. The first step was to identify and agree on a problem to solve.

The Problem
As anyone with a pulse (and an Internet connection) will have noticed, online privacy is a very hot topic these days. Between Snowden’s revelations about the NSA’s electronic surveillance programs, Target’s data security breach affecting millions of customers’ credit card information, and heartbreaking accounts of cyber-stalking and cyber-bullying driving teens to suicide, the public has every reason to be wary of the dark forces lurking behind that glowing screen, and “online privacy” has become the cri de guerre for civil libertarians, consumer watchdogs and concerned parents from sea to shining sea.

The term “online privacy” itself is a bit of a misnomer. Privacy can’t be segmented into online or offline any more than dating or shopping can. Rather, privacy is newly vulnerable to threats coming from the Internet. As my teammates and I quickly realized, these threats (both real and perceived) are far from uniform across everyone who goes online, a reality that the dominant discourse often ignores in favor of one-size-fits-all advice like “change your passwords frequently,” “don’t open unsolicited attachments,” and “don’t do your online banking on an unsecured public WiFi connection.” All good advice, to be sure – but far from addressing the range of perceived threats to privacy that the Internet poses for individuals.

I’ve alluded to three broad categories of online threats to privacy: surveillance (in the etymological sense, “being watched from above”), economic theft (ranging from the theft of a single credit card number to the usurpation of an entire identity), and context collapse (the increasing difficulty of keeping our multiple social selves separate when social networking sites relentlessly drive us toward convergence). I felt strongly that the surveillance problem was not one that could (or even should) be tackled technologically. The theft problem had already been tackled technologically, mostly successfully, and what progress could be made at the margins was beyond the technical chops of our merry little band of would-be hackers. This left the third problem: context collapse. Specifically, how can users make the most of the Web (including social networking sites and their increasingly confusing privacy settings) to meet professional and personal needs while safeguarding their privacy from online threats?

The Solution
Solving the problem of the optimal web presence would require two prior diagnoses: the user’s current, actual web presence, and his or her optimal web presence – what the user would implement if only s/he knew how. Once we knew where a user was and where s/he wanted to go, I reasoned, it would be pretty easy to write a prescription for getting from A to B. Out of this disease/treatment model came the product’s name: the Privacy Doctor.
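To make the disease/treatment model concrete, here is a minimal sketch of the logic we had in mind, written in Python purely for illustration. The setting names and the wording of the recommendations are my own hypothetical inventions, not anything we actually built: diff the user’s current settings against the desired ones, and emit each mismatch as one step of the prescription.

    # A minimal sketch of the diagnosis/prescription logic.
    # Setting names and wording are hypothetical illustrations only.

    # Diagnosis A: the user's current, actual settings.
    current = {
        "posts_visible_to": "public",
        "location_tagging": "on",
        "search_engines_can_index_profile": True,
    }

    # Diagnosis B: the settings the user would choose if s/he knew how.
    desired = {
        "posts_visible_to": "friends",
        "location_tagging": "off",
        "search_engines_can_index_profile": False,
    }

    def prescribe(current, desired):
        """Return human-readable steps for getting from A to B."""
        return [
            f"Change '{setting}' from {current.get(setting)!r} to {target!r}."
            for setting, target in desired.items()
            if current.get(setting) != target
        ]

    for step in prescribe(current, desired):
        print(step)

The design choice that matters here, and to which I return below, is that the output is a list of instructions for the user to carry out, not changes made on the user’s behalf.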

The project mock-up that came out of the hackathon envisioned a BuzzFeed-quiz-style inventory of social media habits, perceived threats to privacy, and desired uses and gratifications of an online presence. In addition to helping people solve a real problem, the Privacy Doctor would also educate users about how the Internet works (both technically and socially) and empower them to make decisions that best suited their needs and tolerance for risk. I felt very strongly that the Privacy Doctor should not collect any information that wasn’t strictly needed to perform the diagnosis or generate a prescription, and that what information we did collect should not be retained, much less monetized. I was, of course, thinking like a social scientist well-versed in the ethics of academic survey research: only ask for information that you really need, and prioritize your subjects’ privacy above all other considerations. I quickly discovered, though, that the tech start-up culture (epicenter: Silicon Valley) has its own guiding principles: being cool and making money. These are not things that I have ever been particularly good at, much to my chagrin.
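As a rough illustration of how such a quiz could feed the diagnosis, here is a hypothetical sketch that maps a user’s answers onto the desired-settings dictionary from the earlier sketch. The question wording and the answer-to-setting mappings are invented; the mock-up itself never got past paper.

    # Hypothetical quiz items mapping answers onto privacy settings.
    # Question wording and mappings are invented for illustration.
    QUIZ = [
        {
            "question": "Who should be able to see your posts?",
            "setting": "posts_visible_to",
            "answers": {"Everyone": "public", "Just friends": "friends"},
        },
        {
            "question": "Should strangers find your profile via a search engine?",
            "setting": "search_engines_can_index_profile",
            "answers": {"Sure": True, "No thanks": False},
        },
    ]

    def desired_from_answers(answers):
        """Translate a user's quiz answers into a desired-settings dict."""
        return {
            item["setting"]: item["answers"][answers[item["question"]]]
            for item in QUIZ
            if item["question"] in answers
        }

    desired = desired_from_answers({
        "Who should be able to see your posts?": "Just friends",
        "Should strangers find your profile via a search engine?": "No thanks",
    })
    # desired == {"posts_visible_to": "friends",
    #             "search_engines_can_index_profile": False}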

There are two main ways to make money from a web start-up: either sell a service from the beginning (Uber, Lyft, AirBnB), or hope to be acquired by an established player for an extravagant sum of money (WhatsApp, Instagram). The tech behemoths (Google, Facebook) make their money from selling information about their users to marketers – you and I aren’t the customers, we’re the product. I didn’t want to charge money for what I considered a public service, and the idea of selling user data to marketers made my hair stand on end, so I convinced my partners that we could eventually “sell” the Privacy Doctor to an established organization dedicated to online privacy.

I had dispatched the profit motive monster fairly easily, but avoiding the pressure to be “cool” proved just as hard as it was in high school. Suggestions included wearable tech, a Google Glass tie-in, and – most worryingly – automating as much of the process as possible. “No one is going to take the time to manually answer questions about their online presence – why can’t that be automated?” the argument went. My gut told me that asking users to allow the Privacy Doctor to log in to their online accounts (Facebook, Google, Twitter, and the like) defeated the purpose of a media literacy project, but I was willing to go along with it – as long as users still had the option of manually completing a questionnaire. I ran into the same argument with respect to the “prescription.” “You’re making it too hard for people – if they wanted to manually change their own Facebook privacy settings they would have done so on their own. People want technology to make their lives easier, not give them more work to do! Since people will have already given the Privacy Doctor permission to log in to their accounts for the diagnosis, there’s no reason it can’t also change their privacy settings for them if they want it to.”

If they want it to.

No one in their right mind would want such a thing, I thought. But that was also the reaction to any number of technologically fueled social changes that we now take for granted, from personal lifestyle blogs that more closely resemble the journals and diaries of yore, to the practice of “checking in” to a location via Facebook or Foursquare, then broadcasting your whereabouts to a group of “friends,” who may or may not care where you are and whom you may or may not want to run into “accidentally on purpose.” Maybe my critics were right, and there was in fact demand for the service that I had just derided as creepy and evil. Maybe I just didn’t get it. After all, I hear that in Silicon Valley a 29-year-old like myself is bordering on the ancient and out-of-touch. But my gut kept telling me that asking people to give up their online passwords was no way to “fix” online privacy. If I wasn’t going to win a normative argument about what technology should or shouldn’t do, I’d move the argument to the empirical battlefield and do some market research.

What The Data Told Me
From Friday, February 14th to Monday, February 24th, 2014, I collected 103 responses to a 10-item survey administered through SurveyMonkey. I used a convenience sample obtained by emailing the graduate student distribution list, posting on Facebook, and posting on Reddit. Additionally, several respondents elected to share the survey link on their own Facebook feeds or to e-mail it to their contacts. The survey instrument is online at https://www.surveymonkey.com/s/C2MBKQR. It included questions about demographics (age, gender, education), technology use (level of tech savvy, social media use), and online threats to privacy, whether real or perceived. My real interest, however, was in the last question: “Would you be interested in using a service that logs into your social media accounts for you in order to assess your privacy settings and change them for you so that they meet your needs better?”

The data was unambiguous: the Privacy Doctor was a bad idea. The answers to the final open-ended question drove the point home. The quotes below are representative of the responses I received:

  • “Creating such an app would be such a PERFECT way to get access to massive amounts of private data that it would have to be handled very carefully. It would have to be open source and run entirely on the users own machine, not one scrap of code running remotely or communicating with your server. If someone uses your app, the VERY FIRST thing you should do is inform them that they are gullible and do not have the capacity to make reasonable judgments about what is safe or not online then advise them to get rid of their Internet-connected devices entirely.”
  • “I feel that a service that ‘logs into’ your social media is dangerous, even if it is made by a reputable company.”

Armed with data – the social scientist’s weapon of choice – all I had left to do was to break the news to my teammates and my mentors. The Privacy Doctor was Dead On Arrival.