This past July, I had the opportunity to give a talk on networked authoritarianism at the 12th Hackers On Planet Earth conference in New York City. I’ll update this post with a link to the video as soon as it’s available, but in the meantime, here is a PDF of my slides.
Who gets to express what ideas online, and how? Who has the authority and the responsibility to police online expression and through what mechanisms?
Dozens of researchers, advocates, and content moderation workers came together in Los Angeles this December to share expertise on what are emerging as the critical questions of the day. “All Things in Moderation” speakers and participants included experienced content moderators — like Rasalyn Bowden, who literally wrote the moderation manual for MySpace — and pioneering researchers who understood the profound significance of commercial content moderation before anyone else, alongside key staff from industry. After years of toiling in isolation, many of us working on content moderation issues felt relief at finally finding “our people” and seeing the importance of our work acknowledged.
While the idea that commercial content moderation matters is quickly gaining traction, there is no consensus on how best to study it — and until we understand how it works, we can’t know how to structure it in a way that protects human rights and democratic values. One of the first roundtables of the conference considered the methodological challenges of studying commercial content moderation, key among which is companies’ utter lack of transparency around these issues.
While dozens of companies in the information and communication technology (ICT) sector publish some kind of transparency report, these disclosures tend to focus on acts of censorship and privacy violations that companies undertake at the behest of governments. Companies are much more comfortable copping to removing users’ posts or sharing their data if they can argue that they were legally required to do it. They would much rather not talk about how their own activities and their business model impact not only people’s individual rights to free expression and privacy, but the very fabric of society itself. The data capitalism that powers Silicon Valley has created a pervasive influence infrastructure that’s freely available to the highest bidder, displacing important revenue from print journalism in particular. This isn’t the only force working to erode the power of the Fourth Estate to hold governments accountable, but it’s an undeniable one. As Victor Pickard and others have forcefully argued, the dysfunction in the American media ecosystem — which has an outsized impact on the global communications infrastructure — is rooted in the original sin of favoring commercial interests over the greater good of society. The FCC’s reversal of the 2015 net neutrality rules is only the latest datapoint in a decades-long trend.
The first step toward reversing the trend is to get ICT companies on the record about their commitments, policies and practices that affect users’ freedom of expression and privacy. We can then evaluate whether these disclosed commitments, policies and practices sufficiently respect users’ rights, push companies to do better, and hold them to account when they fail to live up to their promises. To that end, the Ranking Digital Rights (RDR) project (where I was a fellow between 2014 and 2017) has developed a rigorous methodology for assessing ICT companies’ public commitments to respect their users’ rights to freedom of expression and privacy. The inaugural Corporate Accountability Index, published in November 2015, evaluated 16 of the world’s most powerful ICT companies across 31 indicators, and found that no company in the Index disclosed any information whatsoever about the volume and type of user content that is deleted or blocked when enforcing its own terms of service. Indeed, Indicator F9 — examining data about terms of service enforcement — was the only indicator in the entire 2015 Index on which no company received any points.
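That across-the-board zero on Indicator F9 is the kind of pattern that falls out of the Index data once it is in tabular form. As a minimal sketch — the company names, indicator labels, and scores below are invented for illustration, not RDR’s actual data — one could scan a company-by-indicator score table for indicators on which no company earned any points:

```python
# Hypothetical sketch: find indicators on which every company scored zero,
# mirroring the 2015 Index's finding about Indicator F9.
# All names and numbers here are invented, not actual RDR data.
scores = {
    "CompanyA": {"F8": 25, "F9": 0, "P1": 50},
    "CompanyB": {"F8": 0,  "F9": 0, "P1": 30},
    "CompanyC": {"F8": 10, "F9": 0, "P1": 0},
}

# Collect every indicator that appears in any company's scorecard.
indicators = sorted({ind for per_company in scores.values() for ind in per_company})

# An indicator fails across the board if every company scored zero on it.
zero_everywhere = [
    ind for ind in indicators
    if all(per_company.get(ind, 0) == 0 for per_company in scores.values())
]

print(zero_everywhere)
```

In this invented data, only “F9” appears in the result, since at least one company earned points on every other indicator.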
We revamped the Index methodology for the 2017 edition, adding six new companies to the mix, and were encouraged to see that three companies — Microsoft, Twitter, and Google — had modest disclosures about terms of service enforcement. Though it didn’t disclose any data about enforcement volume, the South Korean company Kakao disclosed more about how it enforces its terms of service than any other company we evaluated. Research and company engagement for the 2018 Index are ongoing, and we are continuing to encourage companies to clearly communicate what kind of content is or is not permitted on their platforms, how the rules are enforced (and by whom), and to develop meaningful remedy mechanisms for users whose freedom of expression has been unduly infringed. Stay tuned for the release of the 2018 Corporate Accountability Index this April.
Our experience has proven that this kind of research-based advocacy can have a real impact on company behavior, even if it’s never as fast as we might like. Ranking Digital Rights is committed to sharing our research methodology and our data (downloadable as a CSV file and in other formats) with colleagues in academia and the nonprofit sector. The Corporate Accountability Index is already being cited in media reports and scholarly research, and RDR is working closely with civil society groups around the world to hold a broader swath of companies accountable. All of RDR’s methodology documents, data, and other outputs are available under a Creative Commons license (CC-BY) — just make sure to give RDR credit.
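Because the data is published as CSV, secondary analysis requires nothing beyond a standard library. A minimal sketch with Python’s built-in `csv` module, assuming hypothetical column names (“Company”, “Indicator”, “Score”) for illustration — the actual RDR download may use a different schema:

```python
import csv
from collections import defaultdict

def average_scores(path):
    """Compute each company's average score from an RDR-style CSV export.

    Assumes columns named "Company", "Indicator", and "Score"; these are
    illustrative assumptions, not the actual RDR file layout.
    """
    totals = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["Company"]].append(float(row["Score"]))
    # Average the per-indicator scores for each company.
    return {company: sum(vals) / len(vals) for company, vals in totals.items()}
```

A researcher could then sort the resulting dictionary to reproduce a simple ranking, or join it with other datasets for comparative work.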
Watch the video from my talk at MoneyLab #3: Failing Better from December 2015. The talk is based on my article “First they came for the poor: Surveillance of welfare recipients as an uncontested practice,” published in Media and Communication in 2015.
I had the opportunity to give a series of invited talks in late November and early December 2016. I presented my paper, “First they came for the poor: Surveillance of welfare recipients as an uncontested practice,” at the Institute for Network Cultures’ MoneyLab conference in Amsterdam. I also discussed my dissertation research and my fellowship at Ranking Digital Rights as part of a lunch seminar at the University of Amsterdam’s DATACTIVE research lab. Finally, I represented Ranking Digital Rights at the opening conference of the University of Copenhagen’s The Peoples’ Internet research project.
Later in December, the Centre for Innovation in Global Governance published “Corporate Accountability for a Free and Open Internet,” by Rebecca MacKinnon, Priya Kumar and myself, as part of its Global Commission on Internet Governance paper series. It will also be published as part of a volume in 2017.
Nathalie Maréchal, University of Southern California/Ranking Digital Rights
Jillian York, Centre for Internet and Human Rights/Onlinecensorship.org
Many Internet researchers straddle the line between academic Internet research and digital rights advocacy, including within hacker communities. This workshop aims to strengthen the ties between these two modes of inquiry, leveraging AoIR 2016’s location in Berlin to invite digital rights activists from outside the academy to engage with the scholarly conversation. While many scholars and activists express interest in cross-sector collaboration, there are a number of barriers to such efforts, including mismatches between the career incentives, funding mechanisms, and timelines prevalent in the academic and NGO worlds. Nevertheless, the organizers of this half-day workshop have found that collaboration between civil society and academia is crucial both to research and to change.
The first portion of the workshop (two hours) aims to:
- Explore what “research” means to scholars and to activists
- Surface the barriers to cross-sector collaboration
- Brainstorm strategies for transcending such barriers
- Provide a networking forum for scholars and activists working on similar or complementary projects
The second portion of the workshop (two hours) will include a dive into the world of commercial content moderation. Using a fishbowl format, we will hear from experts looking at the topic from a variety of angles: as an issue of labor, of free expression, and of information hegemony. Participants will be encouraged to take part in the discussion and share new ideas for research and advocacy.
Interested participants should complete the registration form at https://goo.gl/forms/9isVlduYmkeqMeG43 by Sept 1, 2016. The organizers hope to have roughly equal participation from academia and from civil society, and will use the requested information to plan the details of the workshop.
Questions can be addressed to Nathalie Maréchal, email@example.com, at any time.
We look forward to seeing you in Berlin!
Nathalie & Jillian
We’re living in scary times. Terrorism in Europe. Rape on college campuses. Police violence in the U.S. Cop-killing in Canada. The most polarizing U.S. election in my lifetime. And that’s barely scratching the surface.
We’re also living in polarized times. Our lived experiences and scientific research tell us that we live in media echo chambers, surrounded by points of view we already agree with. We post, share and like in violent agreement with our friends without actually hearing, much less listening to, other points of view. The public sphere has imploded, and some days it feels like we’ve collectively given up on civil discourse.
We’re living in times where you can share a thought before you’ve fully thought it through, zing it out around the world on social media and belatedly realize you stepped in it. That you should have contextualized *why* you shared someone else’s thought without providing your own. That a like would have been sufficient. That just because you read a thing and thought it was interesting, doesn’t mean you have to share it. That being tired and multitasking is never a great plan, especially not when Facebook and violence are involved.
Last Friday I stepped in it a bit. I started the day ok, with a short post expressing what the slogan #BlackLivesMatter means to me, why it matters when white people affirm it, and how it all relates to human rights. So far, so good. It felt good to see the likes and supportive comments roll in.
But then I hit “share” on a few memes and posts from other people without contextualizing whether I agreed with every word choice, or if I was sharing them as “food for thought,” or what. I shared before I thought. That was dumb. It was human, but it was also pretty dumb.
I forgot that my 892 Facebook friends don’t share the same cultural understanding of the world, that they exist in very different contexts where the same meme means very different things. Some of that may be due to the echo chambers I mentioned earlier, but mostly it’s because I’m lucky to have a very diverse group of friends, family, colleagues and acquaintances. I have friends with high school educations and PhDs. Atheists, Muslims, Christians and Jews. People who only use Facebook for selfies and cat memes, and people who use it for political commentary and debate. Black parents who fear their sons might get shot by a cop, a self-appointed vigilante or some dude with a gun who hates hip hop. Members of the military and law enforcement who put their lives on the line to serve their countries and communities. Their loved ones who know the toll that service takes, and who fear the man or woman they love might not come home one day.
My forgetting all this is all the more inexcusable because I (should) know better. I’ve spent my life bouncing around countries, contexts and cultures. I have degrees in communication, of both the international and regular variety. I’ve read the work by danah boyd, Alice Marwick, Michael Wesch, and others on context collapse online — the discomfort that comes with interacting with your parents, your boss, your drinking buddies, and that one kid from high school who grew up to be a Trump voter on the same platform. I thought I had a plan to manage it, using Facebook’s granular privacy settings like only someone who reads privacy policies for a living would do (it’s a weird line of work I’m in, I know).
Facebook recently announced changes to the News Feed algorithm prioritizing personal Facebook content like selfies, vacation photos, and pet videos over news. At first it seemed to me like a cop-out, a way to avoid the hard work of getting it right when it comes to censorship, appearance of political bias, etc. I mostly use Facebook for reading recommendations from my friends who are interested in the same issues I am, or who know way more than I do about things that I want to learn more about. Issues like the Black Lives Matter movement and the context of systemic racism that surrounds it. For that, I appreciate political discourse on Facebook.
On the other hand, if Facebook were only for cuteness and pop culture, maybe the echo chambers would be just a little more permeable. Maybe there would be less armchair punditry (including my own) and we could have more thoughtful, nuanced conversations using a common set of facts as evidence. On that Facebook maybe I wouldn’t feel like I need a publicist to stop me from sticking my foot in my mouth. There could even be an alternate reality where I’d never have conversations involving the terms “personal brand,” “thought leader” or “public intellectual.” Maybe. But for that we’d have to get rid of cable news, too, and more.
Granted, no one is making me post political content to Facebook. Certainly lots of people have strict “no politics on social media” rules for themselves. I respect that. The problem is that not only am I an opinionated loudmouth (if the past is any indication, that seems unlikely to change), I’m also a scholar and activist focusing on human rights and the Internet. One of the great joys of my life is having intellectual discussions with my colleagues. Because they’re spread around the planet, these conversations happen on Facebook. This is a point I want to stress: for many of my friends and colleagues, this is what Facebook is for, the main reason we open that damn mobile app far too many times a day.
Now I get to why I felt compelled to share a barrage of “Black Lives Matter” posts and memes, including a few I didn’t agree with word for word. For months now, some of the recurring themes of the political discussions my friends have on Facebook have been the importance of the privileged (that means me) extending comfort to the oppressed and the threatened in their times of need, amplifying the voices of those who are silenced, and stressing that it is not the job of the oppressed to comfort those among the privileged who can’t stand being confronted by their privilege. Generally speaking, I try to comfort the afflicted and afflict the comfortable. One eloquent post that I read (but did not share) offered this exhortation to reshare content from black voices:
But I hope you do say something, even if it’s just a share (often, amplifying black voices is better than adding your own, so it’s win-win), and if you still don’t want to, I just want to make sure that you understand that it’s not about changing anything. It’s not about presuming you have power or influence in some grandstanding way that people will roll their eyes at (even if they do, and some of them will). It’s not about thinking you’re important or that people are listening to you. It’s about simply showing up for these people and making them feel less unheard and less alone.
I think that’s on point, and that was the guiding thought in my mind on Friday as I shared and re-posted words written by others. I stand by that sentiment. Many white people in the U.S., and many people of all backgrounds outside of the U.S., don’t seem to be acknowledging that the feelings of fear and outrage driving the Black Lives Matter movement are rooted in reality. White Americans don’t see this first hand, as John Scalzi illustrates, just as men don’t experience sexual harassment and rape culture the way women do. That’s why we need to listen to what Americans of color have to say, even when it’s uncomfortable, especially when we don’t agree with every word. For many white Americans, talking about race is extremely difficult, just as talking about gender is extremely difficult for many men. But if we can’t have those conversations with our friends and families, how are we going to have them in the broader society? And we must. That is the real, hard work of politics at its best. We’ve had far too much of politics at its abject worst lately.
These conversations are difficult for everyone, including for me. To me, that highlights that we must have them. I’ve been very gratified the past 24 hours by the private conversations I’ve had with friends and family. Conversations that started in a place of mutual incomprehension, but ultimately left all parties involved (I think) feeling heard and valued, and having learned something important. I wouldn’t have had those conversations if they hadn’t started on Facebook.
So I’ll continue having hard conversations online, including on Facebook. I can’t promise I’ll always get everything right, but I promise that I’ll try. Since amid all the horror of last week, the world also lost Elie Wiesel, I’ll give him the last word:
We must take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented. Sometimes we must interfere. When human lives are endangered, when human dignity is in jeopardy, national borders and sensitivities become irrelevant. Wherever men and women are persecuted because of their race, religion, or political views, that place must — at that moment — become the center of the universe.
Death is always tragic. Murder is always abhorrent. Shooting police officers for doing their jobs is reprehensible, as is the pattern (is that word even strong enough? No) of police violence against black Americans. Outside of any slogan, in the normal conversational sense, of course all lives have meaning. Of course all people matter. All people have human rights. That’s not up for debate with me.
But as a political slogan, in the current US climate, saying that “all lives matter” is a deliberate erasure of the fact that in the US, right now, black lives are valued as less than. We see this in our schools, in our legal system, in our prisons, in our media, in health outcomes, everywhere in our culture. This is systemic racism. It exists not because individual white people are racist (though many are), but because the system is rigged. That’s the legacy of our history. No one alive today created this system, but those of us who benefit from it (that means me) have a moral duty to recognize this reality, to check our privilege, and to listen to what the humans at the receiving end of this structural violence have to say.
I realize the irony of getting on my soapbox to say that we white people need to listen more. I do it because my friends and colleagues who are people of color tell me that it matters to them when white people (try to) amplify their voices. Because that’s the only way that some white people will ever hear this message. Because to have said something after Orlando, as I did, and to stay silent now would be even worse. Because as someone who claims to be a human rights activist, to stay silent about this would be racist.
Saying that “black lives matter” is an affirmation that black lives SHOULD matter as much as other lives, especially white lives. Right now, in America, today, they don’t. Not as much as white lives do, or, indeed, as blue ones do. Saying that “black lives matter” is not a claim that other lives (including police lives) don’t matter. If you hear it that way, I’d encourage you to think about why that is. I’d be happy to have a mutually respectful conversation about it, in whatever medium you prefer.
The dream of universal human rights is about erasing the age-old reality that life is a zero-sum game. That you can only win if someone else loses. We should all keep dreaming, but it is the responsibility of each and every one of us to do our part.
I’ll stop talking now.
At last Friday’s Color of Surveillance conference, FBI General Counsel James A. Baker called for checks on government power to prevent the kinds of abuses discussed during the conference’s historically focused panels, notably COINTELPRO, surveillance of the civil rights and anti-war movements, and the intimidation of activist leaders — including Martin Luther King, Jr. But he also asked if we, as a society, are okay with the public security implications of unbreakable encryption. That question, and the assumptions behind it, exemplifies why the FBI doesn’t get encryption.
The question assumes that there are objective, measurable costs to unbreakable encryption. More fundamentally, it also assumes that untraceable communication is new, when it is anything but: for most of human history, communication was either oral or written on artifacts that had to be physically transported. Encryption is better understood as a return to the pre-digital status quo, when records were the exception rather than the rule, and the secrecy of correspondence was a sacrosanct rule. Far from “going dark,” we are living in a golden age of surveillance, and we don’t need to go all the way back to the Church Committee report to find evidence that institutional checks on the government’s power are insufficient. Yet last week Baker reiterated the bewildering claim that the FISA court is a meaningful check on NSA surveillance, when it serves as a rubber stamp at best.
The FBI argues that certain investigations would be easier to conduct if the public communicated in the clear. The recent brouhaha over the San Bernardino shooter’s iPhone 5c provides one such example. But the argument falls apart on close examination.
The security community has said from the beginning that the FBI never needed Apple’s help unlocking the Farook phone. It was always about setting a legal precedent that they can compel companies to break their own products. The Farook phone presented a unique opportunity as a test case: it was very unlikely to contain new useful information (it was a work phone, and Farook had physically destroyed his two personal phones; neither the 30-day iCloud backup nor the call/SMS metadata revealed any non-work activity associated with that device), but it was connected to an ongoing investigation, and the FBI could plausibly argue that the phone might contain crucial information, thus playing on the media and the public’s fear of terrorism and poor understanding of the underlying tech issues. The case’s conclusion supports this interpretation, as does the fanciful nature of some of the FBI’s arguments (I’m still waiting for an explanation of what a “dormant cyber pathogen” might be).
What seems likely is that the FBI’s leadership, which has been fighting a losing battle against encryption for decades, had been waiting for the right opportunity to push for a court ruling requiring a private company to hack its own product, and thought the Farook phone would get them what they wanted. They miscalculated, and found an out that would allow them to save face with respect to the general public. The experts of course know that the FBI has egg all over its face, but the FBI doesn’t care what experts think. It’s going for a political win, and we are clearly living in a post-empirical political environment.
The U.S. is currently governed by fear and emotion, rather than facts. Terrorism and child abuse imagery are probably the most frequently invoked horsemen of the infocalypse these days. Senator Dianne Feinstein — whose anti-encryption draft bill was leaked on Friday as well — has raised the bogeyman of a pedophile communicating with her grandchildren through their gaming consoles to justify banning encryption. But it is just that, a bogeyman: there are many more effective ways to protect children from predators (including appropriate parental controls on devices used by children, and talking to kids about risks both online and in the real world, for example) that neither sacrifice the security and privacy of millions of individuals, nor jeopardize society’s ability to evolve and change. She has similarly claimed that the terrorist cell behind the Paris and Brussels attacks used encryption, when in fact it seems that their op-sec relied on burner phones, insecure tools like Facebook and on face-to-face communication. It seems that European law enforcement’s tragic failure to prevent the recent attacks should be attributed to poor coordination between agencies (much as 9/11 was), not encryption.
So these “costs of unbreakable encryption” are neither proven, nor unavoidable through other means, nor different from the world humanity was stuck with until the 1990s. They certainly aren’t worth the costs of mass surveillance, which are well known. Without going into the abuses by the Stasi, the KGB, or Sisi’s Egypt — parallels that are often rejected in the name of American exceptionalism — the mass surveillance revealed by Snowden in 2013 comes at significant, measurable costs to the American economy. Studies also support the notion that the chilling effect — self-censorship due to the perception of surveillance — is real. As Alvaro Bedoya pointed out in his conversation with Baker last Friday, being a black civil rights activist was effectively against the law during the COINTELPRO era.
As Yochai Benkler laid out in a recent article, “the fact of the matter is that institutional systems are highly imperfect, no less so than technological systems, and only a combination of the two is likely to address the vulnerability of individuals to the diverse sources of power and coercion they face.” We already know that institutional checks on surveillance powers are insufficient, even in democracies, and yes, even in the United States. Unbreakable, ubiquitous encryption is the technical check on surveillance power. The FBI doesn’t need, and should not have, backdoor access to communications, warrant or not. It should find another way to do its job.
I had the opportunity to travel to João Pessoa, Brazil for the 2015 Internet Governance Forum, a UN-sponsored multistakeholder event focused on Internet governance. I moderated a workshop on “Benchmarking ICT companies on human rights.”
There has been growing interest over the past few years in civil society efforts to hold ICT companies accountable for their impact on human rights. All stakeholders, including companies, have an interest in setting clear industry standards on privacy and freedom of expression. To that end, more research and comparative data about different companies’ policies and practices can encourage companies to compete with one another on respect for users’ rights. Given the international scope and complexity of the sector, this task is more than any single organization can fully tackle on a global scale, and it is important to recognize the diversity of goals and perspectives represented by organizations working in this space. The purpose of this roundtable workshop is to bring together a geographically diverse range of NGOs and researchers to share experiences and perspectives on creating projects to rank or rate ICT companies. The goal is to create a “how to” guide for launching such projects as well as a collaborative network of organizations and researchers. Company and government stakeholders will also provide feedback on how such projects can most effectively influence corporate practice and government policy.