This past July, I had the opportunity to give a talk on networked authoritarianism at the 12th Hackers On Planet Earth conference in New York City. I’ll update this post with a link to the video as soon as it’s available, but in the meantime, here is a PDF of my slides.
Who gets to express what ideas online, and how? Who has the authority and the responsibility to police online expression and through what mechanisms?
Dozens of researchers, advocates, and content moderation workers came together in Los Angeles this December to share expertise on what are emerging as the critical questions of the day. “All Things in Moderation” speakers and participants included experienced content moderators — like Rasalyn Bowden, who literally wrote the moderation manual for MySpace — and pioneering researchers who understood the profound significance of commercial content moderation before anyone else, alongside key staff from industry. After years of toiling in isolation, many of us working on content moderation issues felt relief at finally finding “our people” and seeing the importance of our work acknowledged.
If the idea that commercial content moderation matters is quickly gaining traction, there is no consensus on how best to study it — and until we understand how it works, we can’t know how to structure it in a way that protects human rights and democratic values. One of the first roundtables of the conference considered the methodological challenges to studying commercial content moderation, key among which is companies’ utter lack of transparency around these issues.
While dozens of companies in the information and communication technology (ICT) sector publish some kind of transparency report, these disclosures tend to focus on acts of censorship and privacy violations that companies undertake at the behest of governments. Companies are much more comfortable copping to removing users’ posts or sharing their data if they can argue that they were legally required to do it. They would much rather not talk about how their own activities and their business model impact not only people’s individual rights to free expression and privacy, but the very fabric of society itself. The data capitalism that powers Silicon Valley has created a pervasive influence infrastructure that’s freely available to the highest bidder, displacing important revenue from print journalism in particular. This isn’t the only force working to erode the power of the Fourth Estate to hold governments accountable, but it’s an undeniable one. As Victor Pickard and others have forcefully argued, the dysfunction in the American media ecosystem — which has an outsized impact on the global communications infrastructure — is rooted in the original sin of favoring commercial interests over the greater good of society. The FCC’s reversal of the 2015 net neutrality rules is only the latest datapoint in a decades-long trend.
The first step toward reversing the trend is to get ICT companies on the record about their commitments, policies and practices that affect users’ freedom of expression and privacy. We can then evaluate whether these disclosed commitments, policies and practices sufficiently respect users’ rights, push companies to do better, and hold them to account when they fail to live up to their promises. To that end, the Ranking Digital Rights (RDR) project (where I was a fellow between 2014 and 2017) has developed a rigorous methodology for assessing ICT companies’ public commitments to respect their users’ rights to freedom of expression and privacy. The inaugural Corporate Accountability Index, published in November 2015, evaluated 16 of the world’s most powerful ICT companies across 31 indicators, and found that no company in the Index disclosed any information whatsoever about the volume and type of user content that is deleted or blocked when enforcing its own terms of service. Indeed, Indicator F9 — examining data about terms of service enforcement — was the only indicator in the entire 2015 Index on which no company received any points.
We revamped the Index methodology for the 2017 edition, adding six new companies to the mix, and were encouraged to see that three companies — Microsoft, Twitter, and Google — had modest disclosures about terms of service enforcement. Though it didn’t disclose any data about enforcement volume, the South Korean company Kakao disclosed more about how it enforces its terms of service than any other company we evaluated. Research and company engagement for the 2018 Index are ongoing, and we are continuing to encourage companies to clearly communicate what kinds of content are and are not permitted on their platforms and how the rules are enforced (and by whom), and to develop meaningful remedy mechanisms for users whose freedom of expression has been unduly infringed. Stay tuned for the release of the 2018 Corporate Accountability Index this April.
Our experience has proven that this kind of research-based advocacy can have a real impact on company behavior, even if it’s never as fast as we might like. Ranking Digital Rights is committed to sharing our research methodology and our data (downloadable as a CSV file and in other formats) with colleagues in academia and the nonprofit sector. The Corporate Accountability Index is already being cited in media reports and scholarly research, and RDR is working closely with civil society groups around the world to hold a broader swath of companies accountable. All of RDR’s methodology documents, data, and other outputs are available under a Creative Commons license (CC-BY) — just make sure to give RDR credit.
Watch the video from my talk at MoneyLab #3: Failing Better from December 2016. The talk is based on my article “First they came for the poor: Surveillance of welfare recipients as an uncontested practice,” published in Media and Communication in 2015.
I had the opportunity to give a series of invited talks in late November and early December 2016. I presented my paper, First they came for the poor: surveillance of welfare recipients as an uncontested practice, at the Institute for Network Cultures’ MoneyLab conference in Amsterdam. I also discussed my dissertation research and my fellowship at Ranking Digital Rights as part of a lunch seminar at the University of Amsterdam’s DATACTIVE research lab. Finally, I represented Ranking Digital Rights at the opening conference of the University of Copenhagen’s The Peoples’ Internet research project.
Later in December, the Centre for International Governance Innovation published “Corporate Accountability for a Free and Open Internet,” by Rebecca MacKinnon, Priya Kumar and me, as part of its Global Commission on Internet Governance paper series. It will also be published as part of a volume in 2017.
Nathalie Maréchal, University of Southern California/Ranking Digital Rights
Jillian York, Centre for Internet and Human Rights/Onlinecensorship.org
Many Internet researchers straddle the line between academic Internet research and digital rights advocacy, and some are active in hacker communities as well. This workshop aims to strengthen the ties between these two modes of inquiry, leveraging AoIR 2016’s location in Berlin to invite digital rights activists from outside the academy to engage with the scholarly conversation. While many scholars and activists express interest in cross-sector collaboration, there are a number of barriers to such efforts, including mismatches between the career incentives, funding mechanisms, and timelines prevalent in the academic and NGO worlds. Nevertheless, the organizers of this half-day workshop have found that collaboration between civil society and academia is crucial both to research and to change.
The first portion of the workshop (two hours) aims to:
- Explore what “research” means to scholars and to activists
- Surface the barriers to cross-sector collaboration
- Brainstorm strategies for transcending such barriers
- Provide a networking forum for scholars and activists working on similar or complementary projects
The second portion of the workshop (two hours) will include a dive into the world of commercial content moderation. Using a fishbowl format, we will hear from experts looking at the topic from a variety of angles: as an issue of labor, of free expression, and of information hegemony. Participants will be encouraged to take part in the discussion and share new ideas for research and advocacy.
Interested participants should complete the registration form at https://goo.gl/forms/9isVlduYmkeqMeG43 by Sept 1, 2016. The organizers hope to have roughly equal participation from academia and from civil society, and will use the requested information to plan the details of the workshop.
Questions can be addressed to Nathalie Maréchal, firstname.lastname@example.org, at any time.
We look forward to seeing you in Berlin!
Nathalie & Jillian
We’re living in scary times. Terrorism in Europe. Rape on college campuses. Police violence in the U.S. Cop-killing in Canada. The most polarizing U.S. election in my lifetime. And that’s barely scratching the surface.
We’re also living in polarized times. Our lived experiences and scientific research tell us that we live in media echo chambers, surrounded by points of view we already agree with. We post, share and like in violent agreement with our friends without actually hearing, much less listening to, other points of view. The public sphere has imploded, and some days it feels like we’ve collectively given up on civil discourse.
We’re living in times where you can share a thought before you’ve fully thought it through, zing it out around the world on social media and belatedly realize you stepped in it. That you should have contextualized *why* you shared someone else’s thought without providing your own. That a like would have been sufficient. That just because you read a thing and thought it was interesting, doesn’t mean you have to share it. That being tired and multitasking is never a great plan, especially not when Facebook and violence are involved.
Last Friday I stepped in it a bit. I started the day ok, with a short post expressing what the slogan #BlackLivesMatter means to me, why it matters when white people affirm it, and how it all relates to human rights. So far, so good. It felt good to see the likes and supportive comments roll in.
But then I hit “share” on a few memes and posts from other people without contextualizing whether I agreed with every word choice, or if I was sharing them as “food for thought,” or what. I shared before I thought. That was dumb. It was human, but it was also pretty dumb.
I forgot that my 892 Facebook friends don’t share the same cultural understanding of the world, that they exist in very different contexts where the same meme means very different things. Some of that may be due to the echo chambers I mentioned earlier, but mostly it’s because I’m lucky to have a very diverse group of friends, family, colleagues and acquaintances. I have friends with high school educations and PhDs. Atheists, Muslims, Christians and Jews. People who only use Facebook for selfies and cat memes, and people who use it for political commentary and debate. Black parents who fear their sons might get shot by a cop, a self-appointed vigilante or some dude with a gun who hates hip hop. Members of the military and law enforcement who put their lives on the line to serve their countries and communities. Their loved ones who know the toll that service takes, and who fear the man or woman they love might not come home one day.
My forgetting all this is all the more inexcusable because I (should) know better. I’ve spent my life bouncing around countries, contexts and cultures. I have degrees in communication, of both the international and regular variety. I’ve read the work by danah boyd, Alice Marwick, Michael Wesch, and others on context collapse online — the discomfort that comes with interacting with your parents, your boss, your drinking buddies, and that one kid from high school who grew up to be a Trump voter on the same platform. I thought I had a plan to manage it, using Facebook’s granular privacy settings like only someone who reads privacy policies for a living would do (it’s a weird line of work I’m in, I know).
Facebook recently announced changes to the News Feed algorithm prioritizing personal Facebook content like selfies, vacation photos, and pet videos over news. At first it seemed to me like a cop-out, a way to avoid the hard work of getting it right when it comes to censorship, appearance of political bias, etc. I mostly use Facebook for reading recommendations from my friends who are interested in the same issues I am, or who know way more than I do about things that I want to learn more about. Issues like the Black Lives Matter movement and the context of systemic racism that surrounds it. For that, I appreciate political discourse on Facebook.
On the other hand, if Facebook were only for cuteness and pop culture, maybe the echo chambers would be just a little more permeable. Maybe there would be less armchair punditry (including my own) and we could have more thoughtful, nuanced conversations using a common set of facts as evidence. On that Facebook maybe I wouldn’t feel like I need a publicist to stop me from sticking my foot in my mouth. There could even be an alternate reality where I’d never have conversations involving the terms “personal brand,” “thought leader” or “public intellectual.” Maybe. But for that we’d have to get rid of cable news, too, and more.
Granted, no one is making me post political content to Facebook. Certainly lots of people have strict “no politics on social media” rules for themselves. I respect that. The problem is that not only am I an opinionated loudmouth (if the past is any indication, that seems unlikely to change), I’m also a scholar and activist focusing on human rights and the Internet. One of the great joys of my life is having intellectual discussions with my colleagues. Because they’re spread around the planet, these conversations happen on Facebook. This is a point I want to stress: for many of my friends and colleagues, this is what Facebook is for, the main reason we open that damn mobile app far too many times a day.
Now I get to why I felt compelled to share a barrage of “Black Lives Matter” posts and memes, including a few where I didn’t fully agree with every word. For months now, some of the recurring themes of the political discussions my friends have on Facebook have been the importance of the privileged (that means me) extending comfort to the oppressed and the threatened in their times of need, amplifying the voices of those who are silenced, and stressing that it is not the job of the oppressed to comfort those among the privileged who can’t stand being confronted by their privilege. Generally speaking, I try to comfort the afflicted and afflict the comfortable. One eloquent post that I read (but did not share) offered this exhortation to reshare content from black voices:
But I hope you do say something, even if it’s just a share (often, amplifying black voices is better than adding your own, so it’s win-win), and if you still don’t want to, I just want to make sure that you understand that it’s not about changing anything. It’s not about presuming you have power or influence in some grandstanding way that people will roll their eyes at (even if they do, and some of them will). It’s not about thinking you’re important or that people are listening to you. It’s about simply showing up for these people and making them feel less unheard and less alone.
I think that’s on point, and that was the guiding thought in my mind on Friday as I shared and re-posted words written by others. I stand by that sentiment. Many white people in the U.S., and many people of all backgrounds outside of the U.S., don’t seem to be acknowledging that the feelings of fear and outrage driving the Black Lives Matter movement are rooted in reality. White Americans don’t see this first hand, as John Scalzi illustrates, just as men don’t experience sexual harassment and rape culture the way women do. That’s why we need to listen to what Americans of color have to say, even when it’s uncomfortable, especially when we don’t agree with every word. For many white Americans, talking about race is extremely difficult, just as talking about gender is extremely difficult for many men. But if we can’t have those conversations with our friends and families, how are we going to have them in the broader society? And we must. That is the real, hard work of politics at its best. We’ve had far too much of politics at its abject worst lately.
These conversations are difficult for everyone, including for me. To me, that highlights that we must have them. I’ve been very gratified the past 24 hours by the private conversations I’ve had with friends and family. Conversations that started in a place of mutual incomprehension, but ultimately left all parties involved (I think) feeling heard and valued, and having learned something important. I wouldn’t have had those conversations if they hadn’t started on Facebook.
So I’ll continue having hard conversations online, including on Facebook. I can’t promise I’ll always get everything right, but I promise that I’ll try. Since amid all the horror of last week, the world also lost Elie Wiesel, I’ll give him the last word:
We must take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented. Sometimes we must interfere. When human lives are endangered, when human dignity is in jeopardy, national borders and sensitivities become irrelevant. Wherever men and women are persecuted because of their race, religion, or political views, that place must — at that moment — become the center of the universe.