Interviewer: Jillian York

Ron Deibert is a Canadian professor of political science, a philosopher, an author, and the founder of the renowned Citizen Lab, situated in the Munk School of Global Affairs at the University of Toronto. He is perhaps best known to readers for his research on targeted surveillance, which won the Citizen Lab a 2015 EFF Award. I had the pleasure of working with Ron early on in my career on another project he co-founded, the OpenNet Initiative, a project that documented internet filtering (blocking) in more than 65 countries, and his mentorship and work have been incredibly influential for me. We sat down for an interview to discuss his views on free expression, its overlaps with privacy, and much more.

York: What does free expression mean to you?

The way that I think about it is from the perspective of my profession, which is as a professor. And at the core of being an academic is the right – the imperative – to speak freely. Free expression is a foundational element of what it is to be an academic, especially when you’re doing the kind of academic research that I do. So that’s the way I think about it. Even though I’ve done a lot of research on threats to free expression online and various sorts of chilling effects that I can talk about…for me personally, it really boils down to this. I recognize it’s a privileged position: I have tenure, I’m a full-time professor at an established university…so I feel that I have an obligation to speak freely. And I don’t take that for granted, because there are so many parts of the world where the type of work that we do, the things that we speak about, just wouldn’t be allowed.

York: Tell me about an early experience that shaped your views on free expression or brought you to the work that you do.

The recognition that there were ways in which governments—either on their own or with internet service providers—were putting in place filtering mechanisms to prevent access to content. When we first started in the early 2000s there was still this mythology around the internet that it would be a forum for free expression and access to information. I was skeptical. Coming from a security background, with a familiarity with intelligence practices, I thought: this isn’t going to be easy. There’ll be lots of ways in which governments are going to restrict free speech and access to information. And we started discovering that and systematically mapping it.

That was one of the first projects at the Citizen Lab: documenting internet censorship. There was another time, probably in the late 2000s, when I remember you and Helmi Noman talking about the ways in which internet censorship has an impact on content online. In other words, what he meant is that if websites are censored, after a while their owners realize there’s no point in maintaining them, because their principal audience is restricted from accessing that information, and so they just shut them down. That always stuck in my head. Later, Jon Penney started doing a lot of work on how surveillance affects freedom of expression. And again there, I thought that was an interesting, not so obvious connection between surveillance and free expression.

York: You shifted streams a while back from a heavy focus on censorship to surveillance research. How do you view the connection between censorship and surveillance, free expression, and privacy?

They’re all a mix. I see this as a highly contested space. You have contestation occurring from different sectors of society. So governments are obviously trying to manage and control things. And when governments are towards the more authoritarian end of the spectrum they’re obviously trying to limit free expression and access to information and undertake surveillance in order to constrain popular democratic participation and hide what they’re doing. And so now we see that there’s an extraordinary toolkit available to them, most of it coming from the private sector. And then with the private sector you have different motivations, usually driven principally by business considerations – which can end up, often in unintended ways, chilling free expression.

The example I think of is this: if social media platforms loosen the reins over what is acceptable speech and allow much more leeway in the types of content that people can post – including potentially hateful, harmful content – I have seen, on the other end of that, speaking to victims of surveillance, that they’re effectively intimidated out of the public sphere. They feel threatened; they don’t want to communicate. And that’s because of something you could even give the managers of the platforms some credit for – you could say, well, they’re doing this to maximize free speech – when in fact they’re creating the conditions for harmful speech to proliferate and actually silence people. And of course these are age-old battles. It isn’t anything particular to the internet or social media; it’s about the boundaries around free expression in a liberal, democratic society.

Where do we draw the lines? How do we regulate that conduct to prevent harmful speech from circulating? It’s a tricky question, for sure. Especially in the context of platforms that are global in scope, that cut across multiple national jurisdictions, and which provide people with the ability to have effectively their own radio broadcast or their own newspaper – that was the original promise of the internet, of course. But now we’re living in it. And the results are not always pretty, I would say. 

York: I had the pleasure of working with you very early on in my career on a project called the OpenNet Initiative and your writings influenced a lot of my work. Can you tell our readers a little bit about that project and why it was important?

That was a phenomenal project, in hindsight. Like many things, you don’t really know what you’re doing until later on; many years later you can look back and reflect on the significance of it. And I think that’s the case here. Although we all understood we were doing interesting work, and we got some publicity around it, I don’t think we fully appreciated what exactly we were mounting, for better or for worse. My pathway to it was that I set up the Citizen Lab in 2001, and one of the first projects was building out some tests of internet censorship in China and Saudi Arabia. That was led by Nart Villeneuve. He had developed a technique to log onto proxy computers inside those countries and then do a kind of manual comparison. Then we read that Jonathan Zittrain and Ben Edelman were doing something similar, except Ben was doing it remotely over dialup and then running these tests. So we got together, decided we should collaborate, and put in a project proposal to the MacArthur Foundation and the Open Society Foundations. And that’s how the project got rolling. Of course Rafal [Rohozinski] was also involved then; he was at Cambridge University.

And we just started building it out, going down the roads that made logical sense to elaborate on the research. If you think about Ben and Nart doing slightly different things, the next step, if you wanted to improve upon it, was: okay, let’s build software that automates a lot of this, with a database on the back end holding the list of all the websites. At that time we couldn’t think of any other way to do it than to have people inside the country run these tests. I was actually thinking about this the other day: you were on Twitter, and you and I may have had an exchange at the time about whether we should put out a call on Twitter for volunteers to run these tests. We were debating the wisdom of that. It’s the kind of thing we would never do now, but back then we were like, “yeah, maybe we should.” There were obviously so many problems with the software, and a lot of growing pains around how to actually implement this. We didn’t really understand a lot of the ethical considerations until we were almost done. Then OONI (the Open Observatory of Network Interference) came along and took it to the next level, actually implementing some of the things that were being bandied about early on. Which brings me to Jonathan Zittrain [from here on, JZ].
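The comparison method Deibert describes – fetching the same URL from a vantage point inside the country and from an unfiltered control, then diffing the results – can be sketched roughly as follows. This is an illustrative reconstruction, not the actual ONI or OONI code; the function name, labels, and heuristics here are hypothetical.

```python
# A minimal sketch of the censorship-test comparison step described above.
# Each fetch result is a dict with 'status' (HTTP status code, or None if
# the connection failed) and 'body' (response text). Labels are illustrative.

def classify(control, in_country):
    """Compare a control fetch with an in-country fetch of the same URL."""
    # Connection succeeds from the control but fails in-country:
    # a reset or timeout suggests network-level blocking.
    if in_country["status"] is None and control["status"] is not None:
        return "connection blocked"
    # Explicit denial codes served only in-country.
    if control["status"] == 200 and in_country["status"] in (403, 451):
        return "http blocked"
    # Both return 200, but the in-country body looks like a filter blockpage.
    if (control["status"] == in_country["status"] == 200
            and "blocked" in in_country["body"].lower()
            and "blocked" not in control["body"].lower()):
        return "blockpage suspected"
    return "no filtering observed"

if __name__ == "__main__":
    cases = [
        ({"status": 200, "body": "<html>news</html>"},
         {"status": None, "body": ""}),
        ({"status": 200, "body": "<html>news</html>"},
         {"status": 200, "body": "This site is Blocked by order"}),
    ]
    for control, in_country in cases:
        print(classify(control, in_country))
```

A real deployment, as the interview notes, faces problems this sketch elides: proxies going up and down, blockpages that don’t contain obvious keywords, and the ethical questions around having in-country volunteers run the tests.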

So JZ had this idea – well, actually we both had the same idea separately, and didn’t realize it until we got together – which was a kind of SETI@home for internet censorship. That’s what OONI is now; if you go back, you can even see media interviews with both of us talking about something similar. At one point we launched something at the Lab called the Internet Censorship Explorer, with automated connections to proxy computers. People could go to a website and make a request: I want to test a website in Saudi Arabia, in Bahrain, or wherever. Of course the proxies would go up and down, and there were all sorts of methodological issues with relying on data from that, as well as ethical considerations that we would take into account now that we didn’t then. But that was a primitive version of OONI, and that was around 2003. So later on OONI comes along, and it just so happened that we were winding the project down for various reasons, and they took off at that time, and we just said, this is fantastic, let’s collaborate with them.

One more important thing: there was an early decision. We were meeting at Berkman – it was JZ, John Palfrey, myself, Rafal Rohozinski, Nart, and Ben Edelman. We were all in a room, and I said, “we should be doing tests not just for internet censorship but also surveillance.” And I can remember, with the Harvard colleagues, there was a lot of concern about that… about potentially getting into national security stuff. And I was like, “Well, what’s wrong with that? I’m all for that.” So that’s where we carved off a separate project at the Lab called the Information Warfare Monitor. And then we got into the targeted espionage work through that. In the end we had a ten-year run.

York: In your book Reset, you say there’s “no turning back” from social media. Despite all of the harms, you’ve taken the clear view that social media still has positive uses. Your book came out before Elon Musk took over Twitter and before the ensuing growth of federated social networks. Do you see this new set of spaces as being any different from what we had before?

Yeah, 100%. They’re the sort of thing I spoke about at the end of the book, where I said we need to experiment with platforms, or ways of engaging with each other online, that aren’t mediated through the business model of personal data surveillance – surveillance capitalism. Of course, here I am saying this to you, someone who’s been talking about this for years. I also think of Ethan Zuckerman, who’s been talking about this for ages. So this is nothing original to me; I’m just saying, “Hey, we don’t need to do it this way.”

Obviously, there are other models. And they may be a bit painful at first; they may have growing pains around getting to the level of engagement you need for it to cascade into something. That’s the trick, I think. The toxic mess of Twitter, by the way, pretty much aligns with what I wrote in Reset: you had someone come into the platform and loosen the reins around whatever constraints existed, in a desperate attempt to accelerate engagement, and it led to a whole toxic nightmare. People fled the platform and experimented with others, some of which are not based on surveillance capitalism. The challenge, of course, is to get that network effect – to get enough people to make it attractive to other people, so that more people come onboard.

I think that’s symptomatic of wider social problems as a whole, which really boil down to capitalism, at its core. And we’re kind of at a point where the limits of capitalism have been reached and the downsides are apparent to everybody, whether it’s ecological or social issues. We don’t really know how to get out of it. Like how would we live in something other than this? We can think about it hypothetically, but practically speaking, how do we manage our lives in a way that doesn’t depend on—you know, you can think about this with social media or you can think about it with respect to the food supply. What would it look like to actually live here in Toronto without importing avocados? How would I do that? How would we do that in my neighborhood? How would we do that in Toronto? That’s a similar kind of challenge we face around social media. How could we do this without there being this relentless data vacuum cleaning operation where we’re treated like livestock for their data farms? Which is what we are. How do we get out of that? 

York: We’re seeing a lot of impressive activism from the current youth generation. Do you see any positive online spaces for activism given the current landscape of platforms and the ubiquity of surveillance? Are there ways young people can participate in digital civil disobedience that won’t disenfranchise them?

I think it’s a lot harder to do online civil disobedience of the sort that we saw—and that I experienced—in the late 1990s and early 2000s. I think of Electronic Disturbance Theatre and the Zapatistas. There was a lot of experimentation with website defacement and DDoS attacks as political expression. There were some interesting debates going on around the Cult of the Dead Cow and Oxblood Ruffin and those sorts of people. I think today, the fine-grained surveillance net that is cast over people’s lives, right down to the biological layer, is so intense that it makes it difficult to do things without there being immediate consequences, or at least observation of what you’re doing. I think it induces more risk-averse behavior, and that’s problematic for sure.

There are many experiments happening, way more than I’m aware of. But I think it’s more difficult now to do things that challenge the established system when there’s this intense surveillance net cast around everything. 

York: What do you currently see as the biggest threat to the free and open internet?

Two things. One is the area that we do a lot of work in, which is targeted espionage. To encapsulate where we’re at right now: the most advanced mercenary surveillance firms are providing services to the most notorious abusers of human rights. The worst despots and sociopaths in the world, thanks to these companies, now have the ability to take over any device anywhere in the world without any visible indication to the victim that anything is wrong. One minute your phone is fine, and the next it’s not – and it’s streaming data a continent away to some tyrant. That’s really concerning for just about everything to do with rights and freedoms and any type of rules-based political order. Remarkably, even as we’re speaking, we’ve delivered two responsible disclosures to Apple just based on capturing the exploits that these mercenary surveillance companies use. That’s great, but there’s a period before those things are disclosed – typically about 100 days on average – when everyone in the world is vulnerable to this type of malfeasance. And what we are seeing, of course, is an epidemic of harms against vulnerable, marginalized communities, against political opposition, against lawyers. All the pillars of liberal, democratic society are being eroded because of this. So to me that’s the most acute threat right now.

The other one is AI-enabled disinformation. Easy-to-use platforms enabled a generation of coordinated, inauthentic campaigns that harass and intimidate and discredit people – and these are now industrialized, they’re becoming commodified, and, again, any sociopath in the world now has this at their fingertips. It’s extraordinarily destructive on so many levels.

Those two are the biggest concerns on my plate right now. 

York: You’ve been at the forefront of examining how tech actors use new technology against people—what are your ideas on how people can use new technology for good?

I’ve always thought that there’s a line running from the original idea of “hacktivism” that continues to today, which is about having a particular mindset with respect to technology: approaching the subject through an experimental lens, trying to think about creating technical systems that help support free expression, access to information, and basic human rights. That’s something very positive to me, and I don’t think it’s gone away. You can see it in the applications that people have developed and that are widely used; you can see it in the federated social media platforms that we spoke about before. So it’s this continuous struggle to adapt to a new risk environment by creating something and experimenting.

I think that’s something we need to cultivate more among young people: how to do this ethically. Unfortunately, the term “hacktivism” has been distorted. It’s become a pejorative, meaning somebody who is doing something criminal in nature. I define it in Reset, and in other books, as something I trace back – at least for me – to the American pragmatist position, à la John Dewey: we need to craft together something that supports the version of society we want to lean towards, that kind of technical-artifact-creating way of approaching the world. We don’t do that at the Lab any longer, but I think it’s something important to encourage.

York: Tell me about a personal experience you’ve had with censorship or with utilizing your freedom of expression for good.

We have been sued and threatened with lawsuits several times over our research. Typically these are corporations trying to silence us through strategic litigation. Even if they don’t have grounds to stand on, it’s a very powerful weapon for those actors to keep inconvenient information from coming forward. For example, Netsweeper: one morning I woke up and had in my email inbox a letter from their lawyer saying they were suing me personally for three million dollars. I can remember the moment I looked at that and just thought, “Wow, what’s next?” Obviously I consulted with the University of Toronto’s legal counsel, and the back and forth between the lawyers went on for several months. During that time we weren’t allowed to speak publicly about the case, and we couldn’t speak publicly about Netsweeper. Then, just at the very end, they withdrew the lawsuit. Fortunately, I’d instructed the team to do a kind of major capstone report on Netsweeper – find every Netsweeper device we can in the world and do a big report. That ended up being something called Planet Netsweeper. We couldn’t speak about it at the time, but I was teeing it up in the hope that we’d be able to publish, and fortunately we were. But had that gone differently – had they successfully sued us into submission – it would have killed my research and my career. And that’s not the first time that’s happened. So I really worry about the legal environment for doing this kind of adversarial research.

York: Who’s your free speech hero? 

There’s too many, it’s hard to pick one…I’ll say Loujain AlHathloul. Her bravery in the face of formidable opposition and state sanctions is incredibly inspiring. She became a face of a movement that embodies basic equity and rights issues: lifting the ban on women driving in Saudi Arabia. And she has paid, and continues to pay, a huge price for that activism. She is a living illustration of speaking truth to power. She has refused to submit and remain silent in the face of ongoing harassment, imprisonment and torture. She’s a real hero of free expression. She should be given an award – like an EFF Award!

Also, Cory Doctorow. I marvel at how he’s able to just churn this stuff out and always has a consistent view of things.