The promise of the internet was that it would be a tool to melt barriers and aid truth-seekers everywhere. But it feels like polarization has worsened in recent years, and more internet users are being misled into embracing conspiracies and cults.
You can also find this episode on the Internet Archive.
From QAnon to anti-vax screeds to talk of an Illuminati bunker beneath Denver International Airport, Alice Marwick has heard it all. She has spent years researching some dark corners of the online experience: the spread of conspiracy theories and disinformation. She says many people see conspiracy theories as participatory ways to be active in political and social systems from which they feel left out, building upon beliefs they already harbor to weave intricate and entirely false narratives.
Marwick speaks with EFF’s Cindy Cohn and Jason Kelley about finding ways to identify and leverage people’s commonalities to stem this flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out.
In this episode you’ll learn about:
- Why seemingly ludicrous conspiracy theories get so many views and followers
- How disinformation is tied to personal identity and feelings of marginalization and disenfranchisement
- When fact-checking does and doesn’t work
- Thinking about online privacy as a political and structural issue rather than something that can be solved by individual action
Alice Marwick is an Associate Professor in the Department of Communication and cofounder and Principal Researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill. She researches the social, political, and cultural implications of popular social media technologies. In 2017, she co-authored Media Manipulation and Disinformation Online (Data & Society), a flagship report examining far-right online subcultures’ use of social media to spread disinformation, for which she was named one of Foreign Policy magazine’s 2017 Global Thinkers. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age (Yale 2013), an ethnographic study of the San Francisco tech scene which examines how people seek social status through online visibility, and co-editor of The Sage Handbook of Social Media (Sage 2017). Her forthcoming book, The Private is Political (Yale 2023), examines how the networked nature of online privacy disproportionately impacts marginalized individuals in terms of gender, race, and socio-economic status. She earned a political science and women's studies bachelor's degree from Wellesley College, a Master of Arts in communication from the University of Washington, and a PhD in media, culture and communication from New York University.
I show people these TikTok videos that are about these kind of outrageous conspiracy theories, like that the Large Hadron Collider at CERN is creating a multiverse. Or that there's, you know, this pyramid of tunnels under the Denver airport where they're trafficking children and people kinda laugh at them.
They're like, this is silly. And then I'm like, this has 3 million views. You know, this has more views than probably most of the major news stories that came out this week. It definitely has more views than any scientific paper or academic journal article I'll ever write, right? Like, this stuff has big reach, so it's important to understand it, even if it seems kind of frivolous or silly, or, you know, self-evident.
It's almost never self-evident. There's always some other reason behind it, because people don't do things arbitrarily. They do things that help them make sense of their lives, that give their lives meaning. These are practices that people engage in because it means something to them. And so I feel like my job as a researcher is to figure out: what does this mean? Why are people doing this?
That’s Alice Marwick. The research she’s talking about is something that worries us about the online experience – the spread of conspiracy theories and misinformation. The promise of the internet was that it would be a tool that would melt barriers and aid truth-seekers everywhere. But sometimes it feels like polarization has worsened, and Internet users are misled into conspiracies and cults. Alice is trying to figure out why, how – and more importantly, how to fix it.
I’m Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.
This is our podcast series: How to Fix the Internet.
This is a topic that many of us have a personal connection to – so we started off our conversation with Alice by asking what drew her into this area of research.
So like many other people I got interested in mis- and disinformation in the run-up to the 2016 election. I was really interested in how ideas that had formerly been a little bit subcultural and niche in far-right circles were getting pushed into the mainstream and circulating really wildly and widely.
And in doing that research, it sort of helped me understand disinformation as a frame for understanding the way that information ties into marginalization more broadly. Disinformation is often a mechanism by which the stories that the dominant culture tells about marginalized people get circulated.
I think it's been a primary focus for a lot of people in a lot of ways over the last few years. I know I have spent a lot of time on alternative social media platforms over the last few years because I find the topics kind of interesting to figure out what's happening there. And also because I have a friend who has kind of entered that space and, uh, I like to learn, you know, where the information that he's sharing with me comes from, essentially, right. But one thing that I've been thinking about with him and with other folks is, is there something that happened to him that made him kind of easily radicalized, if you will? And I, I don't think that's a term that you recommend using, but I think a lot of people just assume that that's something that happens.
That there are people who, um, you know, grew up watching the X-files or something and ended up more able to fall into these misinformation and disinformation traps. And I'm wondering if that's, if that's actually true. It seems like from your research, it's not.
It's not, and that's because there's a lot of different things that bring people to disinformation, because disinformation is really deeply tied to identity in a lot of ways. There's lots of studies showing that more or less, every American believes in at least one conspiracy theory, but the conspiracy theory that you believe in is really based on who you are.
So in some cases it is about identity, but I think the biggest misconception about disinformation is that the people who believe it are just completely gullible and that they don't have any critical thinking skills and that they go on YouTube and they watch a video or they listen to a podcast and all of a sudden their entire mindset shifts.
So why is radicalization not the right term? How do you think about this term and why you've rejected it?
The whole idea of radicalization is tied up in this countering violent extremism movement that is multinational, that is tied to this huge surveillance apparatus, to militarization, to, in many ways, a very Islamophobic idea of the world. People have been researching why individuals commit political violence for 50 years, and they haven't found any individual characteristics that make someone more susceptible to doing something violent, like committing a mass shooting or participating in the January 6th insurrection, for example. What we see instead is that there's a lot of different puzzle pieces that can contribute to whether somebody takes on an ideology, and whether they commit acts of violence in service of that ideology.
And I think the thing that's frustrating to researchers is sometimes the same thing can have two completely different effects in people. So there's this great study of women in South America who were involved in guerilla warfare, and some of those women, when they had kids, they were like, oh, I'm not gonna do this anymore.
It's too dangerous. You know, I wanna focus on my family. But then there was another set of women that when they had kids, they felt they had more to lose and they had to really contribute to this effort because it was really important to the freedom of them and their children.
So when you think about radicalization, there's this real desire to have this very simplistic pathway that everybody kind of just walks along and they end up a terrorist. But that's just not the way the world works.
The second reason I don't like radicalization is because white supremacy is baked into the United States from its inception. And white supremacist ideas and racist ideas are pretty foundational. And they're in all kinds of day-to-day language and media and thinking. And so why would we think it's radical to be, for example, anti-black or anti-trans when anti-blackness and anti-transness have like these really long histories?
Yeah, I think that's right. And there is a way in which radicalization makes it sound as if that's something other than our normal society. In many instances, that's not actually what's going on.
There's pieces of our society, the water we swim in every day, that are playing a big role in some of this stuff that ends up in a very violent place. And so by calling it radicalization, we're kind of creating an other that we're not a part of, and I think that will mean that we might miss some of the pieces of this.
Yeah, and I think that when we think about disinformation, the difference between a successful and an unsuccessful disinformation campaign is often whether or not the ideas exist in the culture already. One of the reasons QAnon, I think, has been so successful is that it picks up a lot of other pre-circulating conspiracy theories.
It mixes them with anti-Semitism, it mixes them with homophobia and transphobia, and it kind of creates this hideous concoction, this like potion that people drink that reinforces a lot of their preexisting beliefs. It's not something that comes out of nowhere. It's something that's been successful precisely because it reinforces ideas that people already had.
I think the other thing that I saw in your research that might have been surprising, or at least was a little surprising to me, is how participatory QAnon is.
You took a look at some of the QAnon conversations, and you could see people pulling in pieces of knowledge from other things, you know, flight patterns and unexplained deaths and other things. It's something that they're co-creating, um, which I found fascinating.
It's really similar to the dynamics of fandom in a lot of ways. You know, any of us who have ever participated in, like, a Usenet group or a subreddit about a particular TV show, know that people love putting theories together. They love working together to try to figure out what's going on. And obviously we see those same dynamics at play in a lot of different parts of internet culture.
So it's about taking the participatory dynamics of the internet and sort of mixing them with what we're calling conspiratorial literacy, which is sort of the ability to assemble these narratives from all these disparate places, to kind of pull together, you know, photos and Wikipedia entries and definitions and flight paths and, you know, news stories into these sort of narratives that are really hard to make coherent sometimes, ‘cause they get really complicated.
But it's also about a form of political participation. I think there's a lot of people in communities where disinformation is rampant, where they feel like talking to people about QAnon or anti-vaxxing or white supremacy is a way that they can have some kind of political efficacy. It's a way for them to participate, and sometimes I think people feel really disenfranchised in a lot of ways.
I wonder because you mentioned internet culture, if some of this is actually new, right? I mean, we had satanic panics before and something I hear a lot of in various places is that things used to be so much simpler when we had four television channels and a few news anchors and all of them said the same thing, and you couldn't, supposedly, you couldn't find your way out into those other spaces. And I think you call this the myth of the epistemically consistent past. Um, and is that real? Was that a real time that actually existed?
I mean, let's think about who that works for, right? If you're thinking about like 1970, let's say, and you're talking about a couple of major TV networks, no internet, you know, your main interpersonal communication is the telephone. Basically, what the mainstream media is putting forth is the narrative that people are getting.
And there's a very long history of critique of the mainstream media, of putting forth a narrative that's very state sponsored, that's very pro-capitalist, that writes out the histories of lots and lots of different types of people. And I think one of the best examples of this is thinking about the White Press and the Black Press.
And the Black Press existed because the White Press didn't cover stories that were of interest to the black community, or they strategically ignored those stories. Like the Tulsa Race massacre, for example, like that was completely erased from history because the white newspapers were not covering it.
So when we think about an epistemically consistent past, we're thinking about the people who that narrative worked for.
I really appreciate this point. To me, what was exciting about the internet and, you know, I'm a little older, I was alive during the seventies, um, and watched Walter Cronkite. And, you know, this idea that old white guys in New York get to decide what the rest of us see, which is, that's who ran the networks, right?
That, you know, and maybe we had a little PBS, so we got a little Sesame Street too.
But the promise of the Internet was that we could hear from more and more diverse voices, and reduce the power of those gatekeepers. What is scary is that some people are now pretty much saying that the answer to the problems of today’s Internet is to find four old white guys and let them decide what all the rest of us see again.
I think it's really easy to blame the internet for the ills of society, and I, I guess I'm a digital critic, but I'm ultimately, I love the internet, like I love social media. I love the internet. I love online community. I love the possibilities that the internet has opened up for people. And when I look at the main amplifiers of disinformation, it's often politicians and political elites whose platforms are basically independent of the internet.
Like people are gonna cover, you know, leading politicians regardless of what media they're covering them with. And when you look at something like the lies around the Dominion voting machines, like, yes, those lies start in these really fringy internet communities, but they're picked up and amplified incredibly quickly by mainstream politicians.
And then they're covered by mainstream news. So who's at fault there? I think that blaming the internet really ignores the fact that there's a lot of other players here, including the government, you know, politicians, these big mainstream media sources. And it's really convenient to blame all social media or just the entire internet for some of these ills, but I don't think it's accurate.
Well, one of the things that I saw in your research, and our friend Yochai Benkler has done in a lot of things, is the role of amplifiers, right? These places where people, you know, agree about things that aren't true and converse about things that aren't true, they predate the internet. Maybe the internet gave a little juice to them, but what really gives juice to them is these amplifiers who, as I think you rightly point out, are some of the same people who were the mainstream media controllers in that hazy past of yore. I think that if this stuff never makes it to more popular amplifiers, I don't think it becomes the kind of thing that we worry about nearly so much.
Yeah, I mean, when I was looking at white supremacist disinformation in 2017, someone I spoke with pointed out that the mainstream media is the best recruitment tool for white supremacists because historically it's been really hard for white supremacists to recruit. And I'm not talking about like historically, like in the thirties and forties, I'm talking about like in the eighties and nineties when they had sort of lost a lot of their mainstream political power.
It was very difficult to find like-minded people, especially if people were living in places that were a little bit more progressive or were multiracial. Most people, in reading a debunking story in the Times or the Post or whatever, about white supremacist ideas are going to disagree with those ideas.
But even if one in a thousand believes them and is like, oh wow, this is a person who's spreading white supremacist ideas, I can go to them and learn more about it. That is a far more powerful platform than anything that these fringe groups had in the past. And one of the things that we've noticed in our research is that often conspiracy theories go mainstream precisely because they're being debunked by the mainstream media.
Wow. So there's two kinds of amplifiers. There's the amplifiers who are trying to debunk things and accidentally perhaps amplify. But there are, there are people who are intentional amplifiers as well, and that both of them have the same effect, or at least both of them can spread the misinformation.
Yeah. I mean, of course, debunking has great intentions, right? We don't want horrific misinformation and disinformation to go and spread unchecked. But one of the things that we noticed when we were looking at news coverage of disinformation was that a lot of the times the debunking aspect was not as strong as we would've expected.
You know, you would expect a news story saying, this is not true, this is false, the presumptions are false. But instead, you'd often get these stories where they kind of repeated the narrative and then at the end there was, you know, this is incorrect. And the false narrative is often much more interesting and exciting than whatever the banal truth is.
So I think a lot of this has to do with the business model of journalism, right? There's a real need to comment on everything that comes across Twitter, just so that you can get some of the clicks for it. And that's been really detrimental, I think, to journalists who have the time and the space to really research things and craft their pieces.
You know, it's an underpaid occupation. They're under a huge amount of economic and time pressure to like get stories out. A lot of them are working for these kind of like clickbaity farms that just churn out news stories on any hot topic of the day. And I think that is just as damaging and dangerous as some of these social media platforms.
So when it comes to debunking, there's a sort of parallel, which is fact checking. And, you know, I have tried to fact check people myself, um, individually. It doesn't seem to work. Does it work when it's kind of built into the platform, as we've seen in different spaces like Facebook, or Twitter with the Community Notes they're testing out now?
Or does that also kind of amplify it in some way because it just serves to upset, let's say, the people who have already decided to latch onto the thing that is supposedly being fact checked.
I think fact checking does work in some instances. If it's about things that people don't already have, like a deep emotional attachment to. I think sometimes also if it's coming from someone they trust, you know, like a relative or a close friend, I think there are instances in which it doesn't get emotional and people are like, oh, I was wrong about that, that's great. And then they move on.
When it's something like Facebook where, you know, there's literally like a little popup saying, you know, this is untrue. Oftentimes what that does is it just reinforces this narrative that the social platforms are covering things up and that they're biased against certain groups of people because they're like, oh, Facebook only allows for one point of view.
You know, they censor everybody who doesn't believe X, Y, or Z. And the thing is that I think both liberals and conservatives believe that, obviously the narrative that social platforms censor conservatives is much stronger. But if you look at the empirical evidence, conservative stories perform much better on social media, specifically Facebook and Twitter, than do liberal stories.
So it kind of makes nobody happy. I don't think we should be amplifying, especially extremist views or views that are really dangerous. And I think that what you wanna do is get rid of the lowest-hanging fruit. You don't wanna convert new people to these ideas. There might be some people who are already so enmeshed in some of these communities that it's gonna be hard for them to find their way out, but let's try to minimize the number of people who are exposed to it.
That's interesting. It sounds like there are some models of fact checking that can help, but it really more applies to the type of information that's being, uh, fact checked than, than the specific way that the platform kind of sets it up. Is that what I'm hearing? Is that right?
Yeah, I mean, the problem is with a lot of, a lot of people online, I bet if you ask 99 people, if they consider themselves to be critical thinkers, 95 would say, yes, I'm a critical thinker. I'm a free thinker.
A low estimate, I'm pretty sure.
A low estimate. So let's say you ask a hundred people and 99 say they're critical thinkers. Um, you know, I interview a lot of people who have sort of what we might call unusual beliefs, and they all claim that they do fact checking, and that when they hear something, they want to see if it's true.
And so they go and read other perspectives on it. And obviously, you know, they're gonna tell the researcher what they think I wanna hear. They're not gonna be like, oh, I saw this thing on Facebook and then I, like, spread it to 2000 people, and then it turned out it was false. Um, but especially in communities like QAnon or the anti-vaxxers, they already think of themselves as, like, researchers.
A lot of people who are into conspiracy theories think of themselves as researchers. That's one of their identities. And they spend quite a bit of time going down rabbit holes on the internet, looking things up and reading about it. And it's almost like a funhouse mirror held up to academic research because it is about the pleasure of learning, I think, and the joy of sort of educating yourself and these sort of like autodidactic processes where people can kind of learn just for the fun of learning. Um, but then they're doing it in a way that's somewhat divorced from what I would call sort of empirical standards of data collection or, you know, data assessment.
So, let's flip it around for a second. What does it look like if we are doing this right? What are the things that we would see in our society and in our conversations that would indicate that we're, we're kind of on the right path, or that we're, we're addressing this?
Well, I mean, the problem is this is a big problem. So it requires a lot of solutions. A lot of different things need to be worked on. You know, the number one thing I think would be toning down, you know, violent political rhetoric in general.
Now how you do that, I'm not sure. I think it comes from, you know, there's this kind of window of discourse that's open that I think needs to be shut, where maybe we need to get back to slightly more civil levels of discourse. That's a really hard problem to solve. In terms of the internet, I think right now there's been a lot of focus on the biggest social media sites, and I think that what's happening is you have a lot of smaller social sites, and it's much more difficult to play whack-a-mole with a hundred different platforms than it is with three.
Given that we think that a pluralistic society is a good thing and we shouldn't all be having exactly the same beliefs all the time. How do we nurture that diversity without, you know, without the kind of violent edges? Or is it inevitable? Is there a way that we can nurture a pluralistic society that doesn't get to this us versus them, what team are you on kind of approach that I think underlies some of the spilling into violence that we've seen?
This is gonna sound naive, but I do think that there's a lot more commonalities between people than there are differences. So I interviewed a woman who's a conservative evangelical anti-vaxxer last week, and, you know, she and I don't have a lot in common in any way, but we had, like, a very nice conversation. And one of the things that she told me is that she has this one particular interest that's brought her into conversation with a lot of really liberal people.
And so because she's interacted with a lot of them, she knows that they're not, like, demonic or evil. She knows they're just people. They have really different opinions on a lot of really serious issues, but they're still able to sort of chat about the things that they do care about.
And I think that if we can trace those lines of inclusion and connectivity between people, that's a much more positive area for growth than just constantly focusing on the differences. And that's easy for me to say as a white woman, right? Like it's much harder to deal with these differences if the difference in question is that the person thinks you're, you know, genetically inferior or that you shouldn't exist.
Those are things that are not easy. You can't just kumbaya your way out of those kinds of things. And in that case, I think we need to center the concerns of the most vulnerable and of the most marginalized, and make sure they're the ones whose voices are getting heard and their concerns are being amplified, which is not always the case, unfortunately.
So let's say that we got to that point and um, you know, the internet space that you're on isn't as polarized, but it's pluralistic. Can you describe a little bit about what that feels like in your mind?
I think one thing to remember is that most people don't really care about politics. You know, a lot of us are kind of Twitter obsessed and we follow the news and we see our news alerts come up on our phone and we're like, Ooh, what just happened? Most people don't really care about that stuff. If you look at a site like Reddit, which gets a bad rap, but I think Reddit is just like a wonderful site for a lot of reasons.
It's mostly focused around interest-based communities, and the vast, vast majority of them are not about politics. They're about all kinds of other things. You know, very mundane stuff. Like you have a dog or a cat, or you like The White Lotus and you wanna talk about the finale. Or, you know, you live in a community and you want to talk about the fact that they're building a new McDonald's on, like, Route Six or whatever.
Yes, in those spaces you'll see people get into spats and you'll see people get into arguments and in those cases, there's usually some community moderation, but generally I think a lot of those communities are really healthy and positive. The moderators put forth like these are the norms.
And I think it's funny, I think some people would say Reddit is uplifting, but I think you see the same thing in some Facebook groups as well, um, where you have people who really love, like, quilting. Or I'm in dozens and dozens of Facebook groups on all kinds of weird things.
Like, “I found this weird thing at a thrift store,” or “I found this painting, you know, what can you tell me about it?” And I get such a kick out of seeing people from all these walks of life come together and talk about these various interests. And I do think that, you know, that's the utopian ideal of the internet that I think got us all so into it in the eighties and nineties.
This idea that you can come together with people and talk about things that you care about, even if you don't have anyone in your local immediate community who cares about those same things, and we've seen over and over that, that can be really empowering for people. You know, if you're an LGBTQ person in an area where there aren't that many other LGBTQ people, or if you're a black woman and you're the only black woman at your company, you know, you can get resources and support for that.
If you have an illness that isn't very well understood, you know, you can do community education on that. So, you know, these pockets of the internet, they exist and they're pretty big. And when we just constantly focus on this small minority of people who are on Twitter, you know, yelling at each other about stuff, I think it really overlooks the fact that so much of the internet is already this place of, like, enjoyment and, you know, hope.
Oh, that is so right, and so good to be reminded of: it's not that we have to fix the internet, it's that we have to grow the part of the internet that never got broken. Right? That part is already fixed.
Let’s take a quick moment to say thank you to our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
Now back to our conversation with Alice Marwick. In addition to all of her fascinating research on disinformation that we’ve been talking about so far, Alice has also been doing some work on another subject very near and dear to our hearts here at EFF – privacy.
Alice has a new book coming out in May 2023 called The Private is Political – so of course we couldn’t let her go without talking about that.
I wanted to look at how you can't individually control privacy anymore, because all of our privacy is networked because of social media and big data. We share information about each other, and information about us is collected by all kinds of entities.
You know, you can configure your privacy settings till the cows come home, but it's not gonna change whether your photo gets swept up in, you know, some AI that then uses it for other kinds of purposes. And the second thing is to think about privacy as a political issue that has big impacts on everyone's lives, especially people who are marginalized in other areas.
I interviewed, oh, people from all kinds of places and spaces with all sorts of identities, and there's this really big misconception that people don't care about privacy. But people care very deeply about privacy, and the way that they show that care manifests in so many different kinds of creative ways.
And so I'm hoping, I'm looking forward to sharing the stories of the people I spoke with.
That's great. Can you tell us one or I, I don't wanna spoil it, but -
Yeah, no. So I spoke with Jazz in North Carolina. These are all pseudonyms. And Jazz is an atheist, gender queer person, and they come from a pretty conservative Southern Baptist family and they're also homeless. They have a child who lives with their sister and they get a little bit of help from their family, like, not a lot, but enough that it can make the difference between whether they get by or not.
So what they did is they created two completely different sets of internet accounts. They have two Facebooks, two Twitters, two email addresses. Everything is different and it's completely firewalled. On one, they use their preferred name and their pronouns. On the other, they use the pronouns they were assigned at birth and the name that their parents gave them. The contrast between the two was just extreme. And Jazz said that the Facebook page that really reflects them, that's their “me” page. That's where they can be who they really are, because they have to kind of cover up who they are in so many other areas of their lives.
So they get this sort of big kick out of having this space on the internet where they can be like fiery and they can talk about politics and gender and things that they care about, but they have a lot to lose if the, if that, you know, seeps into their other life. So they have to be really cognizant of things like who does Facebook recommend that you friend, you know, who might see my other email address, who might do a Google search for my name?
And so I call this privacy work. It's the work that all of us do to maintain our privacy, and we all do it, but it's just much more intense for some kinds of people. And so I see in Jazz, you know, a lot of these themes: somebody who is suffering from intersectional forms of marginalization, but is still kind of doing the best they can.
And, you know, moving forward in the world, somebody who's being very creative with the internet, they're using it in ways that none of the designers or technologists ever intended, and they're helping it work for them, but they're also not served well by these technologies because they don't have the options to set the technologies up in ways that would fit their life or their needs.
Um, and so what I'm really calling for here is to, rather than thinking about privacy as individual, as something we each have to solve, as seeing it as a political and a structural problem that cannot be solved by individual responsibility or individual actions.
I so support that. That is certainly what we've experienced in the world as well, you know, the fight against the Real Names policy, say at Facebook, which really impacted the LGBTQ and trans communities especially, because people are changing their names, right? And that's important.
This real names policy, you know, first of all it's based on not good science, this idea that if you attach people's names to what they say, they will behave better. Which is, you know, belied by all of Facebook, and it doesn't have any science behind it at all. But there are also these negative effects for people's safety. You know, we work with a lot of domestic violence victims, and being able to separate out one identity from another is tremendously important, and again, can matter for people's very lives. Or it could just be like, you know, when I'm Cindy at the dog park, I'm not interested in being Cindy who's the ED of EFF. Being able to segment out your life and show up as different people, there's a lot of power in that, even if it's not, you know, necessary to save your life.
Yeah, absolutely. Sort of that, that ability to maintain our social roles and to play different aspects of ourselves at different times. That's like a very human thing, and that's sort of fundamental to privacy. It's what parts of yourself do you wanna reveal at any given time. And when you have these huge sites like Facebook where they want a real name and they want you to have a persistent identity, it makes that really difficult.
Whereas sites like Reddit where you can have a pseudonym and you can have 12 accounts and nobody cares, and the site is totally designed to deal with that. You know, that works a lot better with how most people, I think, want to use the internet.
What other things do you think we can do? I mean, I'm assuming that we need some legal support here as well as technical support for a more private internet, really a more privacy-protective internet.
I mean, we need comprehensive data privacy laws.
The fact that every different type of personal information is governed differently and some aren't governed at all. The fact that your email is not private, that, you know, anything you do through a third party is not private, whereas your video store records are private.
That makes no sense whatsoever. You know, it's just this complete amalgam. It doesn't have any underlying principle whatsoever. The other thing I would say is data brokers. We gotta get 'em out. We gotta get rid of them. You shouldn't be able to collect data for one purpose and then use it for God knows how many other purposes.
I think, you know, I was very happy under the Obama administration to see that the FTC was starting to look into data brokers. It seems like we lost a lot of that energy during the Trump administration, but you know, to me they're public enemy number one. Really don't like 'em.
We are with you. And you know this isn’t new – as early as 1973 the federal government developed something called the Fair Information Practice Principles that included recognizing that it wasn’t fair to collect data for one purpose and then use it for another without meaningful consent – but that’s the central proposition that underlies the data broker business model. I appreciate that your work confirms that those ideas are still good ones.
Yeah, I think there's sort of a group of people doing critical privacy and critical surveillance studies, um, a more diverse group of people than we've typically seen studying privacy. For a long time it was just sort of the domain of, you know, legal scholars and computer scientists. And so now that it's being sort of opened up to qualitative analysis and sociology and other forms, you know, I think we're starting to see a much more comprehensive understanding, which hopefully at some point will, you know, affect policy making and technology design as well.
Yeah, I sure hope so. I mean, I think we're in a time when our US Supreme Court is really not grappling with privacy harms and is effectively making it harder and harder to at least use the judicial remedies to try to address privacy harm. So, you know, this development of the rest of society and people's thinking about eventually, I think, will leak over into, into the judicial side.
But one of the things that a fixed internet would give us is the ability to have actual accountability for privacy harms, at a level that's much better than what we have now. And the other thing I hear you really developing out is that maybe the individual model, which is kind of inherent in a lot of litigation, isn't really the right model for thinking about how to remedy all of this either.
Well, a lot of it is just theatrical, right? It reminds me of, you know, security theater at the airport. Like the idea that clicking through a 75-page terms of service change, written at a level that would require a couple of years of law school to understand, means anything. If you actually sat and read those, it would take up like two weeks of your life every year.
Like that is just preposterous. Like, nobody would sit and be like, okay, well here's a problem. What's the best way to solve it? It's just a loophole that allows companies to get away with all kinds of things that I think are, you know, unethical and immoral by saying, oh, well we told you about it.
But I think often what I hear from people is, well, if you don't like it, don't use it. And that's easy to say if you're talking about something that is, you know, an optional extra to your life. But when we're talking about the internet, there aren't other options. And I think what people forget is that the internet has replaced a lot of technologies that kind of withered away. You know, I've driven across country three times, and the first two times was kind of pre-mobile internet, or pre-ubiquitous internet. And you had a giant road atlas in your car. Every gas station had maps and there were payphones everywhere. You know, now most payphones are gone. You go to a gas station and ask for directions, they're gonna look at you blankly, and no one has a road atlas. There are all these infrastructures that existed pre-internet that allowed us to exist without smartphones and the internet, and now most of those are gone. What are you supposed to do if you're in college and you're not using, at the very least, your course management system, which is probably already, you know, collecting information on you and possibly selling it to a third party?
You can't pass your class. If you're not joining your study group, which might be on Facebook or WhatsApp or whatnot, you can't communicate with people. It's absolutely ridiculous that we're just saying, oh, well, if you don't like it, don't use it.
It's like telling somebody who's being targeted by a murderous sociopath, oh, just don't go outside, right? Just stay inside all the time. That's just terrible advice, and it's not realistic.
No, I think that is true, and certainly when trying to find a job. I mean, there are benefits to the fact that all of this stuff is networked, but it really does shine a light on the fact that this terms of service approach treats these things as if they were freely negotiated contracts, like I learned in law school, with two equal parties having a negotiation and coming to a meeting of the minds. It's a whole other planet from that approach.
And to try to bring that frame to, you know, whether you enforce those terms or not, is, it's jarring to people. It's not how people live. And so it feels this way in which the legal system is kind of divorced from, from our lives. And, and if we get it right, the legal terms and the things that we are agreeing to will be things that we actually agree to, not things that are stuffed into a document that we never read or we really realistically can't read.
Yeah, I would love it if the terms of service was an actual contract and I could sit there and be like, all right, Facebook, if you want my business, this is what you have to do for me. And make some poor entry level employees sit there and go through all my ridiculous demands. Like, sure, you want it to be a contract, then I'm gonna be an equal participant.
You want those green m and ms in the green room?
Yeah, I want, I want different content moderation standards. I want a pony, I want glittery gifs on every page. You know, give it all to me.
Yeah. I mean, you know, there's a way in which a piece of the fediverse strategy, which I think we're kind of at the beginning of in this moment, is a little bit of that: you have a smaller community, you have people who run the servers, who you can actually interact with.
I mean, again, I don't know that there's ponies, but, um, you know, one of the things that will help get us there is smaller, right? We can't do content moderation at scale, and we can't do contractual negotiations at scale. So smaller might be helpful, and I don't think it's gonna solve all the problems.
But I think there's a way in which you can at least get your arms around the problem if you're dealing with a smaller community that can then interoperate with other communities, but isn't beholden to them with one rule to rule them all.
Yeah, I mean, I think the biggest problem right now is we need to get around usability and UX. These platforms need to be just as easy to use as the easiest social platform. You know, it needs to be something where, if you don't have a college education, if you're not super techy, if you're only familiar with very popular social media platforms, you're still able to use things like Mastodon.
I don't think we're quite there yet, but I can see a future in which we get there.
Well thank you so much for continuing to do this work.
Oh, thank you. Thank you, Cindy. Thank you, Jason. It was great to chat today.
I'm so glad we got to talk to Alice. That was a really fun conversation and one that I think really underscored a point that I've noticed, um, which is that over the last, I don't know, many years we've seen Congress and other legislators try to tackle these two separate issues that we talked with Alice about.
One being sort of like content on the internet, and the other being privacy on the internet. And when we spoke with her about privacy, it was clear that there are a lot of obvious and simple and direct solutions that can inform how we make privacy on the internet something that actually exists, compared to content, which is a much stickier issue.
And it's interesting that Congress and other legislators have consistently focused on one of these two topics, or let's say both of them, at the expense of the one that actually is fairly direct when it comes to solutions. That really sticks out for me, but I'm wondering, I've blathered on, what do you find most interesting about what we talked with her about? There was a lot there.
Well, I think that Alice does a great service to all of us by pointing out all the ways in which the kind of easy solutions that we reach to, especially around misinformation and disinformation and easy stories we tell ourselves are not easy at all and not empirically supported. So I think one of the things she does is just shine a light on the difference between the kind of stories we tell ourselves about how we could fix some of these problems and the actual empirical evidence about whether those things will work or not.
The other thing that I appreciated is she kind of pointed to spaces on the internet where things are kind of fixed. She talked about Reddit, she talked about some of the fan fiction places, she talked about Facebook groups, pointing out that, you know, sometimes we can be overly focused on politics and the darker pieces of the internet, and that these places that are supportive and loving and good communities that are doing the right thing, they already exist.
We don't have to create them, we just have to find a way to foster them, um, and build more of them, make more of the internet that experience. But it's refreshing to realize that, you know, massive pieces of the internet were never broken, um, and don't need to be fixed.
That is 100% right. We're sort of tilted, I think, to focus on the worst things, which is part of our job at EFF. But it's nice when someone says, you know, there are actually good things. And it reminds us that in a lot of ways it's working, and we can make it better by focusing on what's working.
Well that’s it for this episode of How to Fix the Internet.
Thank you so much for listening. If you want to get in touch about the show, you can write to us at firstname.lastname@example.org or check out the EFF website to become a member, donate, or look at hoodies, t-shirts, hats and other merch, just in case you feel the need to represent your favorite podcast and your favorite digital rights organization.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time in two weeks
I’m Jason Kelley
Probably Shouldn’t by J.Lang featuring Mr_Yesterday
CommonGround by airtone featuring: simonlittlefield
Additional beds and alternate theme remixes by Gaëtan Harris