Too many young people – particularly young people of color – lack the familiarity and experience with emerging technologies needed to recognize how artificial intelligence can impact their lives, for harm or for empowerment. Educator Ora Tanner saw this and rededicated her career to promoting tech literacy, changing how we understand data sharing and surveillance, and teaching how AI can be both a dangerous tool and a powerful one for innovation and activism.

By now her curricula have touched more than 30,000 students, many of them in her home state of Florida. Tanner also went to bat against the Florida Schools Safety Portal, a project to amass enormous amounts of data about students in an effort to predict and avert school shootings – and a proposal rife with potential biases and abuses.

Tanner speaks with EFF's Cindy Cohn and Jason Kelley on teaching young people about the algorithms that surround them, and how they can make themselves heard to build a fairer, brighter tech future.



Listen on Apple Podcasts, listen on Spotify, or subscribe via RSS.
You can also listen to this episode on the Internet Archive and on YouTube.

In this episode you’ll learn about:

  • Convincing policymakers that AI and other potentially invasive tech isn’t always the answer to public safety problems.
  • Bringing diverse new voices into the dialogue about how AI is designed and used.
  • Creating a culture of searching for truth rather than just accepting whatever information is put on your plate.
  • Empowering disadvantaged communities not only through tech literacy but by teaching informed activism as well.

Ora Tanner is co-founder and former chief learning officer at The AI Education Project, a national non-profit centering equity and accessibility in AI education; she is also an Entrepreneur-in-Residence with Cambiar Education. She has presented at numerous academic conferences, summits, and professional development trainings, and spoken on panels as an EdTech expert on topics related to AI, education, emerging technologies, and designing innovative learning experiences. She earned both a B.S. and an M.S. in physics and completed coursework toward a Ph.D. in instructional technology at the University of South Florida.

Music

Music for How to Fix the Internet was created for us by Reed Mathis and Nat Keefe of BeatMower.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 

  • Meet Me at Phountain by gaetanh (c) copyright 2022
    http://ccmixter.org/files/gaetanh/64711
  • Hoedown at the Roundabout by gaetanh (c) copyright 2022
    http://ccmixter.org/files/gaetanh/64711
  • JPEG of a Hotdog by gaetanh (c) copyright 2022
    http://ccmixter.org/files/gaetanh/64711
  • reCreation by airtone (c) copyright 2019
    http://dig.ccmixter.org/files/airtone/59721

Resources

Student and School Surveillance:

Bias in AI and Machine Learning:

Predictive Policing:

Transcript

Ora: I had the opportunity to give public comment at the Marjory Stoneman Douglas High School Commission. And they're the committee that's responsible for the legislation that's being passed for school shootings because of what took place in Parkland.

When I went up to the mic, I actually felt somewhat intimidated because the entire room was all police officers and it's all white males. We were there to present our feedback regarding the Florida Schools Safety Portal, which is a project they were starting that was just collecting massive amounts of data.

Basically the portal was to predict the probability of a student being on the path to being a school shooter. And so they thought using AI technology was a way to do this.

It was problematic because 58% of the students in the state are black and Hispanic, and it's the majority of their information that's in those databases. So if you're using those data sources, it's most likely it's going to be a black or Hispanic person that's going to get a high probability score.

Cindy: That's Ora Tanner. She's an educator and an entrepreneur working to change how we understand data sharing and surveillance, and especially its impact on young people.

Jason: Ora's going to talk with us about educating young people on artificial intelligence and how she's tackling the ways that algorithms both harm and empower young people. This is How to Fix the Internet, a podcast of the Electronic Frontier Foundation. I'm Jason Kelley, a digital strategist at EFF sitting in for Danny O'Brien.

Cindy: And I'm Cindy Cohn, EFF's executive director.

Jason: Welcome to How to Fix the Internet.

Cindy: Ora, thank you so much for joining us today. Now, before you started your latest venture, you were a classroom teacher for a while and I think that woke you up to some of the issues that you're focusing on now. Can you talk to us about what happened and how you became committed to teaching students about AI and surveillance?

Ora: I have actually taught at every grade level from preschool through college, but it was while I was at a school here in Tampa, where I'm located, teaching eighth grade science. They just had all this different technology in all the classrooms, but none of the teachers were really using it.

There were two things. I noticed the students didn't really have any familiarity or experience with them, which to me was very troubling because I'm like, "Hey, this is eighth grade. You should have some minimal types of experiences with technology." But then I also saw the power when I showed them like, "This is how it can be used. You can use it to create." So that inspired me to go back to school to pursue my PhD in instructional technology.

Cindy: So how did you end up researching specifically AI and then surveillance? 

Ora: During my PhD work I had the opportunity to have a fellowship with the Aspen Tech Policy Hub out in San Francisco. During that time I really was taking just a deep dive into artificial intelligence, then I got into algorithmic bias and then I just realized, "Hey, this bias seems to always be against the same group of people: poor people, black people, women."

And so the first thing that came to my mind was my students. And I'm like, "If my students don't know about this and how it works and what the plans are of the people who are creating this technology, they're just going to get slammed by the future, and they're not going to know how to navigate or understand." And just in my head I had an image like if a student's trying to get a job, but they don't understand the bias in applicant tracking systems, they'll wonder, "Hey, why don't I ever get a call back? Or why does it..." It will just seem like some ubiquitous force that they won't really understand. So that's when I got the idea to start creating a curriculum or learning experiences to teach students about emerging technologies, especially AI.

Jason: And one of the things that you found was that there was surveillance within the school system, right? Something in Florida in particular. Can you talk a little bit about that kind of surveillance that you found?

Ora: So as I was doing my research, I came up on a project that they were working on in the state called the Florida Schools Safety Portal. And I say that in air quotes.

Because Parkland had happened a couple years prior, everyone was on high alert, and I guess in order to try to prevent that, the solution at the state level was to create this database that just had massive amounts of data, like their juvenile justice records, grades in school, whether they've been suspended. Just different types of data from all these databases, and they're going to put it into one huge one, which raised a lot of privacy concerns with the sharing of data across all these different organizations and agencies. But they were going to use it to predict the probability of a child being the next school shooter, which is just mind blowing that they thought they would be able to do that, just from a logistical point of view.

And so I was able to kind of bring a lot of attention to it. And I even spoke at one of their commission meetings to just tell them, "Hey, these are the dangers of going down this route. These are the unintended consequences. These are the harms that could happen by trying to pursue this path of trying to stop school shooting."

Cindy: There are problems in kind of two ways. And I'd love for you to unpack them for people, because I think sometimes people think intuitively that this is a good idea, and I think you've done a good job when I've watched you, unpacking I think both sides like A, it doesn't work, and B, it's really dangerous in other ways as well.

Ora: A lot of times with solutions, people just automatically go to technology because there's this misconception that technology fixes everything and we just have to throw technology at it. And most times people like policy makers are making these decisions, but they don't have a full understanding of how the technology actually works. And so if they did, they would probably make some different decisions. And so that's what I was trying to do with the Aspen Tech Policy Hub, like educate the policy makers and the lawmakers, people in the Department of Education about it as well.

So with the Florida School Safety Portal, it was problematic because 58% of the students in the state are black and Hispanic, and it's the majority of their information that's in those databases. So if you're using those data sources, it's most likely it's going to be a black or Hispanic person that's going to get a high probability score. So I try to do both sides like, "Hey, if you're going to attempt to do this, this is what it literally would take to build a database. And this is how the predictions would work. This is the amount of data you would have to have. This is the types of data you would need to have in this database in order for it not to be biased and to be fair." That's if you are going to pursue it. And on the other hand, "Hey, this is why this totally will not work, and this is not a good idea."

Jason: So you've got this database and it's got data from all these different sources for young people, right? It's got information about I assume whether they've been involved in a crime or whether they've gotten in trouble at school. And you put all that into a big system and then it spits out what? Like a number or like a sort of a pre-crime status for a student?

Ora: There really wasn't a lot of details shared with the public like just exactly how this was going to work. And I showed up to some meetings. I used the Freedom of Information Act to get some information because it just wasn't freely available. They had their presentation they did, and with the graphics like, "Hey, it's easy. Here are the things. It goes to this one place. We're protecting students." But the actual details were not very clear.

Cindy: We've seen this in the predictive policing context, right? Where essentially, for the people who've gotten access to the way that the algorithm works and what it's trained up on, all it does is really just replicate the decisions that the police are already making, right? So instead of predicting crime, it's predicting the police. Which is, I suspect, a version of what you probably had as well, where the school officers are flagging the kids that they always flag, which we know has bias built into it, and then this algorithm is looking at what the officers are flagging and then predicting that they're going to keep flagging the same kids.

We've seen that kind of circular logic in some of the other things that are trying to predict whether somebody's likely to be a bad actor.

These kinds of predictive systems, when they're trying to predict, like, whether you want to buy shoes or something, they have a massive amount of data and they have ground truth about whether somebody's actually buying shoes or has bought shoes in the past. And one of the things that is always troubling in these kinds of trying-to-predict-who's-a-bad-person things is that they don't have nearly enough data to begin with, so trying to train up a model to do this kind of prediction just isn't going to work.

I know in our audience, anyway, there are a lot of people who do understand how these models work, and it's important to bring that critical eye when you're trying to predict future behavior by humans. It goes terribly wrong very quickly.

Ora: Yeah. I definitely agree. Especially with school shootings, even though it's horrific that they happen, the numbers are just so small. So to try to do that. And even the FBI, they've tried to look to see if there's any patterns among the people who have done it, but the conclusion is there's not enough info. And even if you did, it's like the thing with AI: you're always acting on historical data, and so that's another big problem.

Cindy: Yeah, absolutely.

Jason: How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Jason: One of the things that we've seen in Florida is a lot of problems around Pasco County in particular, where we've looked into what seems to be a data-driven policing strategy for young people there that sounds very similar or connected to the Florida Schools Safety Portal. What we saw there is a lot of harassment as a result of being labeled a potential X by these systems. Is there a similar kind of problem?

Ora: That was something they mentioned with the Safety Portal that they would be piloting a version. But it was kind of like the room itself was all like police officers and I think there were a couple of lawyers, so it's literally a room full of white male police officers. And so it appeared that they just had their mindset like, "Hey, this is the course we're going to take. We think this is going to work."

And I know also, at upper levels in our state, there was the desire, I don't know if it still is, to have Florida be the leader in intelligent policing, according to some of the reports that I had read. And I think that's what's playing out in Pasco. As you said, I think a majority of the students are... I mean, the people being harassed are under the age of 18, and they're just knocking on doors and checking on them in the morning, late at night, harassing their families, and they haven't done anything. And so it's just like, "Well, hey, you're hanging out with these people and that's going to lead to this." And it's like, "Where are you getting this from? What is prompting you? And what trigger is it that's causing you to come out to a student's house who has not done anything?"

Cindy: Seeing this, you've really thrown yourself into the solution of trying to educate students about this in particular. Why did you pick that particular solution, and how are you seeing it play out?

Ora: Initially I just thought, one, it takes too long for legislative solutions to anything. It's just very, very slow moving. And I really just wanted to empower the students, especially in the communities that would be impacted the most by algorithms, which would be the black and brown communities. So I wanted to attempt to give them some type of recourse and just a heads up like, "Hey, this is what's happening." And the ability to push back. And I just imagined, say, it's happened a couple times already where people were misidentified through facial recognition systems and then sent to jail. But if you're aware of how those systems work and how flawed they are and the inaccuracies, you could push back and be like, "Hey, can I see the data? How did you come to this conclusion? And if they can't, then you need to let me go."

And so I think just education is one of the easiest ways and most empowering ways to get information to students.

Jason: So one thing I think is really interesting about your approach... I'm on the activism team at EFF, and we spend a lot of time killing bad bills, bad projects, bad legislation, bad things that companies are doing, but it's not enough just to kill the bad thing, right? You have to educate people around how to do it better in the future. And you're clearly doing that with this approach. And I'm just wondering if you could talk a little bit about how this has been successful?

Ora: Yeah, I think as far as the policing and surveillance, I know prior to the pandemic I was just on this education campaign and I was just trying to tell as many people as possible. So I spoke to pre-service teachers at the University of South Florida, especially those who teach special education students, so they would be aware of it. I was just talking to other teachers and none of them were aware of it, which was problematic.

So what I'm doing now is I co-founded an organization called the AI Education Project. And so I developed curricula to teach specifically high school students, but now we've branched out to community college and university students, about artificial intelligence and its social impacts.

So the last couple of semesters I've been teaching a course called the Social Impacts of Artificial Intelligence. And we do talk about the Florida Schools Safety Portal and also what's happening in Pasco County. And so for their final project I do challenge the students to look at local problems that have to do with AI and emerging technologies and biased systems, and challenge them: how would they redesign those systems? And so I think it's been really impactful so far because we're educating a lot of students. We started off with 300 in summer 2020, and as of the end of fall semester 2021, we've reached 30,000 students, with the large majority of them being here in the state of Florida.

Cindy: That's fabulous. And what I really appreciate about the approach is that you're really tying it to the local community and empowering the local communities, and that's something we've heard throughout this podcast: that the things that really work are the things that don't come top down. And you're a highly trained scientist, so in some ways you're at the top of this, but you're really teaching the communities that are affected by a lot of these technologies and giving them the power to stand up for themselves and design systems that'll work for them. I think that there's lots of ways in which computers and computer modeling could really help these communities, and instead, what we're seeing are all of these things aimed at them by law enforcement or schools or other things. And so I really love the approach of trying to flip this on its head and giving the power to the affected communities.

Ora: One, I don't want students to be afraid of technology. So that's one of my top things. I just want them to understand it and be able to make informed decisions, but I also want them to know, "Hey, these are the problems, the issues where it's not getting it right." So we have had some students that were so taken aback or just enamored with the whole thing that it has sparked them to want to go into this area and study it when they go to university or do something about it. So that's the good byproduct of all of this: you can't solve problems that you don't know exist. And so once you're aware, if you feel deeply and passionately about it, it's like, "Hey, I'm going to do something about this." And I also show students you don't have to be an AI engineer, you don't have to be a programmer or developer to address this issue. And so that's another part of the curriculum.

Jason: Do students as you're telling this story about showing them how the specific assessment tools work in the criminal justice system, for example, are they surprised? Are any of them already aware of these? Or is it completely new to most people? Because I think I would be surprised if anyone really knew about it. The average person probably doesn't, but I'm wondering what their responses are?

Ora: Yes. In 100% of the cases they were just totally unaware of all the different ways AI is being used, how their data's being collected and sold when they're on these platforms. And the interesting thing that's funny… the course I teach at, well, the course in general is for everyone and the curriculum's for everyone. So we have students from different backgrounds. But at the university it's open to all majors, and I always get the computer science students coming in and they come in with this hubris like, "Oh, I already know this. I build models. I whatever." And then when we start going into this, they had no clue, no idea whatsoever. But to me it's not surprising because I think only like 12% of computer science and data science majors even address ethical issues.

So before they were just coding, they were making all the stuff and, "Oh, I can make it do this and that. Now I have to consider who may I be harming? How is this going to be used? How is this going to play out?"

Cindy: I'd like to shift gears because our focus here is how do we fix the internet? And I want to hear your vision of what it looks like if we get this right. What if we flip the script around and we have technologies that are actually serving communities, that communities are in control of, and that really work for students? What do you see? What's your vision?

Ora: What I'm trying to do now is just... there needs to be some new narratives. So right now the main narratives are owned by the big tech companies and academia. So I just think there's this spectrum. You have the technological utopians, if you want to call them that, and that's more your big tech and your policy makers, which are, "Hey, technology is the best. It can do no wrong. It can be used to solve any problem." And then you have the technological skeptics, which are more like your sociologists and people like that: "Hey, it's going to destroy us all. It's the worst thing ever." And the people in the middle, the contextualists: "Hey, it depends." So I think most of what people see is on that one end, that, "Hey, technology is great," and there's a lot of hype around it, and we don't actually look at the reality of it.

So I think just encouraging some new voices to talk about what's actually happening, what are the effects? And then if we can get those two ends of the spectrum to work together, because we have these huge conferences where all the developers are, and you have these other huge conferences with just the sociologists and that end of the spectrum, but you never have one where they're together to solve these problems. So I would love to see that happen, or just a more evenly distributed amount of information about the realities of the technology.

Cindy: If we had more evenly distributed information, how would our world change? How would a student in the future go through their experience differently than what's happening today?

Ora: Yeah. Well, I'm actually seeing that now. I've taught in the Upward Bound Program. The students I'm teaching right now in my courses are actually thinking. Before, it's like you download an app, "Agree all." They don't think about any of it. But now my students are like, "OMG, they're going to sell my information to third parties and they're tracking me." Just making informed decisions. Also, knowing we're coming up on midterm elections, they're understanding the information they're seeing is being targeted at them to nudge them. They know what nudging is. So, "Hey, I might have to go outside of my filter bubble to get some other perspectives on this." So I think it would just make people more informed. A culture of just searching for truth instead of just, "Hey, whatever you tell me, I'm going to believe it."

Cindy: Being back in charge rather than being manipulated by the way that the information is presented and by whom and where. Yeah, that would be great. 

Ora: Yes.

Cindy: So what are the values we are protecting here? Already I hear self-determination, control, ready access to information. Are there others?

Ora: Yes. One of my big things, I just call it tech equity, diversity, however it is: diversity beyond just race and gender. It's also cognitive diversity. So we need to hear from teachers, we need to hear from sociologists, we need to hear from young people when it comes to these different aspects of technology. So as far as the tech equity piece, I really push for black voices. There is a lot of awesome work being done, especially in AI, and even in other areas like the metaverse, by black scholars, but their work is never highlighted. It's always Stanford, Berkeley, MIT. And so there's great work being done at HBCUs, historically black colleges and universities.

And even with the courses I teach for my students, I'm very careful not to only present Western views because a lot of times we just get stuck in the United States and it's like, "Okay. There's these other big land masses where they have people doing awesome work as well." So yeah, just that cognitive diversity coming from just a lot of different areas.

Cindy: Do you have a particular thing that you do with your students that might give us a little more concreteness about what you're doing in your classes that would be fun to share?

Ora: We have them watch this video on how AI is being used in the criminal justice system, like pretrial risk assessment. And we have them basically write a letter to the people who make the algorithm, or on behalf of someone who's been wrongly accused, or to the attorney who's handling the case, as to why. So we've gotten some really powerful things from that. Another activity we have them do is what I call PITCH AI.

We ask them to envision a world, how would they use AI to solve problems in a career or field that they care about? And so we've gotten some really creative things of what people would do. 

Cindy: Give me an example. I'd love to hear what people are thinking they would do. 

Ora: We had a student who grew up in foster care, her and her sister. And so she described how she would use AI to do better matching between a child, especially when they have a sibling, and the host family. And so she describes: this is the data that would be collected, this is how the algorithm would work.

I had another student who was interested in political science, and they talked about, I think it was called Dictator Tracker, how they would collect all this data on people who are running for office: who do they have relationships with, who are they corresponding with, to predict the probability of them being a world dictator. And so that one was kind of-

Cindy: That might be one. Hopefully, we don't have enough dictators to train a model for that, but I appreciate the effort.

Jason: And I like that you're taking... You're literally turning the targeting of this AI against the powerful in some ways, which it's often being used against the powerless. So that's really wonderful.

Cindy: So well, that was terrific. Thank you so much for the work that you're doing. I mean, really honestly, I feel like in some ways you're building the next generation of EFF staffers and activists and go, go, go, we need more. And we especially need people from these communities, right? I mean, it's just not right that the people who are generally most impacted by these kinds of systems are the ones who have the least knowledge, the least transparency, and the least control. I mean, it's just exactly backwards. So thank you for the work that you're doing.

Ora: Thank you so much for having me on. This was great and fun.

Cindy: Well, I loved how she took her horror at learning about the massive collection of data about students, and the difficulties of trying to predict who's a school shooter from that, and turned it into really a passion for helping to empower her community.

Jason: And I really appreciate that she's using it not just as an educational opportunity, but as an activist opportunity. She's really doing, or having students do, hardcore grassroots activism: learning why these systems are the way they are, how to fix them, and then reaching out to the people who make them to try to make them better. I think that's really a model for what EFF does on the activism team and in general.

Cindy: I do think that the part where the students are not only learning how horrible it is, but are learning how to write the letter to explain it to other people. I mean, that's the piece where the knowledge turns into power.

Jason: I'd love for her to have been a teacher in my school.

Cindy: I love her vision of a better future, right? It's the stuff that we're just hearing over and over again: local control, local empowerment, real transparency, and the simple truth that knowledge is power.

Jason: Absolutely. Well, thank you all for joining us. If you've enjoyed this episode, please visit eff.org/podcast where you'll find more. You can learn more about the issues and you can also donate to become an EFF member. Members are the only reason we can do this work. Plus you can get cool stuff like an EFF hat or an EFF hoodie, or even an EFF camera cover for your laptop. Please get in touch if you have thoughts about this podcast by emailing us at podcast@eff.org. We do read every single email that we get.

Music for How to Fix the Internet was created for us by Nat Keefe and Reed Mathis of BeatMower. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. I'm Jason Kelley.

Cindy: And I'm Cindy Cohn. 

VOICE:
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 

Meet Me at Phountain 

Hoedown at the Roundabout and 

JPEG of a Hotdog all by Gaetan H

and 

reCreation by airtone.