Episode 107 of EFF’s How to Fix the Internet

Modern life means leaving digital traces wherever we go. But those digital footprints can translate to real-world harms: the websites you visit can impact the mortgage offers, car loans and job options you see advertised. This surveillance-based, algorithmic decision-making can be difficult to see, much less address. These are the complex issues that Vinhcent Le, Legal Counsel for the Greenlining Institute, confronts every day. He has some ideas and examples about how we can turn the tables—and use algorithmic decision-making to help bring more equity, rather than less.  

EFF’s Cindy Cohn and Danny O’Brien joined Vinhcent to discuss our digital privacy and how U.S. laws haven’t kept up with safeguarding our rights when we go online. 

Listen to the episode below, or find it on Apple Podcasts, Spotify, or via RSS.

You can also listen to this episode on the Internet Archive and on YouTube.

The United States already has laws against redlining, where financial companies engage in discriminatory practices such as preventing people of color from getting home loans. But as Vinhcent points out, we are seeing lots of companies use other data sets—including your zip code and online shopping habits—to make massive assumptions about the type of consumer you are and what interests you have. These groupings, even though they are often inaccurate, are then used to advertise goods and services to you—which can have big implications for the prices you see. 

But, as Vinhcent explains, it doesn’t have to be this way. We can use technology to increase transparency in online services and ultimately support equity.  

In this episode you’ll learn about: 

  • Redlining—the pernicious system that denies historically marginalized people access to loans and financial services—and how modern civil rights laws have attempted to ban this practice.
  • How the vast amount of our data collected through modern technology, especially browsing the Web, is often used to target consumers for products, and in effect recreates the illegal practice of redlining.
  • The weaknesses of the consent-based models for safeguarding consumer privacy, which often mean that people are unknowingly waiving away their privacy whenever they agree to a website’s terms of service.
  • How the United States currently has an insufficient patchwork of state laws that guard different types of data, and how a federal privacy law is needed to set a floor for basic privacy protections.
  • How we might reimagine machine learning as a tool that actively helps us root out and combat bias in consumer-facing financial services and pricing, rather than exacerbating those problems.
  • The importance of transparency in the algorithms that make decisions about our lives.
  • How we might create technology to help consumers better understand the government services available to them. 

Vinhcent Le serves as Legal Counsel with the Greenlining Institute’s Economic Equity team. He leads Greenlining’s work to close the digital divide, protect consumer privacy, ensure algorithms are fair, and insist that technology builds economic opportunity for communities of color. In this role, Vinhcent helps develop and implement policies to increase broadband affordability and digital inclusion as well as bring transparency and accountability to automated decision systems. Vinhcent also serves on several regulatory boards including the California Privacy Protection Agency. Learn more about the Greenlining Institute.

If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators:

Drops of H2O ( The Filtered Water Treatment ) by J.Lang (c) copyright 2012 Licensed under a Creative Commons Attribution (3.0) Unported license. http://dig.ccmixter.org/files/djlang59/37792 Ft: Airtone

Come Inside by Zep Hurme (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) Unported license. http://dig.ccmixter.org/files/zep_hurme/59681 Ft: snowflake

Warm Vacuum Tube  by Admiral Bob (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) Unported license. http://dig.ccmixter.org/files/admiralbob77/59533 Ft: starfrosch

reCreation by airtone (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) Unported license. http://dig.ccmixter.org/files/airtone/59721

Resources

Data Harvesting and Profiling:

Automated Decision Systems (Algorithms):

Community Control and Consumer Protection:

Racial Discrimination and Data:

Fintech Industry and Advertising IDs:

Transcript

Vinhcent: When you go to the grocery store and you put in your phone number to get those discounts, that's all getting recorded, right? It's all getting attached to your name, or at least an ID number. Data brokers purchase that from people, they aggregate it, they attach it to your ID, and then they can sell that out. There was a website where you could actually look up a little bit of what folks have on you. And interestingly enough, they had all my credit card purchases; they thought I was a middle-aged woman that loved antiques, ‘cause I was going to TJ Maxx a lot. 

Cindy: That's the voice of Vinhcent Le. He's a lawyer at the Greenlining Institute, which works to overcome racial, economic, and environmental inequities. He is going to talk with us about how companies collect our data and what they do with it once they have it and how too often that reinforces those very inequities.

Danny: That's because some companies look at the things we like, who we text and what we subscribe to online to make decisions about what we'll see next, what prices we'll pay and what opportunities we have in the future.

THEME MUSIC

Cindy: I'm Cindy Cohn, EFF’s Executive Director.

Danny: And I'm Danny O'Brien. And welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation. On this show, we help you to understand the web of technology that's all around us and explore solutions to build a better digital future. 

Cindy: Vinhcent, I am so happy that you could join us today because you're really in the thick of thinking about this important problem.

Vinhcent: Thanks for having me. 

Cindy: So let's start by laying a little groundwork and talk about how data collection and analysis about us is used by companies to make decisions about what opportunities and information we receive.

Vinhcent: It's surprising, right? Pretty much all of the decisions that companies encounter today are increasingly being turned over to AI and automated decision systems to be made. Right. The FinTech industry is determining what rates you pay, whether you qualify for a loan, based on, you know, your internet data. It determines how much you're paying for car insurance. It determines whether or not you get a good price on your plane ticket, or whether you get a coupon in your inbox, or whether or not you get a job. It's pretty widespread. And, you know, it's partly driven by the need to save costs, but also by this idea that these AI automated algorithmic systems are somehow more objective and better than what we've had before. 

Cindy: One of the dreams of using AI in this kind of decision making is that it was supposed to be more objective and less discriminatory than humans are. The idea was that if you take the people out, you can take the bias out. But it’s very clear now that it’s more complicated than that. The data has bias baked in, in ways that are hard to see, so walk us through that from your perspective. 

Vinhcent: Absolutely. The Greenlining Institute, where I work, was founded to essentially oppose the practice of redlining and close the racial wealth gap. And redlining is the practice where banks refuse to lend to communities of color, and that meant that access to wealth and economic opportunity was limited for, you know, decades. Redlining is now illegal, but the legacy of that lives on in our data. So these systems look at the zip code and all of the data associated with that zip code, and they use that to make the decisions. They use that data, and they're like, okay, well this zip code, which so often happens to be full of communities of color, isn't worth investing in because poverty rates are high or crime rates are high, so let's not invest in this. So even though redlining is outlawed, these computers are picking up on these patterns of discrimination and they're learning that, okay, that's what humans in the United States think about people of color and about these neighborhoods, let's replicate that kind of thinking in our computer models. 

Cindy: The people who design and use these systems try to reassure us that they can adjust their statistical models, change their math, surveil more, and take these problems out of the equation. Right?

Vinhcent: There's two things wrong with that. First off, it's hard to do. How do you determine how much of an advantage to give someone? How do you quantify what the effect of redlining is on a particular decision? Because there are so many factors: decades of neglect and discrimination, and that's hard to quantify.

Cindy: It's easy to envision this based on zip codes, but that's not the only factor. So even if you control for race or you control for zip codes, there are still multiple factors going into this, is what I'm hearing.

Vinhcent: Absolutely. When they looked at discrimination in algorithmic lending, they found out that essentially there was discrimination. People of color were paying more for the same loans as similarly situated white people. It wasn't because of race, but it was because they were in neighborhoods that have less competition and choice. The other problem with fixing it with statistics is that it's essentially illegal, right? If you find out, in some sense, that people of color are being treated worse under your algorithm, and you correct it on racial terms, like, okay, brown people get a specific bonus because of the past redlining, that's disparate treatment, and that's illegal under our anti-discrimination law. 

Cindy: We all want a world where people are not treated adversely because of their race, but it seems like we are not very good at designing that world, and for the last 50 years in the law, at least, we have tried to avoid looking at race. Chief Justice Roberts famously said “the way to stop discrimination on the basis of race is to stop discriminating on the basis of race.” But it seems pretty clear that hasn’t worked. Maybe we should flip that approach and actually take race into account? 

Vinhcent: Even if an engineer wanted to fix this, right, their legal team would say, no, don't do it, because there was a Supreme Court case, Ricci, a while back where a fire department thought that its test for promoting firefighters was discriminatory. They wanted to redo the test, and the Supreme Court said that trying to redo that test to promote more people of color was disparate treatment. They got sued, and now no one wants to touch it. 

MUSIC BREAK

Danny: One of the issues here I think is that as the technology has advanced, we've shifted from, you know, just having an equation to calculate these things, which we can kind of understand, to these systems built on huge amounts of data. Where are they getting that data from? 

Vinhcent: We're leaving little bits of data everywhere. And those little bits of data may be what website we're looking at, but it's also things like how long you looked at a particular piece of the screen, or did your mouse linger over this link, or what did you click? So it gets very, very granular. So what data brokers do is, you know, they have tracking software, they have agreements, and they're able to collect all of this data from multiple different sources, put it all together and then put people into what are called segments. And these segments have titles like “single and struggling,” or “urban dweller down on their luck.”

So they have very specific segments that put people into different buckets. And then what happens after that is advertisers will be like, we're trying to look for people that will buy this particular product. It may be innocuous, like I want to sell someone shoes in this demographic. Where it gets a little bit more dangerous and a little bit more predatory is if you have someone that's selling payday loans or for-profit colleges saying, Hey, I want to target people who are depressed or recently divorced or are in segments that are associated with various other emotional states that make their products more likely to be sold.
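
To make the profiling Vinhcent describes a bit more concrete, here is a minimal, hypothetical sketch of how a broker-style pipeline might merge records keyed to a single ID and bucket the result into a named segment. The data sources, matching rules, and thresholds are invented for illustration; only the segment label is borrowed from the episode.

```python
# Hypothetical sketch of broker-style profiling: merge data keyed to one
# ID from several sources, then assign a marketing segment. The sources,
# rules, and thresholds here are invented for illustration.

from collections import defaultdict

loyalty_card = [{"id": "user-123", "store": "discount_retailer", "visits": 14}]
web_tracking = [{"id": "user-123", "category": "payday_loan_ads", "dwell_seconds": 95}]

def merge_profiles(*sources):
    """Fold records from every source into one profile per ID."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["id"]].update(record)
    return profiles

def assign_segment(profile):
    """Crude rule-based bucketing of the kind segments are built from."""
    if profile.get("visits", 0) > 10 and profile.get("dwell_seconds", 0) > 60:
        return "single and struggling"  # segment label borrowed from the episode
    return "general audience"

for user_id, profile in merge_profiles(loyalty_card, web_tracking).items():
    print(user_id, assign_segment(profile))
```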

Danny: So it's not just about your zip code. It's like, they just decide, oh, everybody who goes and eats at this particular place, it turns out nobody is giving them credit. So we shouldn't give them credit. And that begins to build up a kind of pattern: it just re-enacts that prejudice. 

Vinhcent: Oh my gosh, there was a great example of exactly that happening with American Express. A gentleman, Wint, was traveling and he went to a Walmart in, I guess, a bad part of town, and American Express reduced his credit limit because of the shopping behavior of the people that went to that store. American Express was required under the Equal Credit Opportunity Act to give him a reason, right, for why his credit limit changed. That same level of transparency and accountability doesn't exist for a lot of these algorithmic decisions that do the same thing, because they're not as well regulated as more traditional banks. They don't have to do that. They can just silently change your terms or what you're going to get, and you might not ever know.  

Danny: You've talked about how redlining was a problem that was identified, and there was a concentrated effort to try and fix that, both in the regulatory space and in the industry. Also we've had a stream of privacy laws, again sort of in this area, roughly around consumer credit. In what ways have those laws failed to keep up with what we're seeing now? 

Vinhcent: I will say the majority of our privacy laws, for the most part, that maybe aren't specific to the financial sector, they fail us because they're really focused on this consent-based model where we agree in these giant terms of service to give away all of our rights. Putting guardrails up so predatory use of data doesn't happen hasn't been a part of our privacy laws. And then with regards to our consumer protection laws, perhaps around FinTech, our civil rights laws, they fall short because it's really hard to detect algorithmic discrimination. You have to provide some statistical evidence to take a company to court, proving that, you know, their algorithm was discriminatory. We really can't do that because the companies have all that data. So our laws need to kind of shift away from this race-blind strategy that we've done for the last, you know, 50, 60 years, where, like, okay, let's not consider race, let's just be blind to it, and that's our way of fixing discrimination. With algorithms, where you don't need to know someone's race or ethnicity to discriminate against them on those terms, that needs to change. We need to start collecting all that data, which can be anonymous, and then testing the results of these algorithms to see whether or not there's a disparate impact happening: aka, are people of color being treated significantly worse than, say, white people, or are women being treated worse than men?

If we can get that right, we get that data, we can see that these patterns are happening. And then we can start digging into where does this bias arise? You know, where is this vestige of redlining coming up in our data or in our model? 
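
As a rough illustration of the outcome testing Vinhcent describes, here is a minimal sketch of a disparate impact check on hypothetical lending decisions. It uses the widely cited "four-fifths" rule of thumb as a flag for further investigation; the data and group labels are made up, and a flagged ratio is a starting point for inquiry, not proof of discrimination.

```python
# Minimal sketch of an outcome-based disparate impact check.
# The records below are hypothetical; real testing would pair anonymized
# demographic data with actual lending or pricing decisions.

from collections import defaultdict

# Each record: (group label, whether the applicant was approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def approval_rates(records):
    """Compute the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's.
    A ratio below 0.8 (the 'four-fifths' rule of thumb) is a common flag
    for further investigation, not proof of discrimination by itself."""
    return rates[protected] / rates[reference]

rates = approval_rates(decisions)
ratio = disparate_impact_ratio(rates, protected="group_b", reference="group_a")
print(rates, round(ratio, 2))  # flags whether group_b fares significantly worse
```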

Cindy: I think transparency is especially difficult in this question of machine learning decision-making because, as Danny pointed out earlier, often even the people who are running it don't know what it's picking up on all that easily. 

MUSIC BREAK

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

Cindy: We understand that different communities are being impacted differently... Companies are using these tools and we are seeing the disparate impacts.

What happens when those situations end up in the courts? Because from what I’ve seen the courts have been pretty hostile to the idea that companies need to show their reasons for those disparate impacts.

Vinhcent: Yeah. So, you know, my idea, right, is that if we get the companies on record, like, showing that, oh, you're causing disparate impact, it's their responsibility to provide a reason, a reasonable business necessity, that justifies that disparate impact.

And that's what I really want to know. What reasons are all these companies using to charge people of color more for loans or insurance, right? It's not based off their driving record or their income. So what is it? And once we get that information, right, we can begin to have a conversation as a society around what the red lines are for us around the use of data, which particular uses, say targeting predatory ads towards depressed people, should be banned. We can't get there yet because all of those cards are being held really close to the vest of the people who are designing the AI.

Danny: I guess there is a positive side to this, in that I think at a society level, we recognize that this is a serious problem. That excluding people from loans, excluding people from a chance to improve their lot, is something that we've recognized that racism plays a part in, and we've attempted to fix, and that machine learning is contributing to this. I play around with some of the more trivial versions of machine learning, things like GPT-3. What's fascinating about that is that it draws from the Internet's huge well of knowledge, but it also draws from the less salubrious parts of the internet. And you can see that it is expressing some of the prejudices that it's been fed with.

My concern here is that what we're going to see is a percolation of that kind of prejudice into areas where we've never really thought about the nature of racism. And if we can get transparency in that area and we can tackle it here, maybe we can stop this from spreading to the rest of our automated systems. 

Vinhcent: I don't think all AI is bad. Right? There's a lot of great stuff happening; Google Translate, I think, is great. I think in the United States, what we're going to see is, at least with housing and employment and banking, those are the three areas where we have strong civil rights protections in the United States. I'm hoping and pretty optimistic that we'll get action, at least in those three sectors, to reduce the incidence of algorithmic bias and exclusion. 

Cindy: What are the kinds of things you think we can do that will make a better future for us, and pull out the good of machine learning and less of the bad?

Vinhcent: I think we're at the early stage of algorithmic regulation and kind of reining in the free hand that tech companies have had over the past decade or so. I think what we need to have is an inventory of AI systems as they're used in government, right?

Is your police department using facial surveillance? Is your court system using criminal sentencing algorithms? Is your social service department determining your access to healthcare or food assistance using an algorithm? We need to figure out where those systems are, so we can begin to know, all right, where do we ask for more transparency?

When we're using taxpayer dollars to purchase an algorithm, that algorithm is going to make decisions for millions of people. For example, Michigan purchased the MiDAS algorithm, which was, you know, over $40 million, and it was designed to send out unemployment checks to people who recently lost their jobs.

It accused some 40,000 people of fraud. Many people went bankrupt, and the algorithm was wrong. So when you're purchasing these expensive systems, there needs to be a risk assessment done around who could be impacted negatively; it obviously wasn't tested enough in Michigan.

Specifically in the finance industry, right, banks are allowed to collect race and ethnicity data on mortgage loans. I think we need to expand that, so that they are allowed to collect that data on small personal loans, car loans, small business loans.

That type of transparency, allowing regulators, academia, folks like that, to study those decisions that they've made and essentially hold those companies accountable for the results of their systems, is necessary.

Cindy: That's one of the things, is that you think about who is being impacted by the decisions that the machine is making, and what control do they have over how this thing is working, and it can give you kind of a shortcut for how to think about these problems. Is that something that you're seeing as well? 

Vinhcent: I think that's what is missing, actually, right? There is a strong desire for public participation, at least from advocates, in the development of these models. But none of us, including me, have figured out what that looks like.

Because the tech industry has pushed off any oversight by saying, this is too complicated, this is too complicated. And having delved into it, a lot of it is too complicated, right. But I think people have a role to play in setting the boundaries for these systems. Right? When does something make me feel uncomfortable? When does this cross the line from being helpful to being manipulative? So I think that's what it should look like, but how does that happen? How do we get people involved in these opaque tech processes when the engineers are working on a deadline and have no time to care about equity and deliver a product? How do we slow that down to get community input? Ideally in the beginning, right, rather than after it's already baked.

Cindy: That's what government should be doing. I mean, that's what civil servants should be doing. Right. They should be running processes, especially around tools that they are going to be using. And the misuse of trade secret law and confidentiality in this space drives me crazy. If this is going to be making decisions that have impact on the public, then a public servant’s job ought to be making sure that the public's voice is in the conversation about how this thing works, where it works, where you buy it from, and that's just missing right now.

Vinhcent: Yeah, that was what we tried to do with AB 13 last year. And there was a lot of hand-wringing about putting that responsibility onto public servants, because now they're worried that they'll get in trouble if they didn't do their job right. But that's your job, you know, you have to do it. That's government's role, to protect the citizens from this kind of abuse. 

MUSIC BREAK

Danny: I also think there's a sort of new and emerging disparity and inequity in the fact that we're constantly talking about how large government departments and big companies are using these machine learning techniques, but I don't get to use them. Well, I would love, as you said, Vinhcent, I would love the machine learning thing that could tell me what government services are out there based on what it knows about me. And it doesn't have to share that information with anyone else. It should be my little, I want a pet AI. Right? 

Vinhcent: Absolutely. The public use of AI is so far limited to things like putting a filter on your face, right? Let's give us real power over, you know, our ability to navigate this world, to get opportunities. Yeah, how to flip that is a great question and something, you know, I think I'd love to tackle with you all. 

Cindy: I also think if you think about things like the Administrative Procedure Act, getting a little lawyerly here, but this idea of notice and comment, you know, before something gets purchased and adopted. Something that we've done in the context of law enforcement purchases of surveillance equipment in these CCOPS ordinances that EFF has helped pass in many places across the country. And as you point out, disclosure of how things are actually going after the fact isn't new either, and something that we've done in key areas around civil rights in the past and could do in the future. But it really does point out how important transparency, both evaluation before and transparency after, is as a key to getting at least enough of a picture of this so we can begin to solve it.

Vinhcent: I think we're almost there, where governments are ready. We tried to pass a risk assessment and inventory bill, AB 13, in California this past year, like what you mentioned in New York, and what it came down to was that the government agencies didn't even know how to define what an automated decision system was.

So there's a little bit of reticence. And I think, uh, as we get more stories around, like, Facebook or abuse in banking, that will eventually get our legislators and government officials to realize that this is a problem and, you know, stop fighting over these little things and realize the bigger picture: that we need to start moving on this and we need to start figuring out where this bias is arising.

Cindy: We would be remiss if we were talking about solutions and we didn't talk about, you know, a baseline strong privacy law. I know you think a lot about that as well, and we don't have the real, um, comprehensive look at things, and we also really don't have a way to create accountability when companies fall short. 

Vinhcent: I am a board member of the California Privacy Protection Agency. California has what is really the strongest privacy law in the United States, at least right now. Part of that agency's mandate is to require folks that have automated decision systems that include profiling to give people the ability to opt out, and to give customers transparency into the logic of those systems. Right. We still have to develop those regulations. Like, what does that mean? What does logic mean? Are we going to get people answers that they can understand? Who is subject to, you know, those disclosure requirements? But that's really exciting, right? 

Danny: Isn't there a risk that this is sort of the same kind of piecemeal solution that we sort of described in the rest of the privacy space? I mean, do you think there's a need to put this into a federal privacy law? 

Vinhcent: Absolutely. Right. So this is, you know, what California does hopefully will influence an overall federal one. I do think that the development of regulations in the AI space will happen, in a lot of instances, in a piecemeal fashion. We're going to have different rules for healthcare AI. We're going to have different rules for, uh, housing and employment, maybe lesser rules for advertising, depending on what you're advertising. So to some extent, these rules will always be sector specific. That's just how the United States legal system has developed the rules for all these sectors. 

Cindy: We think of three things, and the California law has a bunch of them. You know, we think of a private right of action, so actually empowering consumers to do something if this doesn't work for them, and that's something we weren't able to get in California. We also think about non-discrimination, so if you opt out of tracking, you know, you still get the service, right. We kind of fix this situation that we talked about a little earlier where, you know, we pretend like consumers have consent, but the reality is they really don't have consent. And then of course, for us, no preemption, which is really just a tactical and strategic recognition that if we want the states to experiment with stuff that's stronger, we can't have the federal law come in and undercut them, which is always a risk. We need the federal law to hopefully set a very high baseline, but given the realities of our Congress right now, making sure that it doesn't become a ceiling when it really needs to be a floor. 

Vinhcent: It would be a shame if California put out strong rules on algorithmic transparency and risk assessments and then the federal government said, no, you can't do that, you're preempted. 

Cindy: As new problems arise, I don't think we know all the ways in which racism is going to pop up in all the places, or other societal problems. And so we do want the states to be free to innovate where they need to.

MUSIC BREAK

Cindy: Let's talk a little bit about what the world looks like if we get it right, and we've tamed our machine learning algorithms. What does our world look like?

Vinhcent: Oh my gosh, it's such a paradise, right? Because that's why I got into this work. When I first got into AI, I was sold that promise, right? I was like, this is objective, like this is going to be data-driven, things are going to be great. We can use these services, right, this micro-targeting: let's not use it to sell predatory ads, but let's use it to reach the people that need things like government assistance programs.

California has all these great government assistance programs that pay for your internet, they pay for your cell phone bill, and enrollment is at 34%.

We have a really great example of where this worked in California. As you know, California has cap and trade, so you're taxed on your carbon emissions, and that generates billions of dollars in revenue for California. And we got into a debate, you know, a couple years back, about how that money should be spent, and what California did was create an algorithm, with the input of a lot of community members, that determined which cities and regions of California would get that funding. We didn't use any racial terms, but we used data sources that are associated with redlining. Right? Are you next to pollution? Do you have high rates of asthma, heart attacks? Does your area have higher unemployment rates? So we took all of those categories that banks are using to discriminate against people in loans, and we're using those same categories to determine which areas of California get more access to cap and trade reinvestment funds. And that's being used to build electric vehicle charging stations, affordable housing, parks, trees, and all these things to abate the impact of the environmental discrimination that these neighborhoods faced in the past.

Vinhcent: So I think in that sense, you know, we could use algorithms for greenlining, right? Not redlining, but to drive equitable outcomes. And that, you know, doesn't require us to change all that much, right? We're just using the tools of the oppressor to drive change and to drive, you know, equity. So I think that's really exciting work. And I think, um, we saw it work in California and I'm hoping we see it adopted in more places. 
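
To make the reinvestment example concrete, here is a hypothetical sketch of how burden indicators like the ones Vinhcent mentions (pollution exposure, asthma rates, unemployment) could be combined into a simple need score for ranking regions. The field names, weights, and figures are invented for illustration and are not California's actual formula.

```python
# Hypothetical sketch: rank regions for reinvestment by combining
# indicators associated with historical disinvestment. Field names,
# weights, and data are illustrative, not California's actual model.

regions = [
    {"name": "Region A", "pollution": 0.9, "asthma_rate": 0.8, "unemployment": 0.7},
    {"name": "Region B", "pollution": 0.3, "asthma_rate": 0.2, "unemployment": 0.4},
    {"name": "Region C", "pollution": 0.7, "asthma_rate": 0.6, "unemployment": 0.9},
]

WEIGHTS = {"pollution": 0.4, "asthma_rate": 0.3, "unemployment": 0.3}

def need_score(region):
    """Weighted sum of already-normalized (0-1) burden indicators."""
    return sum(WEIGHTS[key] * region[key] for key in WEIGHTS)

# Highest-need regions first: these would be prioritized for funding.
for region in sorted(regions, key=need_score, reverse=True):
    print(f"{region['name']}: {need_score(region):.2f}")
```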

Cindy: I love hearing a vision of the future where, you know, the fact that there are individual decisions possible about us are things that lift us up rather than crushing us down. That's a pretty inviting way to think about it. 

Danny: Vinhcent Le thank you so much for coming and talking to us. 

Vinhcent: Thank you so much. It was great. 

MUSIC BREAK

Cindy: Well, that was fabulous. I really appreciate how he articulates the dream of machine learning: that we would get rid of bias and discrimination in official decisions, and instead, you know, we've basically reinforced it. And how, you know, it's hard to correct for these historical wrongs when they're based in so many different places. So just removing the race of the people involved doesn't get at all the ways discrimination creeps into society.

Danny: Yeah, I guess the lesson that, you know, a lot of people have learned in the last few years, and everyone else has kind of known, is that this sort of prejudice is wired into so many systems. And it's kind of inevitable that algorithms that are based on drawing on all of this data and coming to conclusions are gonna end up recapitulating it.

I guess one of the solutions is this idea of transparency. Vinhcent was very honest that we're just in our infancy of learning how to make sure that we know how algorithms make decisions. But I think that has to be part of the research and where we go forward.

Cindy: Yeah. And, you know, at EFF we spent a little time trying to figure out what transparency might look like with these systems, because at the center of these systems it's very hard to get the kind of transparency that we think about. But there's transparency in all the other places, right. He started off, he talked about an inventory of just all the places it's being used.

Then looking at what the algorithms are putting out, looking at the results across the board, not just about one person but about a lot of people, in order to try to see if there's a disparate impact. And then running dummy data through the systems to try to see what's going on.

Danny: Sometimes we talk about algorithms as though we've never encountered them in the world before, but in some ways, governance itself is this incredibly complicated system. And we don't always know why that system works the way it does. But what we do is we build accountability into it, right? And we build transparency around the edges of it. So we know how the process, at least, is going to work. And we have checks and balances. We just need checks and balances for our sinister AI overlords. 

Cindy: And of course we just need better privacy law. We need to set the floor a lot higher than it is now. And of course that's a drum we beat all the time at EFF, and it certainly seems very clear from this conversation as well. What was interesting is that, you know, Vinhcent comes out of the world of home mortgages and banking and other areas, and greenlining itself, you know, who gets to buy houses where, and at what terms. That world has a lot of mechanisms already in place both to protect people's privacy and to have more transparency. So it's interesting to talk to somebody who comes from a world where we're a little more familiar with that kind of transparency, and how privacy plays a role in it, than I think in the general uses of machine learning or on the tech side. 

Danny: I think it's funny, because when you talk to tech folks about this, you know, we're actually kind of pulling our hair out because this is so new and we don't understand how to handle this kind of complexity. And it's very nice to have someone come from a policy background and come in and go, you know what? We've seen this problem before. We pass regulations, we change policies to make this better; you just have to do the same thing in this space.

Cindy: And again, there's still a piece that's different, but it's far less than I think people sometimes assume. The other thing I really loved is that he gave us such a beautiful picture of the future, right? And it's one where we still have algorithms, we still have machine learning, we may even get all the way to AI. But it is empowering people and helping people. And I love the idea of better being able to identify people who might qualify for public services that we're not finding right now. I mean, that's just a great version of a future where these systems serve the users rather than the other way around, right. 

Danny: Our friend Cory Doctorow always has this banner headline of seize the methods of computation. And there's something to that, right? There's something to the idea that we don't need to use these things as tools of law enforcement or retribution or rejection or exclusion. We have an opportunity to give this and put this in the hands of people so that they feel more empowered. And they're going to need to be that empowered, because we're going to need to have a little AI of our own to be able to really work better with these big machine learning systems that will become such a big part of our lives going forward.

Cindy: Well, big, thanks to Vinhcent Le for joining us to explore how we can better measure the benefits of machine learning, and use it to make things better, not worse.

Danny: And thanks to Nat Keefe and Reed Mathis of Beat Mower for making the music for this podcast. Additional music is used under a Creative Commons license from CCMixter. You can find the credits and links to the music in our episode notes. Please visit eff.org/podcasts where you’ll find more episodes, learn about these issues, and donate to become a member of EFF, as well as lots more. Members are the only reason we can do this work. Plus you can get cool stuff like an EFF hat, or an EFF hoodie, or an EFF camera cover for your laptop camera. How to Fix the Internet is supported by the Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. I'm Danny O’Brien.