From the course: Ethics and Law in Data Analytics
The ethics and variables of recidivism
- In the introduction to the lab at the end of module one, I told you how important getting the context was going to be, and part of that was getting the ethical context, so I asked you to think about some values that would enter into it. The ones I came up with were quality and autonomy. Now we're going to get a little more specific. We're not talking about the values so much as specific moral issues related to this kind of data set, the recidivism data set. So let's go through these. There are three things I want you to think about. The first two problems are indeed ethical problems, but they're really, really hard to test in a lab setting; I don't know how we would do that. Still, it's important that you understand them and keep them in the back of your mind as you're working. The third one is a little more technical, and it's going to be the direct task you have in your lab. So let's turn to problem one, the problem of unequal gains. Let's say we did a study. It's hypothetical, I'm completely making up the numbers and the results. But let's say we did this study and what we found is that human judges, I don't know exactly what our methodology was, it doesn't matter, over-predict recidivism 30% of the time, meaning that they hold people in jail more often than they need to. Now we tested an algorithm against that same historical data to see how it would do, and the algorithm over-predicted recidivism only 12.5% of the time. So you do the math and you're thinking, the algorithm wins, it's so much more accurate, it's causing less human suffering than the human judges, so let's go with the algorithm. Well, there are some things to think about here. Further analysis of this hypothetical study shows that the algorithm reduces over-prediction, but now we're breaking it down by race. A Caucasian defendant gets a reduction from 30% to 10%, so they're pretty well off. Then we look at African-American defendants, and yes, the algorithm made their lives better as a whole group, because it reduced the unnecessary jail time that they had, but not by as much: the reduction was only from 30% to 15%. Now you might be thinking, wait a minute, that's still good for everybody, everybody benefits, so obviously we should still use this algorithm, it's better than the human judge. Well, there is something positive here: you have in fact reduced the amount of unnecessary jail time that people are suffering, and that's good. But there's also a huge negative. This is obviously a hypothetical study, this is pretend, but if this kind of thing happened, we would have to say that it's now even more obvious, and we can be mathematically precise about it, how the color of a person's skin affects how much time they spend in jail. And that is not okay, right? So what I would say is, it's not time to abandon the project, but even though everybody gained, because the gains were unequal, that's an ethical problem, that's an ethical issue. We wouldn't throw our algorithm out entirely, but we would have to work to improve it so that there wouldn't be this unequal gain. Below is a small sketch of how you might measure that kind of unequal gain in practice.
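To make the hypothetical concrete, here is a minimal sketch in Python of measuring over-prediction separately for each group. The column names and the tiny set of records are made up purely for illustration, just like the numbers in the example above; this is not the actual study or the lab dataset.

```python
import pandas as pd

# Hypothetical historical records: did the person actually reoffend,
# and did the algorithm predict that they would?
data = pd.DataFrame({
    "race": ["Caucasian", "Caucasian", "African-American", "African-American",
             "Caucasian", "African-American", "Caucasian", "African-American"],
    "reoffended":     [0, 0, 0, 0, 1, 1, 0, 0],
    "algo_predicted": [0, 1, 0, 1, 1, 1, 0, 1],
})

# Over-prediction: the model says "will reoffend" for someone who did not.
data["over_predicted"] = (data["algo_predicted"] == 1) & (data["reoffended"] == 0)

# Rate of over-prediction among people who did not reoffend, broken down by race.
did_not_reoffend = data[data["reoffended"] == 0]
rates = did_not_reoffend.groupby("race")["over_predicted"].mean()
print(rates)  # very different rates across groups signal an unequal-gains problem
```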
Okay, let's talk about another problem; I call this the problem of divergent goals. This has to do with what we're trying to do when we punish somebody, when we put somebody in jail. Like, why would we even do that? It turns out that there are three justifications for punishment, and that's going to be a little tricky, because the algorithm only cares about one of them. It gets even more specific: you might count three or four, because one of the justifications for punishment breaks down into two, so there might be four, depending on how you count. The first theory is retributivism, the idea that the only time you can punish somebody, the only time punishment is justified, is for a crime they actually committed in the past. Right, something they've actually done; now it's time to punish them. But if you think about the algorithm we've been working with, it doesn't seem to acknowledge that at all. It's all about predicting the future, not about the past. So this algorithm doesn't take this theory of punishment into account at all. Another theory of punishment is rehabilitationism, and, truth in advertising, the word rehab is right there, that's exactly what it means: punishment is justified if and only if it's likely to rehabilitate somebody. This recidivism algorithm isn't trying to do that, that's not its goal. It's trying to be accurate in its prediction of who's going to reoffend, but it's not asking whether punishment will help them become a better person in the future. That's off the radar screen for this algorithm. This brings us to our third theory of punishment, consequentialism, which, again, has to do with consequences: punishment is justified only when it has good consequences for the whole society. To help you understand this, let me rework an example that John Stuart Mill, a famous consequentialist, gave. What if we found a serial killer? That sounds bad, right? But this serial killer is very, very old and frail, so he's definitely not going to reoffend, just because of the frailty. So the question is, you're the only person who knows this, no one else knows it, it's just you, and you also happen to know that this person is not going to reoffend. Should they be punished for their life of being a serial killer? A consequentialist would say, clearly, no. All you're doing is increasing the amount of suffering in the world, and it's not doing society any good, right? No one knows about it, and this person isn't going to reoffend, so it would just be unnecessarily cruel, immoral even, to punish this person. That's the idea of consequentialism as a theory of punishment. Okay, so it's a little complicated, because there are two ways to achieve consequentialism. These aren't really two extra theories, they are ways to achieve consequentialism. One is deterrence, and it is what it sounds like: it achieves social good by discouraging somebody from reoffending. They know the penalty is this number of years in jail if they do that crime, so we're appealing to their rational side. Because they don't want to be in jail for that long, they don't do the crime; that's deterrence. I'm not sure how that would be relevant to the algorithms we're looking at. I might be missing something, someone could write a paper about it, I'd be happy to read that, but it doesn't seem that this has any relevance to the recidivism algorithms. What the recidivism algorithm does care about, the one of the four that it does address, is incapacitation, which achieves the consequentialist objective by keeping criminals literally away from society, because they're locked in jail and can't commit crimes that way.
That's what the recidivism algorithm is trying to predict: who those future criminals are going to be, so we can keep them away from society, and the ones that don't need to be in jail, let's let them out into society so they can be healthy contributors. Okay, now as I mentioned, those first two problems are important to keep in the back of your mind as you're doing this module, but there's no way to directly test them in a lab. For this third problem, however, you are going to spend some time working on a dataset to figure out how you can deal with the problem of unfair proxies. So let me give you a really, really general picture. Don't memorize this slide and then go in and pretend you're a data scientist, that's not going to do the trick, but I just want to give you a little picture of how a predictive algorithm is set up; there's a small sketch of this setup right after this paragraph. First we take a set of individuals, and these are past individuals, because this has to be historical data, and these individuals were booked in jail, they spent some time in jail. So we know their names and everything, we have that data. And as a matter of fact, over a period of time, let's say two years, some of them reoffended and some of them did not. So what you would do as a data scientist is say, aha, there's the fact: some of them reoffended, some of them didn't, those are just pure data. Now we want to find a variable, let's call it x, that correlates strongly with reoffending and, at the same time, does not correlate with not reoffending. Think about that for a second; we want to find what is connected to the reoffenders. Then we take that variable x and drop it into a new formula that predicts behavior. So we have a set of new individuals, and we don't know whether they're going to reoffend in the future, but we say, aha, they have this variable x that has been strongly correlated with reoffending, and so the algorithm is going to predict that they're more likely to reoffend. That's the very, very basic level of how this all works.
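Here is a minimal sketch, in Python, of that basic setup: score a couple of candidate variables by how strongly they correlate with reoffending in the historical data, pick the strongest one as x, then use a crude threshold on x to flag new individuals. The column names, the tiny made-up dataset, and the threshold rule are all assumptions for illustration; this is not the lab's actual algorithm.

```python
import pandas as pd

# Historical individuals: features plus the known outcome (reoffended within two years).
history = pd.DataFrame({
    "prior_offenses": [0, 3, 1, 5, 0, 2, 4, 0],
    "age_at_release": [45, 22, 31, 19, 52, 27, 24, 38],
    "reoffended":     [0, 1, 0, 1, 0, 1, 1, 0],
})

# Step 1: find the candidate variable x most strongly correlated with reoffending.
candidates = ["prior_offenses", "age_at_release"]
correlations = {c: history[c].corr(history["reoffended"]) for c in candidates}
x = max(correlations, key=lambda c: abs(correlations[c]))
print("chosen variable x:", x, correlations)

# Step 2: use x to predict for new individuals whose outcome we do not know.
# A crude rule: predict "will reoffend" when x is above (or below) the historical
# mean, depending on the direction of the correlation.
threshold = history[x].mean()
new_people = pd.DataFrame({"prior_offenses": [0, 4], "age_at_release": [50, 21]})
if correlations[x] > 0:
    new_people["predicted_reoffend"] = new_people[x] > threshold
else:
    new_people["predicted_reoffend"] = new_people[x] < threshold
print(new_people)
```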
That actually introduces a problem, because whatever we come up with for x, whatever variable we choose, it probably, with maybe some exceptions, shouldn't be something that someone can't change about themselves. The classic example would be race. Or take parents' income: if we said, aha, people whose parents have this level of income are more likely to reoffend, therefore we're going to make you sit in jail longer because of something your parents did, that rightly strikes us as unethical, because how could they change that? It has to be something they could have done differently, such as offending the first time. So, as a data scientist, you can't use something like race; you can't say, aha, people of this race tend to reoffend, so let's put that information into the algorithm. That's not moral. What you have to do is find another variable that strongly correlates with reoffending, or whatever it is you're trying to predict. But it's really tricky, because this new variable has to be strongly correlated on the one hand, and on the other hand it has to provide new information; that is to say, it can't be a proxy that simply imports race in a way that you might not understand. The example that we've used a couple of times in this course is ZIP code: in America, there tends to be racial segregation by ZIP code. So you can't say, well, we're not going to use race, but we are going to use ZIP code. Yes, you're using a proxy, but it's still unfair, because it's still a proxy for race. It's not discovering any new information, it's just importing the same biased information in a new way. So good luck, your questions are going to be around this problem of getting at unfair proxies, and I hope it goes well for you. Below is one small sketch of how you might check whether a variable is acting as that kind of proxy.
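As a closing illustration of the ZIP code point, here is a minimal sketch of one way you might check whether a candidate variable is acting as a proxy for race: see how reliably race can be guessed from that variable alone. The data, column names, and the simple majority-guess check are assumptions for illustration only, not the method your lab questions will require.

```python
import pandas as pd

# Hypothetical records with a protected attribute and a candidate feature.
records = pd.DataFrame({
    "zip_code": ["10001", "10001", "10002", "10002", "10001", "10002"],
    "race":     ["A", "A", "B", "B", "A", "B"],
})

# For each ZIP code, find the most common race; if guessing that race from the
# ZIP code alone is almost always right, the feature is effectively a race proxy.
majority_race = records.groupby("zip_code")["race"].agg(lambda s: s.mode()[0])
records["guessed_race"] = records["zip_code"].map(majority_race)
proxy_accuracy = (records["guessed_race"] == records["race"]).mean()

print(f"race recoverable from zip_code {proxy_accuracy:.0%} of the time")
if proxy_accuracy > 0.8:  # arbitrary cutoff for this sketch
    print("zip_code looks like an unfair proxy for race")
```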