Blog

Rashida Richardson: Oral Histories of Surveillance

I find the privacy frame, as the only approach to challenging intrusions by technology, has also helped put us in this position where we don’t have laws that actually grapple with the diversity of the people in our society and how technology affects them differently.
—Rashida Richardson

Our Data Bodies (ODB) is excited to present a series of oral histories from organizers, writers, and cultural workers whose work has been steeped in resisting discriminatory surveillance technology and racialized surveillance capitalism, and has illuminated strategies for abolition and abolitionist reform. This oral history features Rashida Richardson, a lawyer, researcher, and Director of Research Policy at the AI Now Institute in New York City.

 

Kim M Reynolds: Let’s start with the work that you do around how anti-discrimination and civil rights laws maintain racial hierarchy and how AI [artificial intelligence] is starting to do that. 

 

Rashida Richardson: The work that I’m doing looking at anti-discrimination laws and AI more generally—more specifically predictive analytics—came about because I felt like a lot of the conversations I was having in more technical communities presumed that anti-discrimination laws and civil rights laws would provide the proper forms of redress for the types of harms and biases we were seeing coming out of a lot of AI technologies. I have done a lot of research on race theory, so I was always critical [of the legal system] even though I’m a lawyer and use it as an instrument for change and redress. I understand its deficiencies and know it has also historically been the primary tool for creating and maintaining racial hierarchy and what we define as racial categories. Race is socially constructed, but it’s also concretized by the law. Who is white and what is whiteness has evolved over time, and it’s all based on either different forms of laws and regulations or interpretations of those laws. Law also has this function of what I like to call ‘social norming,’ in that it kind of creates standards by which we operate.

 

My argument is that AI, predictive analytics, and most data-driven technologies are replicating, reinforcing, concealing, and ultimately probably supplanting the role of the law in creating the normative standards of how we operate and how people interpret what are really social constructs. So, how is race defined in a predictive analytic system in order to either de-bias it or account for it, when race is a social construct? You may not actually know whether it’s based on the perception of the person doing the labelling or the self-identification of the person, and what happens when that is compounded over time?

 

But also, I’m building an argument that the problems with our anti-discrimination and civil rights laws are also being ingested into AI systems, because those systems are being built based on the parameters of what is thought to be legally permissible and what is not. When you don’t understand that the laws don’t fully capture all forms of discrimination and were never really created to defend [from] discrimination, then you’re essentially creating tools that conceal the forms of discrimination that are not the most explicit ones prohibited or at least regulated by law. So, one of the main concerns is what happens when a lot of our technologies now, either consumer-facing or those used in government, are built based on this false presumption about the law and what it is protecting and not protecting. And due to the black box effect and automation bias and all of that, how do you even redress a harm that you can’t see in a system that is supposed to be legally permissible but, in fact, is enabling discrimination?

 

So I have a long-term project to go through the history of US law and show how it created and maintained the caste system here, and then demonstrate how AI and predictive analytics are doing the same. I’ve written about predictive policing and how that perpetuates bias. Healthcare is another great example: there was research a few months ago on a healthcare algorithm that used old insurance records to determine who is most likely to need emergency care. And of course, the system ended up being racially biased because it was skewed by who has insurance and the money to pay, so fewer Black people were likely to be labelled as needing more urgent care. One of the main laws one could use, let’s say if that was a public hospital, is Title VI, which deals with institutions that receive federal funds and is an anti-discrimination provision, but a lot of our anti-discrimination laws require intent. And you can’t show intent if it’s just social injustice playing out in reality. So that’s one part of the flaws: if most models of reform and redress are based on intent and the most explicit forms of racism and bias, then you’re leaving out basically 75 percent of what actually happens. It goes unaddressed, but it’s also assumed that whatever is happening is just a natural consequence, that this is just how the world works, rather than actually seeing it as a form of perpetuating bias, and normalizing it in a lot of ways.
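To make that proxy problem concrete, here is a minimal sketch with entirely made-up numbers (the group labels, sample sizes, and “access” factor are hypothetical, not drawn from the study she mentions): a model that ranks patients by past healthcare spending instead of actual medical need will under-select the group that historically had less coverage and less money to spend, even when need is identical across groups.

```python
# Hypothetical illustration of using spending as a proxy for medical need.
import random

random.seed(0)

def simulate_patient(group):
    """Return (true_need, past_spending) for one synthetic patient."""
    true_need = random.gauss(50, 10)           # identical distribution of medical need in both groups
    access = 1.0 if group == "A" else 0.5      # group B historically had less coverage / money to spend
    past_spending = true_need * access + random.gauss(0, 5)
    return true_need, past_spending

patients = [("A", *simulate_patient("A")) for _ in range(5000)] + \
           [("B", *simulate_patient("B")) for _ in range(5000)]

# The "algorithm": flag the top 20% of patients by past spending for extra care.
patients.sort(key=lambda p: p[2], reverse=True)
flagged = patients[:2000]

share_b = sum(1 for group, _, _ in flagged if group == "B") / len(flagged)
print(f"Group B is 50% of patients but only {share_b:.0%} of those flagged for urgent care")
```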

 

Kim M Reynolds: Anti-discrimination laws really feel reformist in nature. It’s trying to kind of Band-Aid or account for something that has a foundational function, a foundational cause, and a foundational purpose that can’t necessarily be reformed, so it keeps coming up in all these ways.

 

Rashida Richardson: I realize I didn’t emphasize the normalizing function. I’ll use predictive policing as an example. If you’re working on the presumption that police data is objective or is an accurate reflection of reality, then it also serves to normalize what is discriminatory behavior or socially inequitable conditions in reality. If you then presume reality is okay and create a system that optimizes based on that version of reality, then it’s optimizing to discriminate and replicate the same problems.  
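As one way to picture what optimizing on that version of reality can look like, here is a minimal sketch with made-up numbers (the neighborhood names, rates, and patrol counts are hypothetical, not a real deployment): two neighborhoods with identical underlying crime rates but a skewed historical record, where a system that allocates patrols in proportion to past recorded incidents simply reproduces the original disparity year after year, because incidents are only recorded where officers are sent.

```python
# Hypothetical illustration of a patrol-allocation feedback loop on skewed data.
import random

random.seed(1)

TRUE_CRIME_RATE = {"north": 0.10, "south": 0.10}   # identical underlying rates in both neighborhoods
recorded = {"north": 100, "south": 30}             # but the historical record is already skewed

for year in range(1, 6):
    total = sum(recorded.values())
    # Allocate 1,000 patrols in proportion to past recorded incidents ("the data").
    patrols = {hood: round(1000 * recorded[hood] / total) for hood in recorded}
    for hood, n_patrols in patrols.items():
        # Incidents only enter the data where officers are present to record them.
        recorded[hood] += sum(1 for _ in range(n_patrols) if random.random() < TRUE_CRIME_RATE[hood])
    print(f"year {year}: patrol share north = {patrols['north'] / 1000:.0%}, recorded = {recorded}")
```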

 

Kim M Reynolds: I feel like that’s what’s the scariest part of all of this: The lack of understanding about dirty data and the functions of these technologies means that essentially all of the inequities we have in life become normalized.

 

Rashida Richardson: Mm hmm. There’s a lack of interrogation around these things.  

 

Kim M Reynolds: What are some examples of dirty data? Obviously, a lot of things are deeply affected by the uprisings, but what are the things that you’re seeing intersect with this moment?

 

Rashida Richardson: So, dirty data. I’ll list a few examples of how we see it. I talk about it a lot in the criminal justice system, where it can be described as misrepresentations, inaccuracies, and other errors in data. In law enforcement, the example I give is if there’s corruption or planted evidence or any other type of unlawful action on the part of government actors, and then you create a data set to represent that a crime actually did occur when it didn’t, that’s an example of dirty data. Stop-and-frisk is another example. The one that I think people understand more because of our current moment is the “Karens” and “Barbecue Becky” and stuff. False calls for service—if that call is considered to represent a data point that a crime did occur when, in fact, it’s just someone living and someone else using power to criminalize their behavior—also skew the data set to represent that more crime is happening when that’s actually not the case.

 

Another example is what is missing. There are times crimes go under-investigated due to a lack of trust with the community, and communities that have a higher occurrence of distrust of the police are less likely to report crimes or cooperate in an investigation. That’s missing data. There are also crimes that are not prioritized by police departments, like white-collar crime, which occurs more frequently than violent and property crimes combined. I’d have to look at the stats again, but it was combined when I wrote that.

 

It’s not just there, either. The US education system is terribly inequitable. If you’re using aggregate student data about either student performance, let’s say on standardized tests, or even school performance statewide to understand which schools are “succeeding and failing,” but not accounting for social factors like school finance issues or school segregation and other factors that affect not only individual and group student performance but also school performance, then that’s another way that data is skewed. Dirty data, I think, refers to decontextualized data that is used to represent reality or certain social conditions but [the data] are actually quite broad and are more likely to produce skewed outcomes in the way that they’re used in data analytics.

 

How does that relate to now and everything with the uprisings? One concern I have is that right after George Floyd and Breonna Taylor’s murders, I remember Commissioner Shea, who’s the head of the NYPD, making these statements like, “We can’t rely on stop-and-frisk and other practices, so we’re going to start using intelligence-based policing,” which is predictive policing and databases and all of the stuff they already use. That’s one of my major concerns, especially because I was getting a lot of press calls where people were like, “What do you think of the uprisings and blah, blah, blah?” And I’m like, “Well, I’m really concerned that a lot of police officials and government officials are using coded language to talk about implementing more data-driven technologies, many of which are flawed and racially biased.”

 

The concern is that we’re not doing structural and systemic analysis of what is wrong with these systems and looking at other or parallel parts of government. One example is the conversation around defunding the police. It is important to focus on the police, but it should—for those who are engaging in a real conversation about it—also include other parts of government. If you’re only looking at the police, but not at all the other government actors that are resistant or just in complete opposition to any form of criminal justice reform, then any efforts to create greater accountability or structural reform can be completely undermined by those people.

 

You also have parallel agencies like child welfare. Here in New York, I think 96% or 98% of the families that ACS—which is the child welfare agency here—deals with are Black and Latino families. That’s not demographically accurate; that’s not the demographics of families with issues. But because they see Black and Brown families as the problem and don’t focus on white families that have issues, or because white families are going to private therapists, their data is hidden or missed by government agencies, so they’re not targeted. What happens when we’re having conversations about the corridors of power in government but not looking at all of the different institutions within government that produce similar harms of disrupting communities and families, and many other things, too?

 

I know that conversation is different in different localities. I know in Minnesota they have been talking about surveillance technologies and the actions of other government agencies in their discussions about structural reform. But here, we’re trying to push for that too. It’s that coded language of we’re going to use “intelligence-based systems,” or we’re going to move to “data-driven systems” to try to fix our human biases. I think we need to be more critical about this type of semantic wordplay that is happening to avoid talking about what is really being used. But we also don’t want to be too hyper-focused on one specific objective and miss the larger needs or changes that need to be put in place for structural reform.

 

Kim M Reynolds: I think something that is also widespread that you’re hitting on is the re-justification of more policing as a response to racialized violence that is itself a complete result of the policing system we have. The assumption is that a different or more “sophisticated” kind of policing, because it involves some kind of predictive technological base, will therefore produce different results.

 

Rashida Richardson: I feel like the problem with government adoption of data-based technologies is that they’re adopted as the silver bullet solution, and then that cuts off the conversation from looking at alternatives, many of which probably don’t need to involve data or technology. It limits the scope of what’s possible. I feel like part of the problem is that technology development and government procurement of technology is often buying technology in search of a solution rather than thinking through an alternative vision. What do we actually want? What do you need to have a safe community? What do you need to have an equitable education system? What do you need to have a fair health care system? Then, what is the role of technology in advancing that? That should be the conversation, but that’s never the order.

 

The way things go, it’s always, “We know what it is.” I heard this term, “therapeutical,” where police believe they know what’s best for the community rather than the community itself. I think that goes to all parts of government, [where] the bureaucratic mindset is, “We know what’s best for the community.” When you go in with that mindset, it presumes you know what the problems are, and then it leads to these kinds of Band-Aid solutions, or solutions for a problem that’s not the problem. The example I always give is that we don’t know the full account of how many police departments even use predictive policing in the United States or globally, yet there’s only one pilot study that tried to use police data to target officers who are more likely to have an adverse interaction with a civilian, when we know police misconduct and abuse is a historical problem in this country. So part of the problem is also: who’s choosing which problems to solve? Most technology development, especially for government use, is really this vicious cycle of government funding and private development of solutions reinforcing the need for more tech solutions, which avoids discussion of any other alternative.

 

Kim M Reynolds: I’m really interested in how you came to publish “Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force.”

 

Rashida Richardson: I come from working in legal advocacy organizations before AI Now. I work on policy issues across a range of tech issues, and the law that created the New York City Automated Decision System Task Force was a law that I worked on. The first version of that law was, ‘release the source code for every algorithm in the city,’ and then it evolved into a task force. I led a diverse coalition: a lot of groups doing legal advocacy or privacy, tech, and surveillance work, and community members that I met along the way who were interested or whom I convinced needed to be interested in this. Then there were researchers who were working on relevant issues but hadn’t really engaged in policy advocacy.

 

I think there are three prongs to these types of groups. I’m overgeneralizing, but it’s important because the researchers may not know as much about the policy landscape or even the social domains, but they bring a certain legitimacy to policymakers, who tend to give more credence to people with PhDs. Community members can help share information about what is actually happening on the ground, but also ground us on what needs to be done or what is missing from the conversation. The rest of us—the lawyers—have been working on this stuff for a long time and can see the trajectory, where the problems are with different agencies, and how to think through things strategically.

 

For two years, we were just monitoring what was going on, and there really wasn’t much progress. I think we wrote three or four letters throughout that time. The first one was kind of like, “Hey, this is great. Here are some people you should consider,” and it was a list of researchers, community members, and community-based organizations to look at for picking people to be on the task force. Once it started, we wrote this letter with tons of recommendations, serving the city on a silver platter: “Here are the ideas to run with and where we think you should go right now,” plus three pages of recommendations of organizations and individuals with specific domain expertise, from education to child welfare to Medicaid, that they should reach out to. We sent that letter in August of 2018, and it was radio silence. Nothing happened with the task force for a while.

 

Then, they had public hearings and we did organizing to make sure people were there. At the first one, we sat and listened because we wanted to see what was going on. We didn’t feel like the city was making a real effort, so at the second hearing the full public speaking hour was filled with testimony from a wide range of groups, and we published that testimony. Then—again—nothing happened after that hearing. From the beginning, from our first meetings as an advocacy group, I was like, “Let’s just monitor and see how this goes, but as a last resort, we’ll do a scathing report at the end.” The city loves to do a revisionist history of, “We did a great job, we did all these things,” when we’re all like, “No, you didn’t, you just did a press release.” Around the summer of last year, after seeing no real community engagement, no real movement, and no real change in what was happening, I was like, “Okay, we have to do the shadow report.”

 

Traditionally, shadow reports are reports written by NGOs—civil society—usually after a process or after a government report. But we had to try to time it around the same time the city was going to release its report so that we were providing a direct counter-narrative to whatever they said they did, and also providing recommendations on what needs to be done, so that whatever recommendations they made would not be taken as best practices or what people should be doing, because that would set the bar so low. I knew from engaging in policy work nationally and globally that so many cities, governments, and countries were looking at this process to decide what they were going to do.

 

I basically kept notes for two and a half years, knowing this might be the outcome. That’s why the report is so well footnoted and detailed and includes a timeline of everything that happened. The majority of the report is recommendations—that was more of the collective effort. It’s an accumulation of research that happened over time, but I also consulted people, because I knew with a report of this size you have to meet people where they’re at. I would tag people in or have quick calls like, “What do you think the recommendation to the NYPD should be?” Then we’d draft something up, and all the groups I thought had expertise or something to say, I’d tag them in, and they’d give comments back or send me a message like, “I think this is great.” That was a way for everyone to engage at the capacity they had and also a way to get specific expertise in. The recommendations were also not just limited to the task force. It’s a lot of very wonky stuff like, “Here’s how to reform procurement laws,” or “Here is what the city council should do to provide oversight for these things,” and tons of agency-specific recommendations. We also provided recommendations on what advocacy should look like, because one thing that I noticed from our experience was… luckily, a lot of the groups and individuals that participated knew me, and that’s kind of how they got in. So I could… not control, but lead conversations and moderate in a way so that if there was a computer scientist who wanted to talk too much, there was a way to shut that down so that no one voice was dominating what we should be doing.

 

This effort worked because everyone trusted me and gave me that moderating role of being able to shut down or uplift other people, but in most settings there is a tendency to defer to academic credentials or to suggest that they have more value than lived experience in making recommendations and understanding these issues. We also made recommendations to community members and philanthropy around, “You guys need to start putting money aside.” We did a whole public education community event, but it was all of the orgs pooling a little extra money that made it work, and not all organizations are going to have the resources to do that. Public education is such a crucial part of this, and we’re not going to figure out what the right reforms are with just a bunch of lawyers and computer scientists in the room. We’re not going to be able to have inclusive and participatory engagement if there is this huge information asymmetry between community members and people with “academic credentials.” So it was also trying to see: what are the gaps that need to be filled? Who are the power players that have some power to change that? And what do they need to be told to kind of step up, so everyone can have a say in what’s happening moving forward?

 

Kim M Reynolds: How do you find other people with this urgency? The field can be incredibly dedicated to maintaining the status quo of doing innovation under the guise of ‘innovation development,’ when it really means so many other things. How do you find other people, and how are you able to visualize these things and make an ardent effort to do this? So many of these critiques get sidelined and never become part of the main conversation, even critiques framed in reformist ways. You’re talking about how, at the root, this thing is white supremacist and produces violence, so we can’t just continue to try and make it work in various ways.

 

Rashida Richardson: Some of how I got people involved is that, coming from legal advocacy organizations and working on a wide breadth of issues, I know people. I know groups or leaders in criminal justice reform or decarceration spaces, so I can be like, “Hey, I know you know about these issues, let me meet you halfway.” I’ve created a lot of resources and I do a lot of translating of technical things, because I don’t think you need to have a CS degree to criticize these systems or understand them. I think having a practical understanding of the relevant social domains is a lot more important. Part of it is I take on the work of doing some of that translation and/or build on relationships from prior work.

 

One example is the Atlantic Towers Tenants Association, who were one of the groups that signed on and collaborated on the shadow report. The relationship with them started because another group reached out to me and was like, “Hey, this group and their lawyers need a privacy training because their landlord is going to install Chime facial recognition and they know nothing about those systems.” I did a privacy training for the tenants and their lawyers, and in it I was just breaking down the problems with facial recognition. I helped the lawyers a bit with building their arguments for what they were doing. In doing that training and in subsequent calls, I was like, “Hey, there’s this task force effort happening in the city. I know your landlord is private, but the reality is these types of systems probably exist in public housing and we just don’t know about it—or they will. It’s important for people to hear from you,” especially because the majority of the tenants were Black and Brown women, and we know facial recognition systems don’t work for that demographic in particular. And they got it. They were very active and continue to be engaged in a lot of advocacy. That’s an example of how you have to meet people where they’re at and not make any assumptions. It’s also understanding my own privileges and valuing the experience and expertise of other people.

 

I think that’s part of it: realizing who’s in the room and who’s not in the room. This may just be because I’m a Black woman, so I’m so used to being the only person of color, or one of few, in a room and realizing how that really distorts things or leads to watered-down reform. So I know that there need to be more people like me, and even more people more extreme than me. You have to have a variety of voices to make sure you’re having a truly robust conversation about what’s at stake and what needs to happen.

 

There are so many reasons why these critiques don’t exist. I think part of the problem in the tech and especially the research space is the homogeneity of it. I feel like a lot of people don’t even recognize it—even though there are all these reports on how little diversity there is—I don’t think white people get it. I think to them, having two people of color in a room is, “We did it.” I think a lot of well-intentioned white people in this space subscribe to this ‘diversity of the mind’ kind of stuff: even if it is a mostly white room, we all have different experiences—I grew up poor in West Virginia, you grew up in Europe—and that counts as diversity because we have different viewpoints. But they don’t understand what is lost when you don’t have people in the room who are actually affected or who just have a different perspective based on life experience, versus the value of actual diversity. I think that’s part of why we don’t hear these critiques. If you’ve lived your entire life without having a negative interaction with any type of government agency, it’s hard to see why you need to be skeptical of government action in general, or of government intentions in buying technological solutions. A lot of these technologies are not necessarily being used to help advance social change or help the community; instead, they’re tools of social control and maintaining the status quo. You’re less likely to even understand that point of view if you’ve only had good experiences.

 

The other thing is more of a legally wonky critique. Part of the problem is that as long as the main critique or argument used to challenge surveillance technologies, AI, and predictive analytics is privacy, that frame is so limited. One, it is based on this individual rights logic that is such a U.S./Western society thing and doesn’t recognize that marginalized bodies don’t have the same privacy rights. I feel like Breonna Taylor’s murder helped illustrate that in so many ways. In the US, there are a lot of privacy rights embedded in the home, in that you’re able to shoot someone in your home because it’s a protected space. There are certain rights or processes that police should—in most cases—follow in entering the home versus interacting with someone on the street, all based around the idea that the home is a protected space where all of these private things happen, so you need to have more privacy protections there. Breonna Taylor’s murder shows that the home is not a protected space if it’s a Black home or if it’s a Black body in that home. I get kind of perturbed when people talk about facial recognition as this harm to privacy for all. No, facial recognition harms certain people in a specific way and may cause discomfort to the rest of people, and that’s because the way your body moves in public spaces is different if you’re part of a dominant majority versus a marginalized group. The types of harassment or lack of protection that you have when you’re in a marginalized body in a public space or private space is completely different. I find the privacy frame, as the only approach to challenging intrusions by technology, has also helped put us in this position where we don’t have laws that actually grapple with the diversity of the people in our society and how technology affects them differently.

 

It’s the same problem with data and the data regulation approach in Europe, too. If you presume data is owned by one individual or directly connected to an individual and not a community-based thing, then you’re less likely to see communal harms. That’s the same critique as with privacy: it’s looking at individual rights and individual harms, but not communal ones. I feel like that also contributes to some of the deficiencies and criticisms, especially in the legal field, in that there’s a very limited view of what laws should be used to challenge technology intrusions, but also a failure to understand that even though those laws are written as if we all have equal rights and equal protection, that is not how it works in reality. So what do you do when laws are not protecting people equally? This loops back to the flaws of law really being built around the ideas of property and individualism. The ideas of individualism, humaneness, protection of rights, and respect for rights are really only extended to those who are conceptualized within the imagination as “valuable” people.

 

Kim M Reynolds: Do you want to briefly mention or uplift any kind of work that’s really influenced you or that you feel has made some of your work possible? 

 

Rashida Richardson: I could list tons of books, but it’s more that I’ve had people throughout my life who have influenced my way of thinking, both in how I do structural analysis and in how I approach the work of respecting other people, where they’re at, and different forms of expertise. My parents were super politically active, and I went to protests from preschool. Then, in law school, the main professor I worked with—I sometimes call her my “law school Mom”—was really involved in civil rights movements, is Angela Davis’s best friend, and was her lawyer [when she was] involved in SNCC. I’ve been surrounded by people—specifically Black people—involved in movements and thinking critically about change and what liberation means my whole life.

 

That kind of influence, I think, is important for what I tend to focus on and also the level of empathy that I have in the work that I do. Throughout my career I’ve worked with different people. One of my first jobs in public interest was working on HIV criminalization in the South—if you think tech policy is hard, try getting HIV and sex worker laws, evil and racist laws, overturned in Louisiana and Tennessee. I was working with all types of people and learning a lot, like the fact that you don’t need a degree to do structural analysis. Being a homeless trans sex worker teaches you a lot of shit. I learned a lot from people like that. I think that also influenced my approach to advocacy. I’m a really quick learner and a really robust reader, so [it was] also learning what I’m good at and how I can be a bridge for people so they can speak on their own behalf. I don’t think that just because I’m a lawyer, and I do a lot of reading and thinking about things, I need to speak for people.

 

More scholarly work? Like I said, I’ve kind of been obsessed with critical race theory from a young age. I think I read some Derrick Bell book in high school but really didn’t understand it. I took a class my sophomore year in college that really helped me unpack and think through these ideas. Just starting so early with ideas from critical race theorists, I think, really helped me think critically, especially as a lawyer, about not believing in the mythology of law or tech. More recently, there are a lot of scholars that I appreciate for how they’re able to translate theory into concrete ideas in palatable ways. I love Ruha Benjamin. She’s great, and I met her early on and fangirled her, and she is very kind to me. There is this book I just started reading—so this is a new inspiration—by Brian Jefferson; he’s at the University of Illinois. I really like the way he writes. Pretty much he’s writing all the stuff I want to do.

 

Kimberle Crenshaw. Daniel Martinez HoSang—he’s an American studies professor at Yale who does a lot of work with Kimberle, actually, but more focused on racial formation. I like the way that he writes, but it also helps me think because he’s not focused necessarily on law or tech. I end up reading a lot of non-tech stuff to think through things because it’s like a social-political class. Charles Mills, Cedric Robinson—I’ve been reading a lot more philosophy just because I think it’s important for unpacking these ideas. Oh, Simone Browne, she’s great. I fangirled over her, too. I’m just such a nerd; I’m like, “I love what you wrote,” to people. Oh, and Stuart Hall. I love a lot of history, sociology, and philosophy because it helps me ground a lot of my thinking about data and tech. There are more technical authors, or people who write explicitly about tech and data, who are also helpful. I love Khiara Bridges. She’s at Berkeley, and I love everything she writes. That’s “intellectual goals,” because the volume at which she writes is insane, but it’s so detailed and she does the work and it’s rigorous, and I’m just like—gotta get there one day. Isabel Wilkerson—gotta read her book [Caste: The Origins of Our Discontents].

 

Kim M Reynolds: What are the things that you’re looking forward to? What kind of impact are you hoping to make with generating this kind of knowledge?

 

Rashida Richardson: Right now, the research I’m working on that’s done and about to be submitted [to law reviews] is a definition for automated decision systems that I workshopped with a bunch of different people earlier this year. The last part coming out of the shadow report is that they wasted so much time because they couldn’t land on a definition for “automated decision systems.” It’s such a political thing, what you choose to include or exclude and what you choose to emphasize or not. So this paper offers two definitions that should be used for legislative and regulatory purposes, but it also goes through impact, rather than the technical capabilities of the system, and explains why that’s more important. It also looks at how the systems are used and provides case studies to show not only why it’s important to look at the history and social context of how a system is used to really understand its impact and how it’s affecting governance as a process, but also to show how it fits the definition. My hope is to share this so that they adopt my definitions, and to get other localities or jurisdictions that are looking at these issues to look at this definition.

 

The second paper is creating a taxonomy for thinking through databases that should be regulated like automated decision systems. We’re going to be emphasizing gang databases, because there’s this tendency, especially within government, to just think of a database as a database—like, it’s just storing data, we’re not doing all these other things with it, or we’re not using it to inform decisions—when in fact it is. A gang database designation can affect where you’re placed if you’re arrested and put in jail. It can lead to higher charges by a prosecutor. It can affect decision-making all throughout the criminal justice system, and it’s such a subjective and amorphous category that’s related to history and social context. So how should we be thinking about databases that are essentially functioning as social profiling technologies? How do you distinguish them from things that really are just a data warehouse? I feel like the problem in government and in public discourse is this tendency to only focus on the most obvious and egregious example. So much conversation is focused on facial recognition, and yeah, but these tech companies are creating all these other technologies. We don’t necessarily want to commend them for doing a cosmetic fix-up, like not selling one problematic technology for a year, when their entire R&D pipeline may be fueling even worse things.

 

In government, they’re like, “Let’s create risk categories,” so we’ll focus on law enforcement but not look at our public benefits systems, where algorithms are essentially cutting off care and access to resources for entire communities, even leading to death. How do we get people in government and in the community to take a step back and look at the entire infrastructure and who it is impacting differently? Why is that, and how is it derived from policy? What is the relationship between all of this? So I guess, in short, I’m just trying to expand how people look at the problems so that we don’t fall into this trap that I definitely experienced working in criminal justice reform of trying to just tackle the low-hanging fruit and be like, “We’ll get to the other stuff later.” That leads to us being, two hundred years later, in the same position we were in before. I feel like part of the problem I’ve experienced in dealing with criminal justice tech is that if you don’t really see a problem with the criminal justice system, it’s harder to see the larger problem outside of it or connect the dots.

 

What I want to do next year is start to try to convene another little group, or mixture of groups, of lawyers, some journalists and media and PR people—people who I think have developed really good theory and ideas in terms of talking through a lot of the disparities that exist as a result of technological mediation—and community members and advocates, so we can start to have a conversation about how we translate these really good ideas and concepts that mostly exist in academia into talking points that everyone can understand, because that’s what’s necessary for us to do real structural reform.
