To pinpoint when and where future crimes will occur, law enforcement agencies from Amsterdam to Alabama are turning to predictive policing.

However, the technology has attracted significant criticism, with critics citing biases inherent in its algorithms and alleging that its use contributes to the over-policing of marginalized communities.

Now, following a number of high-profile cases of police violence, including the 2020 murder of George Floyd by Minneapolis police officer Derek Chauvin, the conversation is turning to how the same methods can be used to combat police brutality.

Enter Future Wake, an interactive website that has received the Mozilla 2021 Creative Media Award. The project uses artificial intelligence to analyze data on fatal police encounters in the U.S. and predict future incidents. It then creates computer-generated avatars to tell the stories of each composite victim.

We sat down with two of its creators — Oz, based in New York, and Tim in the Netherlands — to talk about the motivations behind their work. Both asked to be referred to by their given names only.

This conversation was edited for length and clarity.

Tell us about Future Wake. What was your goal with this project?

Oz: Future Wake focuses on using the principles of predictive policing to predict when the next fatal encounter with the police will occur. The tactics of predictive policing and the way it’s implemented are relatively unknown. Most of the time, whenever we mention it, people bring up “Minority Report.” They only have a fictionalized understanding of the technology. A lot of people don’t realize that it’s actually in their own cities. So, in order to bring attention to it, we thought about just flipping its application.

When someone enters the website, what do they see?

Tim: At first, you see a warning. We are very much aware of the trauma of people who’ve lived through police violence. You don’t see any data immediately. You see the five faces. I thought it was the most important thing to show that we’re talking about humans here, not numbers.

Oz: Each face is a computer-generated image of the next victim from one of the five most populous cities in the U.S. — Chicago, Houston, Los Angeles, New York and Phoenix. Below each victim, you see a countdown that refers to the moment we predict that they’re going to die. The people are animated to bring this awareness that they are still alive. Once the countdown ends, that will be the end of their lives. We want to breathe life into the people we predicted. 

Tim: When you click on a person, you enter their space. We predicted the location of the fatal encounter with police. We have a Google Street View in the background, and it’s like you’re having a call with them. Then this person tells the story of their own demise.

The project has two elements. You ran data about fatal encounters with police in the U.S. through predictive algorithms to determine the details of future victims. Then you used deepfake technology to create avatars that represent them. Why did you feel like you needed both?

Oz: Our data set runs from 2000 to the present. We wanted to highlight these recurring patterns of police brutality in the U.S. and to home in on the fact that a prediction still has consequences for a human being.

Tell us about the databases you worked with. 

Oz: We used two main ones, called Fatal Encounters and Mapping Police Violence. These are citizen-initiated projects that try to capture every fatal encounter with police officers in the U.S. We did try to find police databases. The FBI has one that it started in 2018, but it mainly relies on self-reported data from police agencies. It’s actually under-representative of what’s going on. We put that data through algorithms to predict who — meaning the gender and ethnicity of the victim — and where and when the next fatal encounter would occur.
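
For readers curious what that aggregation step might look like in practice, here is a minimal sketch in Python using pandas. The file names and column names are hypothetical placeholders, not the project’s actual schema, and the code illustrates the general technique rather than Future Wake’s own pipeline.

    # Combine the two citizen-maintained datasets and keep the fields needed
    # to model who, where and when. All file and column names are hypothetical.
    import pandas as pd

    fatal_encounters = pd.read_csv("fatal_encounters.csv")
    mapping_police_violence = pd.read_csv("mapping_police_violence.csv")

    columns = ["date", "city", "state", "gender", "race", "latitude", "longitude"]
    combined = pd.concat(
        [fatal_encounters[columns], mapping_police_violence[columns]],
        ignore_index=True,
    ).drop_duplicates()

    combined["date"] = pd.to_datetime(combined["date"])
    combined = combined[combined["date"] >= "2000-01-01"]  # data set starts in 2000

    # Per-city, per-demographic counts feed the "who" prediction; dates and
    # coordinates feed the "when" and "where" models.
    who = combined.groupby(["city", "gender", "race"]).size()
    print(who.sort_values(ascending=False).head())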

Each potential victim has a backstory that describes the circumstances in which they were killed. When you click on their face, they tell their story. You used AI text-generation software to do this, right?

Oz: In the databases we used, there were two or three sentences detailing if it was a car chase, if somebody was wielding a weapon or how they were shot. We used an algorithm called GPT-2 to learn the aesthetic of all of these media reports. GPT-2 would then generate future police-related media reports. We then edited the text slightly, to put it in the future tense and the first person.
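
The generation step Oz describes could be sketched roughly as follows, using the open-source Hugging Face transformers library with the base GPT-2 checkpoint. The fine-tuning pass on the incident descriptions is omitted and the prompt is invented, so this illustrates the technique rather than Future Wake’s actual code.

    # Sample continuations from GPT-2. In the approach described above, the
    # model would first be fine-tuned on the short incident descriptions; here
    # we use the off-the-shelf "gpt2" checkpoint with an invented prompt.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "The victim will be involved in a traffic stop when"
    samples = generator(prompt, max_length=60, num_return_sequences=3, do_sample=True)

    for sample in samples:
        print(sample["generated_text"])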

[Infographic by Coda Story. Source: aggregate of Fatal Encounters and Mapping Police Violence data.]

Why did you use the future tense?

Oz: I see more talk about the horrific events that happened to victims of police brutality after the fact. Occasionally, there are little bubbles of conversation about how we can prevent this in the future. By predicting future victims and showing that this is an ongoing issue, we’re asking, “How can we protect this person from being a future victim?”

Let’s talk about the countdown clock…

Oz: In traditional predictive policing, they use spatial-temporal models to predict where and when a crime will occur. We replicate that. We want to say that this person is going to die at this specific moment. I used a time series algorithm to model recurring police-related fatal encounters and estimate the day on which someone would die. The clock is supposed to generate a sense of urgency.
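
As a rough stand-in for the temporal side of that idea (not the project’s actual model), one can take the dates of past incidents for a single city and demographic, average the recent gaps between them, and project a next date. The dates below are invented for illustration.

    # Naive inter-arrival forecast: average the gaps between recent incidents
    # and add that gap to the most recent date. A real time series model would
    # be more sophisticated; this only illustrates the shape of the prediction.
    import pandas as pd

    dates = pd.to_datetime([
        "2021-01-04", "2021-02-10", "2021-03-18", "2021-04-20", "2021-05-29",
    ])

    gaps = pd.Series(dates).diff().dropna()   # time between consecutive incidents
    expected_gap = gaps.tail(4).mean()        # naive forecast: mean of recent gaps
    predicted_date = dates[-1] + expected_gap

    print(f"Average gap: {expected_gap.days} days, next predicted date: {predicted_date.date()}")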

[Infographic by Coda Story. Source: aggregate of Fatal Encounters and Mapping Police Violence data.]

Was there anything that surprised you in the data?

Oz: We looked at the average time between each fatal encounter for each city and for each demographic. It was pretty creepy. Based on the data that we had, in Chicago, Black males had the shortest time in between each incident. It was an average of 34 days. It was quite shocking. Minorities are overrepresented in the database, but I was still surprised by the fact that everyone is represented.
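
That figure is a simple summary statistic once the data is aggregated. A sketch, continuing from the hypothetical combined dataframe in the first example (the filter values are illustrative labels, not the datasets’ actual category names):

    # Average time between incidents for one city and demographic group.
    # The result depends entirely on the data loaded; this does not reproduce
    # the 34-day figure cited above.
    subset = combined[
        (combined["city"] == "Chicago")
        & (combined["gender"] == "Male")
        & (combined["race"] == "Black")
    ].sort_values("date")

    average_gap = subset["date"].diff().mean()
    print(f"Average days between incidents: {average_gap.days}")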