In 1995, I was working as a software architect at one of the largest software development companies in America. One of my most memorable projects was building software for the Los Angeles County Sheriff's Department (LASD), the largest sheriff's department in the United States, with approximately 18,000 employees.
I was assigned to a rapid task force that specialized in rescuing projects in danger of failing. Ours were always the hardest projects to take on; no other team in the company wanted to come near them. They were typically over budget and years behind schedule, with demoralized, burned-out staff. Angry customers came as a default, demanding something be done before they brought a lawsuit against my company.
As soon as I arrived, I found that the project was, without a doubt, the worst I had ever seen. By that point in my career I had seen many disastrous projects, but this one took the cake. It was five years behind schedule and millions over budget. The existing code was completely useless and in need of a major redo, despite the fact that it had already been redone three times. I was working on the fourth attempt to get the project completed.
In the Beginning
The project, called the Personal Performance Index, or PPI, was a tracking system for collecting and recording data about deputies’ use of force against citizens, administrative complaints against deputies, complaints from the public, and other similar information. It was intended to give management early warning of possible problems so that the sheriff’s department could manage risk and reduce liability.
Things got off to a really sketchy start. The Sheriff's Department purposely assigned its worst staff to the PPI project because it wanted the project to fail. The office they assigned us was a poorly lit, musty room where four other engineers and I worked hunched over in the dark.
Deputies were so opposed to the development of the PPI that I regularly got yelled at as I walked into the building each morning, and followed by police cars on my way home after work. One day, walking into the lobby of the Sheriff’s Department, I ran smack into one hundred deputies gathered in protest, wearing t-shirts featuring an image of a STOP sign with “PPI” written inside.
Friends and professional peers told me that staying on this project would be terrible for my career. I was repeatedly told I was making a Career Limiting Move, or CLM. The way I saw it, I had a job to do. I just wanted to get the work done and not be stuck there for years; I had a nice, sunny office back in Los Angeles that I was eager to return to.
My first step was a quick review of the system. To my surprise, I found that both the UX and the database were actually in good shape. The main culprit lay in the early warning report subsystem, known as the Report Engine. Since the PPI's main goal was to detect personnel problems before things got out of hand, it was no surprise that the Report Engine was the largest part of the software.
The Report Engine was an early, crude rules-based AI system. Unlike modern unsupervised AI systems that can learn on their own, this one required us to take all of the rules in the book and feed them into the computer one at a time. It was a jumble of spaghetti code: unstructured, redundant, and authored by many different programmers.
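To make the idea concrete, here is a minimal sketch of what a rules-based early warning check can look like. The record fields, rule names, and thresholds below are hypothetical illustrations, not the actual LASD rules.

```python
# A minimal sketch of a rules-based early warning check.
# All fields, rule names, and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class DeputyRecord:
    deputy_id: str
    use_of_force_count: int = 0
    public_complaints: int = 0
    admin_complaints: int = 0


# Each "rule" is a label plus a predicate; the real system encoded
# many such rules by hand, one at a time.
RULES = [
    ("frequent use of force", lambda r: r.use_of_force_count >= 3),
    ("repeated public complaints", lambda r: r.public_complaints >= 5),
    ("open administrative complaints", lambda r: r.admin_complaints >= 2),
]


def evaluate(record: DeputyRecord) -> list[str]:
    """Return the labels of every rule this record trips."""
    return [label for label, rule in RULES if rule(record)]


if __name__ == "__main__":
    sample = DeputyRecord("A-1024", use_of_force_count=4, public_complaints=1)
    print(evaluate(sample))  # -> ['frequent use of force']
```

The point of the sketch is that nothing learns on its own: every flag the engine can raise has to be written out explicitly as a rule.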
Because the project had already failed so many times before I arrived, I knew I had to do something other than immediately start rewriting the Report Engine. As an outsider coming in, my goal was to understand the user community. So I began by going around and asking questions to make sure I understood why the project was created in the first place. I made it clear that I wasn't Internal Affairs and that I wasn't with administration. I presented my ask more like, "I'm an immigrant coming into your world; I'm here to understand you." I just wanted to get information. I figured the more information I got, the faster I could fix the problem.
I quickly learned that the PPI was a politically charged project. Following four controversial police shootings, which happened shortly after the Rodney King beating, the Los Angeles County Board of Supervisors mandated that the LASD build a database system to identify which officers were likely to use more force than necessary. The idea was that once these officers were identified, the department would intervene with training or counseling.
However, deputies, worried about being punished for unfounded complaints, were deeply suspicious of the PPI. The 911 dispatchers who were mandated to track police misconduct hated the project too, since it forced them to sell out the police. They told me they didn't think it should be their duty to act as judges in such a difficult matter.
The captains I spoke with made it clear that they didn't want a witch hunt. Their hope was for an early warning system that would allow them to be proactive and educate deputies whose behavior the system flagged, rather than punish them. They wanted to give deputies a chance to turn bad behavior around before it became habit.
Lesson Learned
I incorporated this feedback into my work as I dove into the code. In the end, we built the system as intended for legal purposes, with reporting as the main feature. However, we removed officers' identifying information and replaced it with anonymous IDs, preserving anonymity. This way, we could limit the impact on any officer who had made a single small mistake, while still providing the reporting the Internal Affairs Department needed to see problems at a high level. Commanders and higher managers had access to different parts of the system at different security levels, so they could look up incidents involving particular officers when it was called for, or when a certain threshold was triggered. As a result, the system was able to detect troubled deputies and educate them well before things got out of hand. Within one year, the project was completed and fully operational in the field.
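A minimal sketch of that anonymization-plus-threshold idea follows. The keyed hash, the flag threshold, and the clearance levels are all assumptions made for illustration; they are not the real PPI design.

```python
# Sketch: reports are keyed by an anonymous ID, and a real identity is only
# resolved once a flag threshold is crossed and the requester's clearance
# allows it. Names, thresholds, and clearance levels are illustrative only.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"   # hypothetical keyed-hash secret
FLAG_THRESHOLD = 3                           # hypothetical intervention threshold


def anonymize(officer_id: str) -> str:
    """Derive a stable anonymous ID so reports can be aggregated without names."""
    return hmac.new(SECRET_KEY, officer_id.encode(), hashlib.sha256).hexdigest()[:12]


def can_view_identity(flag_count: int, clearance: str) -> bool:
    """Identity is revealed only to high-clearance users once the threshold trips."""
    return clearance == "commander" and flag_count >= FLAG_THRESHOLD


if __name__ == "__main__":
    anon = anonymize("badge-4711")
    print(anon)                                # stable pseudonym used in reports
    print(can_view_identity(2, "commander"))   # False: below threshold
    print(can_view_identity(4, "commander"))   # True: threshold triggered
```

The design choice is the same one described above: day-to-day reporting runs on pseudonyms, and the mapping back to a person is gated by both a threshold and an access level.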
As a result of the PPI, the rates of officer-involved shootings, use-of-force incidents, and personnel complaints within the LASD all dropped significantly. Soon after it became operational in 1997, the PPI was widely regarded as the best early warning system in the country. The 2018 book The New World of Police Accountability, by Samuel E. Walker and Carol A. Archbold, said it led to “substantial improvements in police performance, with enormous implications for police-community relations.”
The PPI Report Engine performed data mining with a crude early form of AI known as a knowledge-based system. In other words, it was AI early in the game. It has since proven to have had a lot of potential, considering how slow computers were at the time and how expensive data storage was. Years later, the system was expanded with new technology, and it is still in production today.
Focus on Why Before Outcome
What I learned from my experience working on the PPI is that it is important to understand the why of a project before worrying about the outcome. Without first understanding the why, you probably won't achieve the desired outcome, because you won't understand the problem correctly, and therefore won't be able to solve the real problem.
When AI is brought into the mix, these lessons about humans are only more relevant. As the Black Lives Matter movement has come to the forefront of so many people's minds this summer, the time is right to think more deeply about how AI can be used in police departments across the country. AI can identify officers who are more likely to use more force than necessary, so that interventions can be put in place. AI can save lives.
At the same time, we must not overlook the deeply and uniquely human struggles behind certain problems. With social media and everything becoming more and more digitized, many of us have forgotten how to connect. That's a problem. We use technology so heavily that we forget how to talk to each other like humans. Many misunderstandings can take place as a result, at large and small scales, between departments, generations, and races.
This experience reminded me to dig deeper at the human level. There are always humans behind the problem, and things like diversity, social and economic factors need to be taken into account when trying to create solutions. We need to be able to sit down and talk. If you just have a human conversation, people will tell you what's going on.