Os Keyes on the Inevitability of Racial Bias in Algorithms

Oct 15, 2020

2 min read

New academic studies show the racism baked into popular consumer apps.

Corporations are taking a stand against racism, including Uber, a self-described “anti-racist company,” according to Chief Executive Officer Dara Khosrowshahi. But a George Washington University study released this summer reveals that the dynamic pricing algorithms of ride-share apps continue to reinforce the very racial biases these companies claim to oppose.

In Chicago, where fare price disclosure is the law, Lyft and Uber riders paid more for drop-offs in neighborhoods with largely non-white populations. And we’re not talking about a small sample size: researchers Aylin Caliskan and Akshat Pandey looked at 100 million rides.

Though there’s no reason to suspect that either Lyft or Uber engaged in intentional racial profiling, the study — which Caliskan said is currently embargoed and undergoing revisions — indicates the limits of the language of anonymity. On Uber’s website, a privacy disclaimer within its legal resources reads as follows:

Uber uses your personal data in an anonymised and aggregated form to closely monitor which features of the Service are used most, to analyze usage patterns and to determine where we should offer or focus our Service. We may share this information with third parties for industry analysis and statistics.

Is it possible for data to be both aggregated and anonymized, or do the structures and social patterns of an unequal society necessarily bake racial identity into data sets? We put the question to Os Keyes, a University of Washington researcher and PhD student who has spent the last five years working with scientists from organizations like Microsoft and the Chan Zuckerberg Initiative to determine how racial and gender biases shape algorithms and reinforce discrimination.

The Filament: Do algorithms create, reinforce, or surface racial disparities? There’s certainly one reading of this data that suggests these pricing algorithms uncovered racism within a system. But companies don’t tend to think of their algorithms in terms of discovery. They focus, predictably, on efficiencies and margins.

Os Keyes: People either have a really simplistic understanding of bias and injustice or they assume the problem is the data. Look at the criminal justice system. You can still get biased outcomes even if the data is “appropriately sampled” because of the racialized nature of the criminal justice system itself. Deploy a perfectly neutral algorithm for calculating sentencing lengths and it will be disproportionately punitive toward arrestees of color.

When we just look at the data, we treat algorithms as working in isolation and ignore the context. It doesn’t matter if the algorithm doesn’t see race when calculating the lengths of drug sentences if the police only ever stop black people. Disproportionate charges against racial minorities aren’t immediately linkable to forced segregation or redlining, but they represent the same sort of force.

The Filament: Is it reasonable to expect a company to build out anti-racist algorithms? It feels like that breaks down into two questions. The first is whether it’s possible. The second is whether it’s plausible.

Os Keyes: The only plausible structural solutions are transparency and accountability. When people say transparency, they generally mean “make the data available and then everything will fix itself.” I sincerely doubt addressing racial bias in their algorithm is something Uber is ever going to do. They’re going to say they “want to treat customers equally.”

There needs to be institutional and regulatory intervention. It needs to be a model of regulation that is actively working against the problem.

The Filament: In the absence of governmental intervention, which is unlikely in the short term, how can internal stakeholders at algorithm-centric companies argue for change?

Os Keyes: This is a matter of equity. When we try to eliminate biases entirely, we fail; that’s never going to happen. We want systems that not only don’t actively contribute to racial bias and disparities but actually reduce them. Algorithms need to be biased in a different way. In truth, all algorithms are biased; the least we can do is ensure some are biased in a way that benefits rather than harms those who are most vulnerable.