Why it’s too early to use facial recognition for law enforcement in Africa

This article was contributed to TechCabal by Chiagoziem Onyekwena. Chiagoziem is a solutions architect currently working in AI. He’s also the author of get.Africa, a weekly newsletter on African tech.

Racing towards global dominance

American and Chinese companies are currently racing for global dominance in artificial intelligence (AI), and analysts believe that Africa will play some role in determining the winner.

If those predictions are accurate, then American companies are currently lagging behind their Chinese counterparts. The US has been slow to explore Africa’s AI potential. Some exciting developments have popped up, such as Google opening an AI lab in Ghana and IBM Research opening offices in Kenya and South Africa. However, projects of this scale have been few and far between, and the dearth of large-scale AI projects has handed Chinese companies a significant advantage.

In 2015, Chinese tech company Huawei developed Safe City, its flagship public safety solution. Safe City provides local authorities with law enforcement tools such as video AI and facial recognition; it’s essentially Big Brother-as-a-service. Since its launch, Safe City has expanded rapidly: according to the Center for Strategic and International Studies (CSIS), as of 2019 there were 12 different Safe City programs across the continent. A few of them have reportedly been successful. For example, Huawei claims its deployment in Nairobi, Kenya, led to a 46% drop in the city’s crime rate in 2015.

However, Safe City does have its critics; not everyone is impressed. Some have expressed concerns about state surveillance, privacy, and digital authoritarianism. Additionally, there isn’t enough documentation of the actual efficacy of Safe City and similar surveillance solutions operating in Africa today. Part of the reason is that there isn’t much to document: one key difference between the AI communities in the US and China is that, unlike their American counterparts, neither the Chinese government nor Chinese companies are transparent about facial recognition error rates. That’s certainly a cause for concern.

A history of cross-race identification bias

In January 2020, Robert Julian-Borchak Williams was arrested in Michigan, USA, for a crime he didn’t commit. When he got a call from the Detroit Police Department inviting him to the station for questioning, Williams thought it was a prank. But what he didn’t know was that he was about to earn an unenviable place in the history of facial recognition-enabled law enforcement. 

In 2018, timepieces worth $3,800 were reportedly stolen from Shinola, an upscale boutique in Detroit. The perpetrator was captured on grainy surveillance footage. He was a portly man, apparently Black, just like Williams. Police officers arrested Williams because they believed he was the person in the images. When asked point-blank if he was, Williams replied, “No, this is not me. You think all Black men look alike?”

What Williams was referring to is cross-race identification bias, which occurs when individuals of one race struggle to distinguish the facial features of individuals of another race. The bias is not unique to any one race, but in America it typically affects minorities. A 2017 study by the National Registry of Exonerations found that most innocent defendants exonerated in the 28 years prior to the study were African Americans. It also found that a major driver of those wrongful convictions was eyewitness misidentification in cross-racial crimes.

Unfortunately, some of the same racial biases that have afflicted law enforcement over the years have made their way into facial recognition technology, and Robert Julian-Borchak Williams was the first recorded victim. The cause this time wasn’t cross-race identification bias but a faulty system that had matched images of the shoplifter to the photo on Williams’s driver’s license. After it became clear that this was a case of mistaken identity, Williams was released back to his family.

Technology inherits racial bias in law enforcement

Facial recognition technology (FRT) has been around since the mid-’60s, and many recognize American computer scientist Woodrow Wilson Bledsoe as its father. The use cases for early versions of the technology were narrow, but advancements in machine learning have accelerated its adoption in numerous fields, including law enforcement.

However, facial recognition remains an imperfect technology. As recently as 2019, US government studies found that even top-performing facial recognition systems misidentified Black people at rates five to ten times higher than white people. Findings like these, coupled with the tension between the African-American community and the police after the killing of George Floyd in 2020, forced Western tech companies such as IBM, Microsoft, and Amazon to pause their facial recognition work for law enforcement. Despite advances in the field, the margin of error when identifying Black faces was still far too high. In Western countries, where Black people are a minority, these biases significantly degrade the quality of facial recognition-assisted law enforcement; in Africa, a continent where Black people make up 60-70% of the population, the potential for harm is even greater.

Tackling biases in AI systems

Biases can creep into AI systems in different ways; the most common route is the training data. AI algorithms learn to make decisions from training data, which often reflects historical or social inequities. For example, just like humans, facial recognition algorithms struggle with cross-race identification. In one experiment comparing Western and East Asian algorithms, the Western algorithms recognized Caucasian faces more accurately than East Asian faces, while the East Asian algorithms recognized East Asian faces more accurately than Caucasian ones.

Facial recognition algorithms also rely on large amounts of data to make accurate decisions, and the most accessible place to “harmlessly” harvest large quantities of photos of faces is the web. But because Black people are among the smaller contributors to the global internet economy, they are significantly underrepresented online. This underrepresentation in training datasets leads to comparatively higher error rates in facial recognition systems.
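To make that mechanism concrete, here is a minimal sketch, in Python, of the kind of per-group audit that exposes such a disparity. Everything in it is an illustrative assumption rather than a measurement from any real system: the “face embeddings” are simulated, and underrepresentation in training data is modeled simply as noisier embeddings for one group. The point is that the very same match threshold then produces very different error rates for the two groups.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between corresponding rows of a and b."""
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

def genuine_pair_scores(n_pairs, capture_noise):
    """Similarity scores for pairs of images of the *same* person.

    Each identity is a random 128-d "face embedding"; the two captures
    are noisy copies of it. Higher capture_noise is a stand-in for a
    model that learned weaker features for a group it rarely saw in
    training -- an assumption made purely for illustration.
    """
    identity = rng.normal(size=(n_pairs, 128))
    capture_a = identity + rng.normal(scale=capture_noise, size=(n_pairs, 128))
    capture_b = identity + rng.normal(scale=capture_noise, size=(n_pairs, 128))
    return cosine(capture_a, capture_b)

THRESHOLD = 0.85  # below this score, the system declares "different people"

# The group labels and noise levels are hypothetical.
for group, noise in [("well-represented", 0.30), ("under-represented", 0.50)]:
    scores = genuine_pair_scores(10_000, noise)
    false_non_match_rate = np.mean(scores < THRESHOLD)
    print(f"{group:>17}: FNMR at threshold {THRESHOLD} = {false_non_match_rate:.1%}")
```

In a real audit, the scores would come from the facial recognition model under test and labeled evaluation photos, but the output would look the same: error rates broken down by demographic group. That per-group reporting is precisely the kind of transparency this article argues vendors operating in Africa should be required to provide.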

Another contributing factor to the high error rates is a phenomenon I like to call the ‘Black photogenicity deficit’. Photographic technology is optimized for lighter skin tones, and the digital photography we use today is built on the same principles that shaped early film photography. It then follows that AI systems have difficulty recognizing Black faces simply because modern photography wasn’t designed with the facial features of Black people in mind.

Given these biases, it’s hard to imagine that the error rates of Chinese AI systems would be radically different from those of US systems. Yet the efficacy of their solutions is treated as a non-topic: Chinese AI companies operating on the continent face no pressure to disclose their error rates or pause their surveillance programs. Instead, they press on with facial recognition-aided law enforcement on a continent where the technology is more likely to result in wrongful arrests and convictions than anywhere else. This is not reassuring, and it makes you wonder how many Robert Julian-Borchak Williamses there were in Africa before there was a Robert Julian-Borchak Williams in the United States.
