Countries must do more to combat racial profiling, UN rights experts said yesterday, warning that artificial intelligence programmes like facial recognition and predictive policing risked reinforcing the harmful practice.
Racial profiling is not new, but technologies once seen as tools for bringing greater objectivity and fairness to policing appear, in many places, to be making the problem worse.
"There is a great risk that (AI technologies will) reproduce and reinforce biases and aggravate or lead to discriminatory practices," Jamaican human rights expert Verene Shepherd told AFP.
She is one of the 18 independent experts who make up the UN Committee on the Elimination of Racial Discrimination (CERD), which yesterday published guidance on how countries worldwide should work to end racial profiling by law enforcement.
The committee, which monitors compliance by 182 signatory countries to the International Convention on the Elimination of All Forms of Racial Discrimination, raised particular concern over the use of AI algorithms for so-called "predictive policing" and "risk assessment".
Such systems have been touted as helping to make better use of limited police budgets, but research suggests they can increase deployments to communities that have already been identified, rightly or wrongly, as high-crime zones.
"Historical arrest data about a neighbourhood may reflect racially biased policing practices," Shepherd warned.
"Such data will deepen the risk of over-policing in the same neighbourhood, which in turn may lead to more arrests, creating a dangerous feedback loop."
When artificial intelligence and algorithms draw on biased historical data, their profiling predictions will reflect those same biases. "Bad data in, bad results out," Shepherd said.