A myriad of technological challenges have raised roadblocks to public acceptance of autonomous vehicles, but who could have guessed that one of them would turn out to be the perceived racism of pedestrian detection systems?
Researchers Benjamin Wilson, Judy Hoffman and Jamie Morgenstern of the Georgia Institute of Technology say people with darker skin are more likely to be hit by autonomous vehicles than people with lighter skin.
According to their findings, object detection systems of the kind used in autonomous vehicles performed uniformly worse at detecting pedestrians with darker skin. Seeking possible explanations for this result, the researchers found that neither time of day nor occlusion (the pedestrian being partially blocked from the camera's view, for example by another object) explained the disparity.
The study used what is called the Fitzpatrick skin type scale, which classifies skin according to physical attributes such as hair and eye color, along with a person's propensity to freckle, burn, or tan when exposed to UV light. The researchers assigned the scale's six categories to one of two groups: one for lighter and one for darker skin tones.
In the Georgia Tech study, pedestrian detection in road scenes was tested for both groups and showed a lower detection rate for the darker-skinned group: on average, the systems were 5% less accurate at detecting people with darker skin.
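To make the comparison concrete, here is a minimal, hypothetical sketch of how per-group detection rates might be computed. It is not the Georgia Tech authors' actual code or data format: the `Annotation` structure, the field names, and the assumption that Fitzpatrick types I–III count as "lighter" and IV–VI as "darker" are all illustrative assumptions.

```python
# Hypothetical sketch: comparing pedestrian detection rates across skin-tone groups.
# The data layout and the I-III / IV-VI grouping are assumptions for illustration,
# not the study's published methodology or code.

from dataclasses import dataclass


@dataclass
class Annotation:
    pedestrian_id: int
    fitzpatrick_type: int  # 1-6 on the Fitzpatrick scale
    detected: bool         # did the detector find this pedestrian?


def group_of(fitzpatrick_type: int) -> str:
    # Assumed grouping: types 1-3 -> "lighter", types 4-6 -> "darker".
    return "lighter" if fitzpatrick_type <= 3 else "darker"


def detection_rates(annotations: list[Annotation]) -> dict[str, float]:
    # Fraction of annotated pedestrians the detector actually found, per group.
    totals = {"lighter": 0, "darker": 0}
    hits = {"lighter": 0, "darker": 0}
    for ann in annotations:
        g = group_of(ann.fitzpatrick_type)
        totals[g] += 1
        hits[g] += int(ann.detected)
    return {g: hits[g] / totals[g] for g in totals if totals[g] > 0}


# Made-up example data: a gap like the roughly 5% reported in the study would
# show up here as, say, {"lighter": 0.90, "darker": 0.85}.
sample = [
    Annotation(1, 2, True), Annotation(2, 5, False),
    Annotation(3, 1, True), Annotation(4, 6, True),
]
print(detection_rates(sample))
```

A comparison along these lines, run across many detectors and images, is the kind of measurement behind the reported gap; the actual study evaluated standard object detection models on annotated road-scene datasets rather than a toy list like this.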
This is not the first time that image recognition systems have demonstrated higher accuracy for whites, note Michael Greco and Susan Schaecher, attorneys with the law firm of Fisher Phillips.
For example, last year the American Civil Liberties Union (ACLU) tested Amazon’s facial recognition system and found that it incorrectly matched 28 of the 535 members of Congress to mugshots of arrestees.
The ACLU reported that “Nearly 40% of the false matches were people of color, even though they make up only 20% of Congress.”
Other technologies have proved problematic as well, ranging from facial recognition systems that don’t recognize non-white skin to cameras that tell Asian people to stop blinking. In one case, automatic soap dispensers in restrooms wouldn’t dispense soap to black people.
The recognition problems are not just confined to visual systems, either. Some voice recognition systems have had more trouble recognizing women’s voices than those of men.
It was these cases that prompted the researchers to conduct their study; they also cite the “many recent examples of machine learning and vision systems displaying higher error rates for certain demographic groups than others.”
They note that “a few autonomous vehicle systems already on the road have shown an inability to entirely mitigate risks of pedestrian fatalities,” and that recognizing pedestrians is key to avoiding such deaths.
“We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models,” they concluded.
The Fisher Phillips lawyers explain, “Some have suggested that if the engineers creating the datasets are not diverse, they tend to select images most like themselves and fail to recognize disparate representation in datasets when it occurs.”
There has been a swelling wave of claims alleging unconscious bias, they warn. “Where the workforce is not diverse, there is a risk that decisions may be made on, or influenced by, unconscious bias. Plaintiffs’ lawyers frequently argue that seemingly neutral rules or practices can have a disparate impact on a protected group.”