Why algorithms are not colour blind

When Joy Buolamwini found that a robot recognised her face better when she wore a white mask, she knew a problem needed fixing.
How does this problem come about?
Within the facial-recognition community there are benchmark data sets that are meant to show the performance of various algorithms so you can compare them. There is an assumption that if you do well on the benchmarks then you're doing well overall. But we haven't questioned the representativeness of those benchmarks, so doing well on one gives us a false notion of progress.
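The benchmark problem described above can be made concrete with a small sketch: if a test set over-represents one group, a single aggregate accuracy number can look impressive while hiding poor performance on a minority subgroup. The groups, counts, and accuracy figures below are entirely invented for illustration.

```python
# Hypothetical illustration: a benchmark skewed toward group "A" reports
# high overall accuracy while masking weak performance on group "B".
# All numbers are made up for the sketch.

def accuracy(results):
    """Fraction of correct predictions in a list of (group, correct) pairs."""
    return sum(correct for _, correct in results) / len(results)

# 90 test faces from group A (86 recognised) and only 10 from group B
# (6 recognised) -- the minority group barely moves the aggregate score.
benchmark = [("A", i < 86) for i in range(90)] + [("B", i < 6) for i in range(10)]

overall = accuracy(benchmark)
per_group = {
    g: accuracy([r for r in benchmark if r[0] == g]) for g in ("A", "B")
}

print(f"overall accuracy: {overall:.0%}")   # the headline number
print(f"per-group accuracy: {per_group}")   # the disaggregated picture
```

Here the aggregate score is 92%, yet group B is recognised only 60% of the time: exactly the "false notion of progress" the quote warns about, and the reason audits such as Buolamwini's disaggregate results by demographic group.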

»»» Read more [The Guardian]