
Racism in AI and Tech: Self-driving cars are blind to dark-skinned people, may cause accidents

Driverless cars have been making quite a splash in the news for all sorts of bizarre reasons. However, a recent study suggests that these tech hiccups are just the tip of the iceberg: far more troubling issues may lurk within the technology that powers these autonomous vehicles.

The study was carried out by researchers at King’s College London. They took a closer look at eight different AI-powered systems meant to spot pedestrians for driverless cars. These systems were trained on real-world data.

AI is blind to dark-skinned people
Shockingly, the study discovered that these AI programs had a harder time identifying pedestrians with darker skin than those with lighter skin. The systems failed to recognize individuals with darker skin tones almost eight percent more often than their lighter-skinned counterparts.

It’s truly a shocking statistic, highlighting the real and potentially life-threatening risks associated with biased AI systems.

How the study was carried out
The researchers started by meticulously annotating a total of 8,111 images with labels indicating things like gender, age, and skin tone. They marked 16,070 gender labels, 20,115 age labels, and 3,513 skin tone labels to create a comprehensive dataset.

From there, it was all about crunching the numbers. The researchers ran the statistical analysis and ultimately found a striking 7.52% gap in detection accuracy between pedestrians with lighter and darker skin tones.

The research pointed out that the risk for people with darker skin tones increased notably in “low-contrast” or “low-brightness” conditions, like nighttime.
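To make the arithmetic concrete, here is a minimal sketch of how such a detection-accuracy gap can be computed per group and then conditioned on lighting. The column names and toy data below are illustrative assumptions, not the study’s actual schema or results:

```python
import pandas as pd

# Hypothetical annotation table: one row per labeled pedestrian instance.
# "skin_tone", "brightness", and "detected" are made-up column names for
# illustration; the real dataset and detector outputs differ.
df = pd.DataFrame({
    "skin_tone":  ["light", "light", "dark", "dark", "dark", "light"],
    "brightness": ["day", "night", "day", "night", "night", "day"],
    "detected":   [True, True, True, False, False, True],
})

# Detection rate per skin-tone group: the share of labeled pedestrians
# that the detector actually found.
rates = df.groupby("skin_tone")["detected"].mean()
gap = rates["light"] - rates["dark"]
print(f"Overall detection gap: {gap:.2%}")  # the study reports 7.52%

# Splitting by lighting condition shows whether the gap widens in
# low-brightness scenes, as the study found for nighttime images.
by_light = df.groupby(["brightness", "skin_tone"])["detected"].mean().unstack()
print(by_light["light"] - by_light["dark"])
```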

AI can’t see children as well
But the surprises didn’t stop there. In addition to the racial bias issue, the detectors had yet another concerning blind spot: children. Astonishingly, the results showed that these detectors were twenty percent less likely to recognize children than adults.

It’s worth highlighting that the systems analyzed in the study weren’t taken directly from driverless car companies, since those details are typically proprietary information. However, according to Jie Zhang, a co-author of the study and a computer science lecturer at King’s College London, those companies’ models probably aren’t far off from what was studied.

This is particularly concerning, considering driverless vehicles are achieving significant regulatory milestones.

Zhang explained to New Scientist, “They won’t share their confidential information, so we don’t have insights into their specific models. However, these models are usually based on existing open-source models. Certainly, similar issues must also be present in their models.”

The issue of machine bias is no secret, and as advanced AI technology becomes more deeply woven into our daily lives, the consequences of these biases are becoming more apparent. With actual lives at stake, waiting for regulations to catch up after preventable tragedies is not a path we should be comfortable with.
