When your company lacks diversity

Racial bias in AI is fairly well documented. Early models for classifying people were largely trained on some old data set of photos in which something like 90% of the photos were of white people. Google Photos famously categorized Black people as gorillas (article below). Similar problems exist with sexism: since the models are trained on historical data, many conclude that ‘doctor’ correlates with ‘male’ and ‘nurse’ with ‘female.’ And so forth. It’s one thing to build a model; it’s another thing to make sure the data you’re using isn’t biased.
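To make the ‘doctor’/‘nurse’ point concrete, here’s a minimal sketch (the tiny corpus is completely made up for illustration, not any real training set) of how a model that does nothing but count co-occurrences inherits whatever skew its training text contains:

```python
from collections import Counter

# Invented toy "historical" corpus; the bias lives in the data, not the code.
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
]

# Count how often each profession co-occurs with a gendered pronoun.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    counts[(words[-1], words[0])] += 1  # key is (profession, pronoun)

# The "model" is nothing more than P(pronoun | profession) from the counts.
for profession in ("doctor", "nurse"):
    total = counts[(profession, "he")] + counts[(profession, "she")]
    print(f"P(he | {profession}) = {counts[(profession, 'he')] / total:.2f}")

# Prints P(he | doctor) = 0.75 and P(he | nurse) = 0.25, i.e. the model
# faithfully reproduces whatever skew was in its training text.
```

Nothing in the code is ‘sexist’; the skew comes entirely from the counts in the corpus.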

Not even all the stock photos of Simu Liu could sway the algos.

Sigh, you’re likely correct.


That is not that far from reality for real people. When nearly all the data you have points in one direction, it is very easy to develop biases that come purely from experience rather than from intended inference.

I grew up where 90+% of the people around me were LDS, whether they were active in the religion or not, so when meeting new people in that environment, pretty much regardless of other characteristics, the initial assumption was that they were LDS. It was offensive to some people, but the offense was not intended; it was just a bias based on experience.

A perhaps more interesting question is why we thought in the first place that using more complicated models and more data would remove bias.

Heck, it doesn’t even matter what the group in the surrounding area is like. I grew up in an area where almost nobody was Catholic… but everybody at my church was Catholic. As a little kid, I had no clue.

By the time I hit high school, I had a better idea: when we started discussing world history, the kids thought that we still sold indulgences. (Man, maybe I should have tried a side business…)

But getting back to the OP – one wonders about the model testing folks. Sure, your training data set sucked, but nobody thought to test anything outside the training set before releasing the tool into the wild?

If you don’t know that you have to test your wide release on something different from your training set… well. It’s like the ChatGPT folks learning how real humans behave when they want to mess with a system.
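For what it’s worth, the ‘test on something outside your training set’ step isn’t exotic. Here’s a minimal sketch (synthetic data, made-up group labels, scikit-learn assumed) of a held-out evaluation that also breaks accuracy out per subgroup, which is where a skewed training set tends to show up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 1000 samples, 5 features, plus a group label (0 or 1).
# Group 1 is deliberately rare (about 10%), mimicking a skewed training set.
X = rng.normal(size=(1000, 5))
group = (rng.random(1000) < 0.1).astype(int)

# The true relationship differs slightly by group, so a model fit mostly on
# group 0 tends to generalize worse to group 1.
signal = X[:, 0] + np.where(group == 1, X[:, 1] * 2, 0.0)
y = (signal + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Overall accuracy can look fine while the rare group does noticeably worse.
print("overall accuracy:", (pred == y_test).mean().round(3))
for g in (0, 1):
    mask = g_test == g
    print(f"group {g} accuracy:", (pred[mask] == y_test[mask]).mean().round(3))
```

The overall number can look perfectly respectable while the underrepresented group lags behind, which is exactly the kind of gap that never shows up if you only ever score the model on data that looks like its training set.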

I think it’s more fundamental than that.

The Enlightenment’s influence still looms large.

Universal reason is thought to be salvation. Marxists and American progressives alike in the early 20th century believed that if only they could educate children correctly, society’s problems would disappear (though in different ways). Today, liberals and conservatives both still believe the right education is key to their children’s moral future.

Many people believe enough data and processing power will bring the singularity and make us all gods. Or perhaps create a less benevolent AI that squashes us all like bugs.

I wasn’t really aware of this show. Wikipedia says it got excellent reviews but just never found its audience in time. I might have to check it out.

AI can’t think; it can just use probability to regurgitate our old thoughts. Guess what? Our old thoughts are biased. Shocking, I know.


Let me tell you how tech companies work.
