AI prejudice and me (you, anyone you know)

[Image: news article about Amazon's AI recruiting tool]

Here’s the thing about prejudices: we all need them.

I’m guessing you didn’t expect that. Probably because you associate “prejudice” with a bad thing (which it certainly can be).

Let’s be clear: a prejudice is a pre-judgement – a conclusion/thought/assumption relied upon to make conscious judgements. We humans use them all the time; in fact, we wouldn’t have survived, and couldn’t cognitively function, without them. There is simply too much information to process consciously in real time.

So, we have stuff pre-loaded to help us cope. The problem comes with the content of our prejudices: not all of them are correct, and some of them are ethically bad. We then do bad things simply because we had some bad inputs we haven’t consciously addressed.

Good Prejudice

For example, the pre-judgement that the colour red means danger really helped us identify what not to eat as humanity developed, and it continues to help us process things like stop signs on roads. It may not be correct in every case (not all football teams wearing red shirts are dangerous), but all the errors with this prejudice – at least, all the ones I can think of – are ethically benign. (That doesn’t mean we won’t find one! Then we’d have to consciously address it.)

Bad Prejudice

Amazon’s prejudice against women is not the same. It turned out their recruiting AI was systematically excluding applications from women (see the story above). This is both incorrect and ethically bad.

Fortunately, Amazon realised this. Unfortunately, they had automated the prejudice through AI, because the input (the content of the pre-judgements on which CV-judging was based) was bad – their ten-year body of hiring data was skewed against women.

They decided to switch off the machine, apparently because the problem was unfixable. Once the machine had learned to be badly prejudiced, it couldn’t learn not to be – or at least Amazon couldn’t work out how (or, at least, not at a cost palatable to them).
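To make this concrete, here is a minimal, hypothetical sketch in Python/scikit-learn – toy CVs and made-up labels, nothing like Amazon’s actual system – of how a classifier trained on skewed historical decisions absorbs the skew:

# A toy illustration (not Amazon's system): a classifier trained on
# skewed historical hiring decisions bakes the skew into its weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical CVs; the historical labels are biased against "women's".
cvs = [
    "captain women's chess club, python developer",  # rejected (skewed label)
    "women's coding society lead, java engineer",    # rejected (skewed label)
    "chess club captain, python developer",          # hired
    "coding society lead, java engineer",            # hired
]
hired = [0, 0, 1, 1]  # the skewed outcomes the model learns from

vec = CountVectorizer()  # the default tokenizer reduces "women's" to "women"
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight on "women" comes out negative: the prejudice in
# the data is now one of the model's pre-judgements.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"])

No one wrote a rule saying “penalise women”; the model simply found the pattern that was already sitting in the data.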

Rational, Ethical Beings

This is not a new problem – appalling machine-learned prejudices have reared their ugly heads everywhere.

Can it be otherwise? Can the machines unlearn these problems? The data we put in (the content of the pre-judging) can be improved – for example, by getting more diverse groups of people involved in designing the software and choosing the inputs. It will never be perfect – there will always be flawed, ethically bad prejudices lurking that AI risks reinforcing.
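Improving the inputs can be made concrete, at least in a toy world. Continuing the hypothetical sketch above, here is one crude repair of the data itself: fix the labels the skewed history got wrong, keep only rejections made for legitimate reasons, and retrain:

# Repairing the data plank (still a toy, with invented CVs and labels).
cvs_fixed = [
    "captain women's chess club, python developer",  # relabelled fairly: hired
    "women's coding society lead, java engineer",    # relabelled fairly: hired
    "chess club captain, python developer",          # hired
    "coding society lead, java engineer",            # hired
    "women's knitting circle newsletter editor",     # rejected: no coding skills
    "knitting circle newsletter editor",             # rejected: no coding skills
]
hired_fixed = [1, 1, 1, 1, 0, 0]

vec_fixed = CountVectorizer()
X_fixed = vec_fixed.fit_transform(cvs_fixed)
model_fixed = LogisticRegression().fit(X_fixed, hired_fixed)

w = dict(zip(vec_fixed.get_feature_names_out(), model_fixed.coef_[0]))
print(w["women"])  # no longer negative: the learned penalty is gone

Of course, in the real world nobody hands you the mislabelled rows with comments attached; finding them is exactly where those diverse groups of humans come in.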

Here’s where humans are (presently, at least) superior. We are built to correct this type of error. Some people choose not to correct it, which is why they are ethically bad people.


Our constant duty, as humans – rational, ethical beings – is to identify those of our prejudices that are incorrect and ethically bad, then correct them. Social activity with diverse groups of other humans provides the opportunity to see incorrect/bad prejudices.

But, you say, we need prejudices. Correct. But that doesn’t mean that any one of us needs the particular set of prejudices we currently hold. We just need some set.

An Austrian philosopher guy named Otto Neurath had a great analogy I’m going to steal – called “Neurath’s Boat.”

Think of your prejudices (or mine, if that makes you feel more comfortable) as an old-fashioned sailing ship, made out of lots of planks of wood. You are at sea, and some of the planks are bad and need changing or repairing. But because you’re at sea (in the middle of your life), you can’t stand outside the ship (outside your life), rip out all the planks (all your prejudices) and start over. You need a lot of those planks not to sink!

What do you do? You pick a plank or two at a time, stand on the other ones, pull the selected planks out and fix/replace them.

LEARN!

If you find addressing your prejudices difficult, take heart: it can be learned. Reluctantly Brave works with organisations, teams and people on exactly that.

What Amazon’s giving up on the machine made me wonder is this: can we ever design machine learning that works like Neurath’s Boat? Or is that too complex for something man-made?

Many AI experts think AI can’t replace human creativity, empathy or ethical capacity. That sounds like bad news for machines Neurath’s-Boating themselves… but can we get machine learning far enough that it doesn’t (rapidly! unnoticeably!) perpetuate the worst errors? I’d love to work on a project to find out.
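Purely as a thought experiment – none of this is an established technique – here is what “Neurath’s Boat-ing” the toy model above might look like: audit the learned weights (the planks) for suspiciously strong ones, then repair one plank at a time while the rest keep the ship afloat. The audit threshold and the crude zeroing “repair” are my own illustrative inventions:

# A speculative sketch, reusing the toy model and vectorizer from above.
def audit_planks(model, vec, threshold=0.5):
    # Flag features whose learned weights look suspiciously strong:
    # candidate bad planks. The threshold is arbitrary, and a real
    # audit would need humans in the loop to judge what "bad" means.
    names = vec.get_feature_names_out()
    return [n for n, w in zip(names, model.coef_[0]) if abs(w) > threshold]

def repair_plank(model, vec, feature):
    # Neutralise one flagged weight while the rest of the ship sails on.
    # (Crude on purpose: a real repair would retrain on corrected data.)
    idx = list(vec.get_feature_names_out()).index(feature)
    model.coef_[0, idx] = 0.0

for plank in audit_planks(model, vec):
    print("suspect plank:", plank)   # "women", in the toy example
    repair_plank(model, vec, plank)  # swap the plank, keep sailing

The hard part is the audit: a machine can flag what is statistically loud, but deciding which planks are ethically rotten still takes rational, ethical beings.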

Adam · Inclusion, Data, Culture