Are we biased about AI bias?
Many companies pursuing digital transformation are looking keenly at AI. But some get nervous when they hear about what seems to be a problem with AI: bias. First, let's define our terms, using the helpful definition Cassie Kozyrkov has provided for bias in AI:
Algorithmic bias occurs when a computer system reflects the implicit values of the humans who created it.
As civil unrest hits cities in the USA, Canada, and Europe, the other, uglier meaning of bias is getting a lot of attention. In recent years, it seems even AI is biased too, either copying certain biases we hold or blowing them out of proportion.
AI crunches data in volumes far beyond human capacity, which leaves us unable to answer a basic question: why did the AI choose A over B?
Apple co-founder Steve Wozniak was given a far higher credit limit than his wife despite the two holding the same assets and bank accounts, suggesting the algorithm treated her unfairly. Did it? Or did she have worse prior credit? Predictive policing company PredPol was found to steer police toward neighbourhoods of racial minorities, targeting them unfairly. But PredPol uses only crimes reported by victims, so even if cops were biased, that wouldn't affect the math the AI uses...
The problem? Even the makers of these algorithms can’t explain what’s going on inside the ‘box’.
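To make the mechanism concrete, here is a minimal, hypothetical sketch with synthetic data and invented approval thresholds (it has nothing to do with PredPol's or any bank's actual models). It shows one way a model can inherit bias from historical decisions even when the sensitive attribute is never a model input: a correlated proxy feature carries the signal instead.

```python
import random

random.seed(0)

# Synthetic "historical" loan decisions. In this invented past, group B
# needed a higher income than group A to get approved. The model below
# never sees `group` as a feature, but `neighborhood` is a perfect proxy.
def make_history(n=10000):
    rows = []
    for _ in range(n):
        group = random.choice("AB")
        neighborhood = 0 if group == "A" else 1      # proxy for group
        income = random.gauss(50, 10)                # thousands per year
        threshold = 45 if group == "A" else 55       # the historical bias
        rows.append((neighborhood, income, income > threshold))
    return rows

history = make_history()

# "Model": per neighborhood, learn the income cutoff that best reproduces
# past approvals (a one-feature decision stump fit to historical labels).
def learn_cutoff(rows, hood):
    incomes = sorted(r[1] for r in rows if r[0] == hood)
    best, best_err = None, float("inf")
    for cut in incomes[:: max(1, len(incomes) // 200)]:
        # count disagreements between this cutoff and the historical labels
        err = sum((r[1] > cut) != r[2] for r in rows if r[0] == hood)
        if err < best_err:
            best, best_err = cut, err
    return best

cut_a = learn_cutoff(history, 0)
cut_b = learn_cutoff(history, 1)
print(f"learned income cutoff, neighborhood A: {cut_a:.1f}")
print(f"learned income cutoff, neighborhood B: {cut_b:.1f}")
# The fitted model demands a noticeably higher income in neighborhood B,
# reproducing the historical bias without ever using `group` directly.
```

The point of the sketch: "we never fed the sensitive attribute into the model" does not by itself settle the fairness question, because a proxy can smuggle it back in. Whether that is what happened in any real case (Apple Card, PredPol) is exactly the kind of question that is hard to answer when no one can see inside the box.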
I have a suggestion. For the moment, forget about AI bias. In fact, forget about AI. Ask yourself: how do you provide value for your customers? As you learn, you'll start asking more meaningful questions about your business. That will help you understand which data you need to capture to achieve the outcome you want.
Otherwise, AI won’t help you solve your problem. The answer will be 42.