Artificial Bias

Those of you who read Winding Down regularly will know that I am more than a little cynical about AI and its researchers. That’s not so much because I think that AI researchers are charlatans or con merchants. It’s because it’s clear that a lot of them are so wrapped up in designing algorithms and new techniques that they fail to understand, or in many cases even consider, the social and political implications of their work and of the data sets they use.

Of course, they are not the first to fail to understand the implications of their work. Neither, for instance, did the people who built the Internet, or those who developed radio and television.

The key socio-political problem is a failure to recognise that whether AI works or not depends as much on the data set used to train it as on the algorithm itself, if not more. Use a racist data set and you will get racist predictions, for instance. Even worse, such a system can easily become self-validating.

You can start with a crime prediction system that provides perfectly reasonable predictions about where to concentrate extra policing. That extra policing then produces a higher recorded crime rate, if only because previously unreported crimes are now dealt with, and/or because saturation policing creates resentment and abuse of the police. The loop becomes self-reinforcing: the new crime figures fed back into the data set show that the area needs more police, because it has a higher crime rate, and that it is a good place to put extra police, because doing so improves the ‘solving’ and prosecution rate, which in turn appears to justify the skewed system.
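To make the shape of that loop concrete, here is a deliberately crude sketch in Python. The numbers and the allocation rule are invented purely for illustration, not taken from any real deployment; the point is the feedback, not the realism of the model. Two districts have identical real crime rates, one happens to start with slightly more recorded crime, patrols are concentrated where the recorded figures look worst, and the recorded figures then follow the patrols:

    # Toy illustration of the feedback loop described above.
    # All numbers are invented; this is not a model of any real policing system.

    true_rate = [100.0, 100.0]   # actual crimes per period in districts A and B (identical)
    recorded = [55.0, 45.0]      # initial recorded crime: A happens to start slightly higher
    patrols = 10.0               # total patrol units available each period
    detection_per_patrol = 0.08  # fraction of actual crime that gets recorded per patrol unit

    for period in range(8):
        # Concentrate patrols where the data says crime is worst (weighting districts
        # by the square of their recorded crime is just one way to model "concentration").
        weights = [r * r for r in recorded]
        allocation = [patrols * w / sum(weights) for w in weights]

        # Recorded crime now reflects where the patrols were sent,
        # not the (identical) underlying crime rates.
        recorded = [min(rate, rate * detection_per_patrol * a)
                    for rate, a in zip(true_rate, allocation)]

        print(f"period {period}: patrols {allocation[0]:.1f} vs {allocation[1]:.1f}, "
              f"recorded crime {recorded[0]:.1f} vs {recorded[1]:.1f}")

Within a few rounds the figures ‘show’ that the first district is the problem area, and the patrols stay there, even though the underlying crime rates never differed. The data set ends up validating the policy that produced it.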

Underlying all this is not only a social and political failure on the part of AI researchers, but also a more fundamental failure to realise that while the algorithms they use may be, in some sense, neutral, the training data sets aren’t – they are strictly limited.

Why is there such a blind spot about the limitations of AI? It’s probably because the only genuinely general purpose system most people are aware of is the human being. That means that many people, researchers included, tend to ‘project’ human generalist characteristics onto AI systems, especially ones which seem to work. Take such systems out of their training milieu and you get unpredictable, and often incorrect, results.

Of course, part of the problem lies not just with the people who design AI systems, but with the people who use them. With something as potentially powerful as AI, it’s necessary to teach people to use these systems properly, and to ensure that the penalties for wilful misuse are commensurate with the damage they can do...

Alan Lenton
20 October 2019

