Systemic bias is not just a problem for future AI

Artificial intelligence, like any futuristic technology passing from science fiction into the real world, is viewed with a mix of awe and hostility. There are those who see AI in near-messianic terms, believing it will be used to tackle problems in medicine, agriculture and education and to create a fairer world. But there are also those who point to its use to create misleading “deepfakes”, perform aggressively personalised marketing, or unfairly profile potential criminals.

By Colin Knox, Director of Product Strategy, SolarWinds Passportal.


AI is a tool, and like any tool it can be used or misused. But this doesn’t mean that AI is in itself neutral or without bias. In fact, many AI systems have already been found to carry obvious biases and to be just as prejudiced as, if not more prejudiced than, the people and systems they are designed to replace or support.


Bias in artificial intelligence

People are flawed, and these flaws find their way into what they create. Nobody sets out to design a biased system, but the assumptions designers make and the flaws in the data they use mean it’s incredibly difficult not to end up with those flaws encoded into the system, especially if it’s a flexible, learning system like artificial intelligence.

It’s common for HR departments to remove the names from job applications before they sift through them to create a shortlist of candidates to invite for interview. Without a name, there’s less chance of making assumptions about a candidate based on their gender. But what if a neutral system could do some of that initial sifting instead? Simple code that recognises keywords would be of some use, if limited, but AI could potentially assess resumes with greater rigour.
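
To give a concrete sense of what such “simple code” looks like, here is a minimal sketch of keyword-based shortlisting in Python. The keywords, weights, and threshold are assumptions made purely for illustration, not drawn from any real screening product.

```python
# Minimal sketch of naive keyword-based resume screening.
# Keywords, weights, and the shortlist threshold are illustrative assumptions.
REQUIRED_KEYWORDS = {"python": 2, "sql": 1, "networking": 1, "automation": 2}
SHORTLIST_THRESHOLD = 3

def score_resume(resume_text: str) -> int:
    """Score a resume by counting weighted keyword hits."""
    text = resume_text.lower()
    return sum(weight for keyword, weight in REQUIRED_KEYWORDS.items() if keyword in text)

def shortlist(resumes: dict) -> list:
    """Return the applicant IDs whose resumes meet the threshold."""
    return [applicant_id for applicant_id, text in resumes.items()
            if score_resume(text) >= SHORTLIST_THRESHOLD]

if __name__ == "__main__":
    applicants = {
        "A-001": "Python automation scripts and SQL reporting",
        "A-002": "Front-of-house reception and scheduling",
    }
    print(shortlist(applicants))  # ['A-001']
```

Even this trivial sketch hints at the limitation: it rewards whoever happens to use the expected vocabulary, which is itself a kind of bias.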

Amazon trialled such a system, partly to try and address the gender divide that exists in technical roles. What it found, as reported by Reuters, was that the system just didn’t like women. The problem was that the existing data (records of past successful hires) was overwhelmingly male, so the system learned that men were more likely to be successful hires. It gave lower scores to resumes that used more feminine language or referred to activities such as the “women’s chess club”. The very problem Amazon wanted to address was now embedded in the system designed to help.
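
A stripped-down sketch can show the mechanism at work. The “historical” resumes below are invented for illustration; they simply stand in for a corpus in which past successful hires were overwhelmingly male, as in the Reuters report, and the scoring method is a deliberately crude simplification rather than anything Amazon actually used.

```python
# Sketch of how skewed historical data produces skewed scores.
# The corpus and scoring method are invented for illustration only.
from collections import Counter

historical_hires = [
    "captain of the men's rugby team, python developer",
    "men's chess club treasurer, network engineer",
    "python developer, server administration",
]

# Learn term frequencies from the historical corpus of successful hires.
term_counts = Counter(word for resume in historical_hires for word in resume.split())

def learned_score(resume: str) -> int:
    """Score a new resume by how familiar its terms are from past hires."""
    return sum(term_counts[word] for word in resume.split())

# An otherwise identical resume scores lower simply for saying "women's"
# instead of "men's", because "women's" never appears in the training data.
print(learned_score("women's chess club treasurer, python developer"))  # lower
print(learned_score("men's chess club treasurer, python developer"))    # higher
```

The scoring function never looks at gender; the prejudice comes entirely from the data it was given, which is, in simplified form, the same failure mode the Amazon system exhibited.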

The problem is not restricted to gender. Facial recognition systems have been found to recognise some skin tones better than others, simply because they weren’t tested with a sufficiently wide range of people. Again, this is accidental rather than malicious, but the result is the same: whether through sloppy programming or the assumptions of the creators, the intent to remove human biases has only served to reinforce them.

But this is not a new problem nor one unique to artificial intelligence.

Bias in everyday systems

One of the most famous systemic biases was deliberate: back in 1982, American Airlines manipulated where flights appeared in its booking system, which travel agents used to book flights across all carriers, to give its own flights an advantage. But even software that is not deliberately biased can still be a “black box” that obfuscates what is happening inside.

All organisations should think about the data they are collecting. Could it lead to bias, are the options correct, and is the data even necessary? For example, it’s common to collect title and gender information, but the options traditionally offered no longer cover everyone. Extra options might help, but it’s worth asking why this information is being collected about customers or employees in the first place. Is it helping, or is it just likely to contribute to some form of bias if it’s ever used?
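
One way to make that question explicit is in the record definition itself: sensitive fields are optional, defaulted to nothing, or left out entirely unless there is a documented reason to hold them. The field names and options below are illustrative assumptions, not a recommended schema.

```python
# Illustrative sketch: collect only what is needed, and keep sensitive
# attributes optional so nothing downstream silently depends on them.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Gender(Enum):
    WOMAN = "woman"
    MAN = "man"
    NON_BINARY = "non-binary"
    SELF_DESCRIBED = "self-described"
    PREFER_NOT_TO_SAY = "prefer not to say"

@dataclass
class CustomerRecord:
    full_name: str
    email: str
    # Optional and defaulted: the system must work without it.
    gender: Optional[Gender] = None
    # Free-text title instead of a fixed Mr/Mrs/Ms list that excludes people.
    title: Optional[str] = None

record = CustomerRecord(full_name="Ana de Sá", email="ana@example.com")
print(record)
```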

The way data is captured can also leave people out. Some systems won’t accept names that contain diacritical marks, or that are much shorter or longer than usual. This can mean, for example, that longer Portuguese names or short Chinese names are rejected by an attempt to limit the data to what is perceived as “sensible”.
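
A simple comparison shows how this happens in practice. The length limits and character rules below are assumptions for the sake of the example, not taken from any particular product.

```python
# How "sensible" validation rules exclude real names.
# The 3-30 character window and ASCII-only rule are assumed for the example.
import re
import unicodedata

def restrictive_validator(name: str) -> bool:
    """Rejects diacritics and enforces an arbitrary 3-30 character window."""
    return bool(re.fullmatch(r"[A-Za-z ]{3,30}", name))

def permissive_validator(name: str) -> bool:
    """Accepts any non-empty name of letters, accents, spaces, hyphens, apostrophes."""
    if not name.strip():
        return False
    return all(
        unicodedata.category(ch)[0] in {"L", "M"} or ch in " -'"
        for ch in name
    )

for name in ["José da Silva Santos Albuquerque", "Ng", "Zoë O'Neill"]:
    print(name, restrictive_validator(name), permissive_validator(name))
# The restrictive validator rejects all three perfectly ordinary names;
# the permissive one accepts them all.
```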

Overcoming the bias of the future

MSPs should think about the systems they use to interact with their clients. When a ticket is raised, does the name on the ticket affect how it will be received? Inherent bias means that technical issues and supporting information may be treated more seriously if the ticket is raised by a man rather than a woman. And while it would be easy to assume such behaviour comes only from the most boorish male staff members, it’s actually just as likely to be the bias of a woman. This is, after all, unconscious bias, and even the most enlightened among us are likely to hold some sort of bias, even if we are totally unaware of it.

There are also the tools that MSPs provide to businesses, which may come with bias built in. Voice is likely to have more of a place in the enterprise in the coming years, and it’s no coincidence that Siri, Alexa, and Google Assistant all default to female voices. In tests, people tend to respond to them better than to male voices. But this reinforces the stereotype of the female assistant. Even if people prefer it, is it a good idea to pander to these prejudices? Gender-neutral voices under development may help solve this issue.

Artificial intelligence will likely influence every sector, and it will probably have a bigger effect on IT support than on many others. MSPs need to make sure that their own systems and practices are as free of bias as possible before the AI revolution hits, to ensure that those biases don’t become irreparably embedded.
