
Can We Keep Our Biases from Creeping into AI?

People build AI. AI learns from people. People are horrifically flawed. Will AI inadvertently inherit those flaws in the long term? There are those actively trying to prevent such a scenario. In an article for Harvard Business Review, Kriti Sharma discusses two avenues for keeping bias out of AI.

AI Problems

First, Sharma discusses how AI inevitably takes on the unconscious biases of the people who build it. Most of the people building AI today are male and come from similar cultural backgrounds. According to Sharma, this has resulted in “feminine” passive helpers like Apple’s Siri and “masculine” problem-solving machines like Watson. But a more general implication is that AI development is largely reserved for people with doctorates right now; AI skills and learning must spread more widely so that more people can get involved in building these systems. Sharma goes so far as to say that AI teams should incorporate “writers, linguists, sociologists, and passionate people from nontraditional professions.” So basically, her solution to creator bias in AI is to invite a much broader range of creators.

The other issue Sharma discusses is bias that goes undetected in the technology because of inadequate testing:

… there is a lack of testing AI products throughout their development cycle to detect potential harms they may do to humans socially, ethically, or emotionally once they hit the market. One way to remedy this is by adding bias testing to a new product’s development cycle. Adding such a test in the R&D phase would help companies remove harmful biases from algorithms that run their AI applications and datasets that they pull from to interact with people.

Bias testing would also help account for the blind spots created by the current lack of diversity among the people building AI. It could surface issues that might not be immediately obvious to the engineer and, most importantly, to the end user, such as how an automated assistant should respond when harassed. Some degree of testing will need to continue after a product is released, since AI algorithms can evolve as they encounter new data.
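
To make the idea of a development-cycle bias test concrete, here is a minimal sketch in Python. It applies a disparate-impact check to a batch of model predictions: each group’s positive-outcome rate is divided by the most-favored group’s rate, and any ratio below the commonly used “four-fifths” threshold of 0.8 gets flagged. The function, data, and threshold are illustrative assumptions of mine, not anything prescribed in Sharma’s article.

    # Minimal sketch of a pre-release bias test (illustrative, not from the article).
    # It computes the disparate-impact ratio for each group: the group's
    # positive-outcome rate divided by the most-favored group's rate.
    # A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").

    from collections import defaultdict

    def disparate_impact(predictions, groups):
        """predictions: 0/1 model outputs; groups: aligned group labels."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        rates = {g: positives[g] / totals[g] for g in totals}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Hypothetical predictions from a model under test, with group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    for group, ratio in disparate_impact(preds, groups).items():
        if ratio < 0.8:
            print(f"group {group!r} fails the four-fifths rule: ratio {ratio:.2f}")

In a real R&D pipeline, a check like this would run against held-out evaluation data before release, and it could be re-run on a schedule afterward, since, as noted above, models can drift as they encounter new data.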

For further thoughts, you can view the original article here: https://hbr.org/2018/02/can-we-keep-our-biases-from-creeping-into-ai
