I just read a well-written and interesting NY Times commentary entitled Artificial Intelligence’s White Guy Problem by Kate Crawford. Crawford is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and artificial intelligence.
The “I” in Artificial Intelligence (AI) relies on inputs from the human beings who create it and teach it. Crawford argues that sexism, racism, and other forms of discrimination are being built into the machine-learning algorithms that underlie many “intelligent” systems — systems that shape how we are categorized and advertised to.
As designers, we all put a bit of ourselves into our designs, whether they are analog, power, or software-related like AI — even if we do so subconsciously. But with software, and especially with the training process for AI, the data being fed into the system can be prejudiced, even if not intentionally so.
In many machine-learning systems, an AI learns much like a baby learns: by observing and imitating a chosen type of system behavior. If human-modulated behavior is part of that system, bias can creep in, depending on the people being observed and the prejudices they bring to the way they perform tasks. In machine-vision systems, neural-network algorithms learn by seeing a multitude of images; the humans who select those images can introduce bias that ultimately prejudices the AI’s decisions.
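The mechanism described above — curators over-selecting one kind of example — can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: a one-dimensional feature, two made-up groups “A” and “B”, and the simplest possible learner (averaging what it was shown). It is not any real vision system, only a toy demonstration of how skewed data selection skews what gets learned.

```python
import random

random.seed(0)

# Hypothetical setup: examples from group A cluster near 0.0,
# examples from group B cluster near 1.0.
def sample(group):
    center = 0.0 if group == "A" else 1.0
    return center + random.gauss(0, 0.1)

# Biased curation: the human selecting training examples picks
# 95 from group A for every 5 from group B.
training = [sample("A") for _ in range(95)] + [sample("B") for _ in range(5)]

# "Learning" here is just averaging the examples the model was shown.
learned_prototype = sum(training) / len(training)

# The learned prototype sits close to group A's center and far
# from group B's, so the under-represented group is served worse.
error_A = abs(learned_prototype - 0.0)
error_B = abs(learned_prototype - 1.0)
print(f"prototype={learned_prototype:.2f}  "
      f"error for A={error_A:.2f}  error for B={error_B:.2f}")
```

The point of the sketch is that the learner did nothing wrong by its own rules — it faithfully summarized its inputs. The bias entered entirely through the human choice of which examples to feed it.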
This basic problem of prejudice is not new. What is new is the advanced technology of AI, which magnifies the problem. Designers and programmers need to constantly refine their algorithms so that the services they perform operate in an unbiased manner. A Google autonomous vehicle that hits a bus needs to have its software algorithms modified — and this will be an iterative process. We have entered a new realm of engineering with AI, and new measures and rules will need to be formed so that these intelligent systems serve everyone fairly.