The Consequences of Racist, Sexist AI
Increasingly, Caliskan says, job recruiters are relying
on machine learning programs to take a first pass at
résumés. And if left unchecked, the programs can
learn and act upon gender stereotypes in their
decision-making.
“Let’s say a man is applying for a nurse position; he
might be found less fit for that position if the machine
is just making its own decisions,” she says. “And this
might be the same for a woman applying for a software developer or programmer position. Almost all of
these programs are not open source, and we’re not
able to see what’s exactly going on. So we have a big
responsibility about trying to uncover if they are
being unfair or biased.”
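One common way such stereotyped associations arise is through word vectors, where words that appear in similar contexts end up numerically close together. The sketch below uses hand-made, three-dimensional toy vectors (purely an illustrative assumption; real systems learn hundreds of dimensions from large text corpora):

```python
import math

# Toy, hand-made word vectors -- an assumption for illustration only.
# Real embeddings are learned from large text corpora.
vectors = {
    "he":         (0.9, 0.1, 0.3),
    "she":        (0.1, 0.9, 0.3),
    "nurse":      (0.2, 0.8, 0.4),
    "programmer": (0.8, 0.2, 0.4),
}

def cosine(a, b):
    """Cosine similarity: higher means the words occur in similar contexts."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def gender_lean(word):
    """Positive -> closer to 'he'; negative -> closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

# In this toy data, "nurse" leans female and "programmer" leans male --
# the kind of association a résumé screener could silently act on.
print(f"nurse lean:      {gender_lean('nurse'):+.3f}")
print(f"programmer lean: {gender_lean('programmer'):+.3f}")
```

A screening program that ranks candidates by closeness to such vectors would reproduce exactly the pattern Caliskan describes, without anyone having told it to.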
Users of these programs also need to think hard about whether the data being combed reflects historical prejudices. Caliskan admits that best practices for combating bias in AI are still being worked out. “It requires a long-term research agenda for computer scientists, ethicists, sociologists, and psychologists,” she says.
But at the very least, the people who use these programs should be aware of these problems, and not
take for granted that a computer can produce a less
biased result than a human.
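One concrete way not to take that for granted is a counterfactual audit: score the same résumé twice, once with gendered pronouns swapped, and flag any change. The scoring function below is a hypothetical stand-in (an assumption for illustration; a real screener would be an opaque third-party model):

```python
import re

# Hypothetical stand-in for a résumé-screening model -- an assumption
# for illustration. The deliberate pronoun penalty mimics learned bias.
def score_resume(text):
    score = 50
    if "engineering" in text:
        score += 20
    if re.search(r"\bshe\b|\bher\b", text):
        score -= 10  # a biased model might (wrongly) penalize this signal
    return score

SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his", "him": "her"}

def gender_swap(text):
    """Swap gendered pronouns so the two inputs differ in nothing else."""
    return re.sub(r"\b(he|she|his|her|him)\b",
                  lambda m: SWAPS[m.group(1)], text)

resume = "she led an engineering team and her work shipped on time"
original = score_resume(resume)
swapped = score_resume(gender_swap(resume))
print(original, swapped)
if original != swapped:
    print("audit flag: score changed when only pronouns changed")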
And overall, it’s important to remember: AI learns
about how the world has been. It picks up on status
quo trends. It doesn’t know how the world ought to
be. That’s up to humans to decide.
And that will be a challenge in the future. Already AI
is making its way into the health care system, helping
doctors find the right course of treatment for their
patients. (There’s early research on whether it can
help predict mental health crises.)
But health data, too, is filled with historical bias. It’s
long been known that women get surgery at lower
rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.)
Might AI then recommend surgery at a lower rate for
women? It’s something to watch out for.
So are these programs useless?
Inevitably, machine learning programs are going to
encounter historical patterns that reflect racial or gender bias. And it can be hard to draw the line between
what is bias and what is just a fact about the world.
Machine learning programs will pick up on the fact
that most nurses throughout history have been
women. They’ll realize most computer programmers
are male. “We’re not suggesting you should remove
this information,” Caliskan says. Stripping it out might actually break the software completely.
Caliskan thinks there need to be more safeguards. Humans using these programs need to constantly ask, “Why am I getting these results?” and check the output of these programs for bias.
SUMMER 2017 | THE DOPPLER | 69