Is your phone racist?

Essential Question: How can computers and other machines become racist, sexist, or otherwise biased?

Standards:

  • PS4C: Information Technologies & Instrumentation

  • PS4B: Electromagnetic Radiation

  • PS4A: Wave Properties

Photo: iStockphoto

“Call Low-ra.”

“Call Low-ra.”

“Call Loh-ra,” you relent. Siri finally dials your sister Laura’s phone number, but only after you Anglicize her name. Over the last few decades, new technologies have emerged that let us communicate with machines and let instruments gather information about us. While we typically view machines as purely objective, we forget that the algorithms powering them are shaped by the choices of their human designers. Like humans, computers are not made racist, yet both may develop implicit biases from the messages they take in.

Artificial intelligence is really just a form of statistical learning in which a computer is fed a “training” data set tagged with known outcomes until it recognizes patterns and averages well enough to classify new information that is not yet tagged. In the case of voice recognition, sound samples of many people saying the name “Laura” might be input until the computer can recognize whether a new sound wave is the name “Laura” or not. The problem is that if most of the samples in the training set are spoken by native English speakers, Siri will not recognize other pronunciations. Hence, in order to use this technology, a Spanish-speaking person has no choice but to assimilate.
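
To make that mechanism concrete, here is a minimal sketch in Python. It assumes a toy setup, not anything resembling Siri’s actual system: each audio clip is reduced to a single invented acoustic feature, and a simple classifier is trained to decide whether a clip is the name “Laura.”

```python
# A toy illustration of statistical learning with a skewed training set --
# not how any real voice assistant works.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def clips(center, n):
    """n fake audio clips, each reduced to one acoustic feature near `center`."""
    return rng.normal(loc=center, scale=0.5, size=(n, 1))

# Training set: 200 Anglicized "Loh-ra" clips but only 10 Spanish "Lau-ra"
# clips are tagged as "Laura" (1); 200 clips of other words are tagged 0.
X_train = np.vstack([clips(2.0, 200), clips(0.0, 10), clips(-2.0, 200)])
y_train = np.array([1] * 210 + [0] * 200)

model = GaussianNB().fit(X_train, y_train)

# Fresh clips from each group of speakers: the under-represented
# pronunciation is recognized far less often.
print("Anglicized 'Laura' recognized:", model.predict(clips(2.0, 1000)).mean())
print("Spanish 'Laura' recognized:   ", model.predict(clips(0.0, 1000)).mean())
```

In a run like this, the Anglicized clips are recognized almost every time while most Spanish-pronounced clips are rejected, even though both were tagged as the same name: the model’s notion of what “Laura” sounds like is dominated by whichever speakers dominate the training data.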

Bias in artificial intelligence has many more pernicious consequences.

A depixelizing tool used to smooth a low-resolution image of Barack Obama makes him appear white.

Recently, Amazon, Microsoft, and IBM suspended their face-recognition technology when news broke that police agencies were using the software to identify protesters following the uproar over the killings of George Floyd, Breonna Taylor, and others. Beyond a potential infringement of First Amendment rights, this technology is markedly less accurate at identifying the faces of racial minorities. As with voice recognition, face-recognition algorithms rely on an input of example images to learn. When few of these images depict faces of people of color, the algorithm will have lower accuracy for BIPOC, leading to higher rates of false incrimination when used by police.

Study showing the accuracy of various companies’ face-recognition tools in predicting a person’s gender. Data and image by Joy Buolamwini of MIT.
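
An audit in the spirit of the one pictured above takes only a few lines of code. The numbers below are invented for illustration, not Buolamwini’s data; the point is that disaggregating accuracy by demographic group exposes gaps that a single overall number hides.

```python
# Hypothetical audit results for a gender classifier (invented numbers).
import pandas as pd

audit = pd.DataFrame({
    "group":     ["lighter-skinned men", "lighter-skinned women",
                  "darker-skinned men",  "darker-skinned women"],
    "n_photos":  [100, 100, 100, 100],
    "n_correct": [99, 93, 88, 66],
})
audit["accuracy"] = audit["n_correct"] / audit["n_photos"]

print(audit[["group", "accuracy"]])
print("Overall accuracy:", audit["n_correct"].sum() / audit["n_photos"].sum())
# The overall figure (0.865 here) looks respectable, yet it hides a large
# gap between the best- and worst-served groups.
```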

Artificial intelligence is also sweeping the world of diagnostic medicine. One promising development is the ability of AI to diagnose lung cancer earlier, on average, than a physician. However, the two predominant data sets of chest X-rays used to train the software are skewed approximately 60% male, 40% female, meaning the software may be more accurate for men than for women. As a result, women may face higher rates of false positives and false negatives when this technology is used.
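
The same disaggregated check works for diagnostic tools. The sketch below uses invented labels and predictions, not output from any real X-ray model, to show how false-negative and false-positive rates can be compared across patient sex.

```python
# Hedged example: compare error rates by patient sex (all data invented).
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false negative rate, false positive rate) for 0/1 arrays."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fnr = np.mean(y_pred[y_true == 1] == 0)   # cancers the model missed
    fpr = np.mean(y_pred[y_true == 0] == 1)   # healthy patients flagged
    return fnr, fpr

# True diagnosis, model prediction, and recorded sex for 12 made-up patients.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0])
sex    = np.array(["M"] * 6 + ["F"] * 6)

for group in ["M", "F"]:
    fnr, fpr = error_rates(y_true[sex == group], y_pred[sex == group])
    print(f"{group}: false negatives {fnr:.2f}, false positives {fpr:.2f}")
```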

Students learning about the electromagnetic spectrum can study the types of information (sound waves, visible light, X-rays) computers take in to understand how the training data may be skewed. They should evaluate whether the positives of these technologies outweigh the negatives and consider what regulations may be necessary. Computer science students can experiment with ways to alter training data sets to yield more equitable and accurate results, as in the sketch below.
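
For example, students could revisit the toy “Laura” sketch above and simply balance the training set, say by collecting (or oversampling to) 200 Spanish-pronounced clips instead of 10, then retrain and re-check recognition rates for each group.

```python
# Retraining the earlier toy model on a balanced training set.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def clips(center, n):
    return rng.normal(loc=center, scale=0.5, size=(n, 1))

# 200 clips of EACH pronunciation are now tagged as "Laura".
X_train = np.vstack([clips(2.0, 200), clips(0.0, 200), clips(-2.0, 200)])
y_train = np.array([1] * 400 + [0] * 200)

model = GaussianNB().fit(X_train, y_train)
print("Anglicized 'Laura' recognized:", model.predict(clips(2.0, 1000)).mean())
print("Spanish 'Laura' recognized:   ", model.predict(clips(0.0, 1000)).mean())
# With balanced data both recognition rates are high: the inequity came
# from the data the model was fed, not from the mathematics itself.
```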
