Artificial intelligence (AI) developers are no longer satisfied with programs that play checkers and optimize search engine results; they have moved on to loftier ambitions, such as diagnosing leukemia and probing their creators’ inner emotions. Humans often perceive AI as inherently superior to their own minds, completely free of earthly flaws and fallacies. However, according to Meredith Broussard, author of Artificial Unintelligence, all technology is fundamentally imbued with the beliefs and biases of those who design it, for better or worse.
Broussard presented her critical look at contemporary media’s infatuation with AI at an event in the Feminist and Accessible Publishing and Communications Technologies Speaker and Workshop Series. She emphasized the important distinction between “real AI,” meaning technologies that currently exist, and the machines of science-fiction fantasy.
The ultimate goal of AI research is to create a general intelligence that can adapt to a broad variety of situations, something some scientists argue may never be possible. Building a sentient computer with this type of intelligence is a far cry from even the most cutting-edge programs being developed today. Current applications of AI, such as machine learning, are all forms of narrow AI focused on mastering very specific tasks.
“Narrow AI is just math,” Broussard said. “It’s computational statistics on steroids.”
This form of AI is not transcendental; it is a program that, when fed copious amounts of data, improves its own performance at a narrowly defined task. All of its algorithms are produced by humans, and people inevitably build their own biases into the code they write.
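To make Broussard’s point concrete, here is a minimal sketch of what “computational statistics on steroids” can look like. The data and parameters are invented for illustration, and nothing below comes from the talk itself: the program “learns” a line through four points by repeatedly nudging two numbers to shrink its prediction error, which is ordinary arithmetic rather than anything transcendental.

```python
# A toy illustration of machine "learning" as statistical curve-fitting.
# Both the data and every choice below (learning rate, step count) are
# invented for this sketch.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs

w, b = 0.0, 0.0           # model parameters, arbitrary starting guesses
learning_rate = 0.01

for step in range(5000):  # "training" is just iterative fitting
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y      # prediction minus observation
        grad_w += 2 * error * x      # derivative of squared error w.r.t. w
        grad_b += 2 * error          # derivative of squared error w.r.t. b
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(f"learned model: y = {w:.2f}x + {b:.2f}")  # roughly y = 2x
```

Every step is arithmetic a person could do by hand; the computer simply does it millions of times faster, and it can only fit the data a human chose to feed it.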
Broussard describes “technochauvinism,” a term she coined in 2018, as the tendency to place computers and their decision-making prowess above human intelligence. The attitude traces back to the select group of white male mathematicians, educated at prestigious universities, who began developing the field of AI in the 1950s. According to Broussard, these men embedded their own biases in the technologies they imagined.
Technochauvinism also affects which researchers and projects receive funding. Renowned AI expert Marvin Minsky, who was part of a group of early advocates for space elevators, is just one of the privileged scientists benefitting from this system. His hypothetical technology has managed to stay relevant in intellectual circles despite the billions of dollars in funding it would require and the fact that it will likely never be realized.
The technochauvinism that benefits Minsky, according to Broussard, is also what allows male students and faculty members in STEM disciplines to keep harassing their female peers at alarming rates, making these fields discriminatory towards women and inherently more dangerous for them. Broussard sees that reality reflected in the technologies these fields produce.
“The computer is not inherently liberating,” Broussard said. “Just because we use technology does not mean that we are furthering the cause of justice. In fact, the opposite is true. Many times when we’ve used technology, what we’re doing is embedding existing biases in code, and we are perpetuating existing social injustices.”
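Broussard’s claim about embedding bias can be illustrated with a toy example. The sketch below is not from her talk: it fabricates “historical” hiring decisions that favoured one group, then trains a naive model on them. The code never mentions discrimination, yet the learned approval rates faithfully reproduce the bias in the data.

```python
import random
from collections import defaultdict

random.seed(0)

# Fabricated "historical" hiring records. The past decisions were biased:
# qualification matters, but group A gets an unearned boost and group B
# an equal penalty. All groups, rates, and sizes here are invented.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    approve_prob = (0.8 if qualified else 0.2) + (0.15 if group == "A" else -0.15)
    history.append((group, qualified, random.random() < approve_prob))

# A "neutral" model that simply learns the historical approval rate for
# each profile. Nothing in this code mentions discrimination.
rates = defaultdict(lambda: [0, 0])  # (group, qualified) -> [approvals, total]
for group, qualified, approved in history:
    rates[(group, qualified)][0] += approved
    rates[(group, qualified)][1] += 1

for profile, (approved, total) in sorted(rates.items()):
    print(profile, f"learned approval rate: {approved / total:.0%}")
# Equally qualified candidates get different scores depending on group,
# because the model reproduces the statistics of a biased past.
```

The code is mathematically correct and entirely impartial in its mechanics, which is precisely the problem Broussard identifies: neutrality of method does not produce neutrality of outcome.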
Broussard presented a number of strategies for pushing back against the obsession with AI. First and foremost, she advised the audience to understand what AI really is: a machine, designed primarily by male scientists, that knows only as much as it is taught. Next, governments should establish a federal consumer protection agency to audit and regulate the algorithms that shape everything from social media feeds to decisions in healthcare.
Broussard concluded by recommending that consumers assume discrimination is the default in all automated systems and learn to recognize the impact these technologies have on labourers. She gave the example of “ghost workers,” the people who perform the often traumatizing job of filtering content posted on some of the internet’s most popular sites.
“When you flag something horrific on Facebook, it [first] gets evaluated by an algorithm, but if the algorithm fails […] then that piece of horrific content goes to a person for evaluation,” Broussard said. “We need to recognize that ghost work is happening, that there are people who are operating these machines behind the scenes, and make better working conditions.”
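The flow Broussard describes can be sketched schematically. Everything below, the classifier, its threshold, and the queue, is a hypothetical stand-in rather than any platform’s actual moderation system:

```python
from queue import Queue

human_review_queue: Queue = Queue()   # where "ghost work" accumulates
CONFIDENCE_THRESHOLD = 0.9            # invented value for illustration

def classify(content: str) -> tuple[str, float]:
    """Hypothetical automated moderator: returns (verdict, confidence)."""
    if "banned-phrase" in content:    # placeholder rule, not a real model
        return "remove", 0.99
    return "keep", 0.5                # low confidence on anything else

def handle_flag(content: str) -> str:
    verdict, confidence = classify(content)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"algorithm decided: {verdict}"
    # The algorithm "fails," so a human reviewer sees the content instead.
    human_review_queue.put(content)
    return "escalated to human reviewer"

print(handle_flag("banned-phrase example"))  # handled automatically
print(handle_flag("an ambiguous post"))      # lands on a person's desk
```

The structural point survives the simplification: whatever the algorithm cannot settle falls, by design, to a human being.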