
Professor Johnathan Flowers discusses ableist algorithms in virtual lecture

Professor Johnathan Flowers of California State University, Northridge gave a virtual talk entitled “Ableist Algorithms and Digital Disability” as part of the “Disrupting Disruptions: Feminist Publishing, Communications and Technologies” speaker series on Sept. 11. Organized by professor Alex Ketchum of the McGill Institute for Gender, Sexuality, and Feminist Studies (IGSF), the series explores the intersection of feminist studies, technology, and history.

Flowers’ talk centred on the ableist connotations of discourse surrounding AI. He began by discussing the recent controversy over National Novel Writing Month (NaNoWriMo), a non-profit that connects writers with each other for support in crafting a novel. The organization drew public criticism after it encouraged writers whose physical or cognitive disabilities impair their writing to use AI if they needed it. Flowers argued that this stance upholds ableist structures through what he called a “technocapitalist disability rhetoric,” in which disabilities are seen as problems to be solved through technology, undermining the personhood of disabled people.

Even as the world becomes more AI-driven, Flowers continues to caution against the extensive use of the technology, arguing that it perpetuates oppressive practices. He contended that AI is a product of “technoableism”: the idea of technology as a means of eliminating disability rather than addressing the systemic issues disabled individuals experience.

“The increasing integration of algorithmic technologies into our daily lives not only relies on structures of ableism in society, but imposes new ableist social and political structures through their everyday applications,” Flowers explained.

Flowers described AI as a ‘political technology’ in the sense that political theorist Langdon Winner applies to all technologies. Winner holds that technologies are inherently political because they can create or reinforce existing social orders. Drawing from Winner, Flowers emphasized that AI is rooted in ableism and colonialism, and that referring to it as an ‘algorithmic platform’ is more appropriate, since the term ‘AI’ conceals the power dynamics at play.

“The term ‘artificial intelligence’ implicitly enables the ableist, eugenicist, and racist purposes to which these technologies are routinely put—to be perceived as unmitigated goods and advancements in society,” Flowers explained. 

Additionally, Flowers stated that because the discourse surrounding AI is often centred on the technology itself, it tends to ignore the benefits the technology holds for people. He used the example of Phonak, a hearing aid company that advertised its latest hearing aids as the first of their kind to adopt AI technology.

“The [Phonak] advertisement positions the advances in technology as the primary focus of the description, rather than the material benefits it may bring to disabled persons,” Flowers said. “This reframing relies on a milder form of technoableism that positions algorithmic technologies as the future solution to ‘problems’ of disability.”

The notion that computer algorithms perpetuate ableist structures is familiar to Ketchum, whose research focuses on how marginalized groups respond to digital technologies. She noted that algorithmic ‘gatekeeping’ can affect academics who produce and disseminate feminist scholarship in digital spaces.

“[O]ne issue is that some major academic journal publishers announced that they are selling their database of articles to train AI—a labour issue, an environmental issue, and also (though legally contested) a copyright issue,” Ketchum wrote to The Tribune.

Ketchum proposed that feminist publishing and communications can resist ableist algorithms by opting out of using generative AI altogether.

Jeremy Frandon, a PhD student in Information Systems Engineering at Concordia University, noted that “AI objectivity” is shaped by the datasets it uses, potentially embedding systemic biases.

“Research groups try to gather data that represents the real world and that is free of sampling bias, but creating a quality dataset is such a challenge that some papers are published just to present a new dataset,” Frandon wrote to The Tribune.

For Flowers, it is crucial to pay attention to the motivations of those who created AI technologies, as well as the biases within the technologies themselves.

“We must attend to the political and social purposes that motivate the introduction of these technologies, rather than simply try to understand the technologies themselves,” Flowers said.
