Recap: Queerness and AI roundtable

McGill community members gathered for a roundtable discussion on Queerness and AI, organized by Web Services and Equity at McGill as part of Queer History Month (QHM), on Oct. 23. Three panellists—McGill’s Associate Director of Inclusive Excellence Kit Malo, Senior Employment Equity Advisor Ande Clegg, and Digital Communications Manager Joyce Peralta—led the talk alongside the roundtable’s emcee, Digital Communications Associate Jaylen Gordon. The goal of the event was to engage with and spread awareness about those whom AI misrepresents and discriminates against.

The panellists led guests in an exercise, prompting Microsoft Copilot to generate an image of “Queer McGill University community members for a McGill website.” The resulting picture showed a large group of people holding rainbow pride flags, many with the colours in the wrong order. Attendees were quick to point out the image’s lack of diversity: Microsoft Copilot depicted the figures as uniform in body type, ability, style, and identity. Throughout the discussion, many participants expressed feeling offended, but not surprised, by Microsoft Copilot’s stereotypical depictions of queer communities.

The theme of this year’s QHM is visibility, with events and programming slated throughout the month of October. The roundtable also discussed journalist Reece Rogers’s Wired article “Here’s How Generative AI Depicts Queer People.” In it, Rogers discusses how AI’s depiction of queerness relies on and amplifies stereotypes around lesbian, gay, and bisexual individuals while mischaracterizing the transgender community completely.  

McGill offers a “secure version” of Microsoft Copilot to its student body, with a disclaimer in its general guidelines that “Human biases may skew the data that was used to train an AI tool […] resulting in content that reflects or amplifies those biases.”

The group acknowledged that issues in AI’s depiction of queerness reflect society’s historic and present mischaracterization of 2SLGBTQ+ people. Panel members argued that for AI to begin generating fair representations of the queer community, it must be trained to reject biased and discriminatory images of queerness and to embrace authentic depictions of the 2SLGBTQ+ community.
