The McGill Artificial Intelligence Society (MAIS) held its first in-person event of the school year, a panel titled “Ethics in AI,” on Nov. 17. The event was at full capacity, drawing approximately 35 people from the McGill community to the Trottier lecture hall.
The panel featured three professionals who engage with issues surrounding AI ethics in their respective disciplines: Masa Sweidan, McGill alumna and business development manager at the Montreal AI Ethics Institute (MAIEI); Ignacio Cofone, assistant professor of privacy law, AI law, and business associations at McGill; and Mark Likhten, legal innovation lead at the Cyberjustice Lab at the Université de Montréal (UdeM).
Kaustav Das Sharma, U4 Engineering and team lead of the McGill AI Podcast, moderated the event. The panellists acknowledged that AI is often misrepresented in media and popular culture, and agreed that it is important for the public to gain a more holistic understanding of AI and the ethical barriers that emerge with its advancement.
“What is important is to […] be clear about what AI is,” Likhten said. “It is a very powerful tool, but it’s still a tool which needs […] human intervention.”
Cofone pointed to more specific concerns, namely bias, transparency, and privacy in AI, as the issues that should garner greater public attention. These three issues are at the core of his research on AI regulation.
“One important aspect to be aware of […] is AI bias,” Cofone said. “AI decision-making affects everyone, every day [….] Transparency [in AI] is important particularly with decision-making processes such as calculating credit scores to see if you would get a house [or] calculating your risk score to see if you go to jail [….] Privacy is important because most AI is trained with [sensitive] information about us.”
There was also discussion regarding how public institutions can work to push inclusivity and diversity to the forefront of AI research and development. Sweidan stated that diversifying AI education is a crucial first step.
“Having education that includes women, BIPOC, and LGBTQ [communities] is extremely important,” Sweidan said. “Having people with different backgrounds, looking at it from the philosophy standpoint, [from computer science], from law, I think that is what leads to a more holistic education, and I think that is an extremely important first step.”
Panellists also discussed the potential of AI systems to inflict harm, and the importance of adequate regulation to protect personal data. Ending on a positive note, Das Sharma asked the panellists what excites them about a future shaped by cutting-edge AI development. Sweidan said that she is excited by the possibilities for AI creativity, while Likhten cited the applications of AI in justice.
“I am actually very optimistic for the future,” said Likhten. “[Cyberjustice Lab] works a lot with tools [using] AI to improve access to justice, and the possibilities that we see in that field are endless [….] We talk a lot about people getting stripped of their personal data […], and the bad sides of AI, but there are lots and lots of […] good things that you can do with AI that remain within the boundaries of ethical principles.”