Since OpenAI unveiled ChatGPT in late 2022, showcasing the startling capabilities of artificial intelligence (AI), there has been a surge of sophisticated language models, such as Google’s Bard, Meta’s Llama, and Anthropic’s Claude. These technologies have captured both the attention and the scrutiny of higher-education institutions. Just as the COVID-19 pandemic shook conventional modes of instruction, the education system is now facing another disruption: generative artificial intelligence.
Large Language Models (LLMs), such as GPT, which stands for “Generative Pre-trained Transformer,” are trained on vast amounts of textual data. They work by breaking text into smaller parts known as “tokens” and predicting the probability of each possible next token given the ones that came before. Notably, GPT-3, trained on a dataset of roughly 500 billion tokens, represents a significant milestone in AI development. A study conducted in January 2023 involving 1,000 students in the US found that 89 per cent had used ChatGPT to help with homework assignments, and the latest available data indicates that the tool now has a user base of over 180 million, underscoring its growing influence on education.
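To make those mechanics concrete, the sketch below is illustrative only: it uses the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in for the much larger models discussed in this article, splitting a sentence into tokens and listing the model’s most probable next tokens.

```python
# Illustrative sketch: how a GPT-style model breaks text into "tokens" and
# assigns a probability to each possible next token. GPT-2 is used here only
# because it is small and openly available, not because it is the model the
# article describes.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "Generative AI is changing higher"
inputs = tokenizer(text, return_tensors="pt")

# The "tokens" the model actually sees (word fragments, not whole words).
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()))

with torch.no_grad():
    logits = model(**inputs).logits              # scores at every position
next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token

# Show the five most likely continuations and their probabilities.
top = torch.topk(next_token_probs, 5)
for prob, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([idx])!r:>15}  p = {prob:.3f}")
```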
Amid this technological change and the increasing usage of — and reliance on — generative AI among students, the question for McGill’s academic community is becoming clear: How can the university proactively harness the potential of AI while addressing the concerns it poses, in a way that benefits the higher-education community as a whole?
Personalized assistance for students and teachers
The ability of AI models to offer each student immediate, personalized assistance is quickly becoming apparent. Educational platforms like Khan Academy and edX are already incorporating AI-powered learning assistants, attempting to deliver the benefits of one-on-one tutoring at a scale previously unattainable.
A fourth-year student at McGill, who preferred to remain anonymous, shared with The Tribune their appreciation for using AI in academia and learning, particularly in providing comprehensive reviews of forgotten topics. “I usually look at [ChatGPT] to review concepts that I haven’t seen in a while, because sometimes you need to go back to first-year stuff that you completely forgot and it’s nice to give you a good summary,” they said. “I always find that it condenses information much better than just going on the website if you’re looking for quick summaries of something that interests you, but you don’t want to spend time reading 20 pages.”
They also emphasized how AI enables them to explore ideas independently by engaging in conversations with it and prompting it to consider what would happen in different scenarios.
Educators, meanwhile, are finding AI to be useful in the classroom. It can act as a teaching assistant by providing advice on course content, aiding with lesson planning, offering student feedback, and streamlining tasks such as exam design.
Rico Li, a U2 student in the Faculty of Engineering, recounted an assignment in his software development class — ECSE 250, Fundamentals of Software Development — that featured content and a story description generated by ChatGPT, as well as images generated with Canva’s Magic Media tool, to provide better context. He believed that his professor’s use of AI in crafting the assignment added a creative touch, making it more engaging and appealing to students’ interests.
In using AI as a collaborative partner, students and educators alike can accomplish tasks more effectively. Nevertheless, excessive reliance on generative AI tools can erode students’ critical thinking and creativity, impeding genuine learning.
In fact, it can be dangerous to use these tools without an understanding of their unreliability and without fact-checking the information they output. Although the answers these models generate may sound very fluent, knowledgeable, and natural, they can present false information and explanations as true, and even fabricate sources — a phenomenon known as “hallucinations.” Models can also amplify biases present in their training data.
“It can be extremely discouraging [to learning] when we [users] think AI is something omniscient,” Lydia Cao said, referring to her research on the implications of generative AI for education as a Postdoctoral Fellow at the Harvard Graduate School of Education. “I think we’ve got to see AI as a learning partner, whose idea you can build upon, challenge, and criticize.”
Rather than seeking outright answers from the bot, then, students are better served using it for hints and guidance toward solving the problem themselves.
Academic integrity
The advent of generative AI has heightened concerns around academic integrity, particularly the risk of plagiarism. With students having the option to have an AI model write their entire essays in seconds, it is understandable why educators and policymakers may be worried. One of France’s top universities, Sciences Po, has gone as far as to ban the use of ChatGPT without transparent referencing, with punishments that could lead to expulsion from the school — or even French higher education altogether.
In response to these concerns, AI detection tools like OpenAI’s AI classifier, GPTZero, and Turnitin’s own built-in detector have been developed. However, while these tools may provide some degree of transparency, they often fall short in accurately detecting AI-assisted plagiarism, producing both false positives and false negatives. This was exemplified in July 2023, when OpenAI discontinued its AI detection tool due to its “low rates of accuracy.”
"It’s hard for machines to detect whether a piece of text is machine-generated or human-written,” Siva Reddy, assistant professor in the School of Computer Science and the Department of Linguistics at McGill University, said. “This is because humans are good at manipulating text such that the machine gets confused. A machine-generated text is often grammatical and follows certain patterns that maximize the probability of the text. [...] One can take such text and edit it here and there, for example, by making a few sentences ungrammatical, or using spelling mistakes or infrequent terminology, thereby making the text out of its usual distribution for the machine to detect. This is an active area of research in natural language processing also known as watermarking.”
Watermarking remains an open research problem, and together with factors such as the quality of training data and model complexity, it contributes to the unreliability of AI detection tools. This confluence of factors suggests that a disproportionate dependence on these tools could prove more detrimental than beneficial for students.
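Reddy’s point can be illustrated with a toy example: some detectors score how “probable” a passage looks to a language model (its perplexity) and flag suspiciously fluent text as machine-generated. The sketch below again uses GPT-2, and both the sample sentences and the threshold are entirely hypothetical; it only shows how a few deliberate misspellings can raise the score enough to change the verdict, not how any real detector works.

```python
# Toy perplexity-based detection heuristic: fluent, machine-like text scores a
# low perplexity, so a low score is (naively) treated as a sign of AI
# authorship. Small edits such as typos push the text "out of distribution"
# and raise the score. GPT-2, the sentences, and the threshold are all
# illustrative assumptions, not a real detection tool.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean negative log-likelihood per token
    return torch.exp(loss).item()

machine_text = "Artificial intelligence is transforming the way students learn and study."
edited_text = "Artificial inteligence is transformin how studnts learn n study."  # deliberate typos

THRESHOLD = 40.0  # hypothetical cutoff: below this, flag as "machine-generated"
for label, sample in [("original", machine_text), ("edited", edited_text)]:
    ppl = perplexity(sample)
    verdict = "flagged as AI" if ppl < THRESHOLD else "passes as human"
    print(f"{label}: perplexity = {ppl:.1f} -> {verdict}")
```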
“Students need to be very concerned if McGill goes this route from a policy perspective. Profs should definitely not use these tools on their own,” Andrew Piper, a professor in the Department of Languages, Literatures, and Cultures and Director of the Bachelor of Arts and Science program at McGill, said.
For educators, it is critical to understand the limitations of these tools before using them, in order to maintain a culture of trust between students and teachers; the goal should be to use AI to enhance students’ learning rather than as an instrument of scrutiny. Piper, for example, employs two strategies in his classroom: “a) ask students to sign a pledge on how they used AI for their assignment and b) structure assignments in a way to minimize GPT’s usefulness.”
Rethinking assessment methods
The computational powers of AI have undoubtedly challenged traditional methods of assessing students. Piper notes that it has essentially removed an important kind of writing assignment: “Short essays where students have considerable time to think through and craft their answers.”
Piper explained that the implications of AI mean take-home essays and assignments — which are common in the humanities — need to be rethought. “Either we move to more in-class assessments or we have to structure our assignments in such a way that they are not easily completed by GPT,” Piper said.
In this evolving landscape, a question looms: How can we prevent students from using AI to complete their assignments for them? Cao argues that students must recognize the intrinsic value of their educational pursuits beyond mere assessment-oriented goals.
“We need to help learners to understand that learning is much more than generating a product,” she explains. “For example, learning to write is not just about producing a coherent piece of text but about developing the capacity to organize one’s thoughts, build on others’ ideas, analyze claims, create new ideas, and communicate with others. For students, often learning is driven by the assessment.”
“Depending on how the professor structured the final, the student will adapt their learning strategy or even what they’re going to learn accordingly. So I think professors must help students see the purpose of their learning beyond merely generating a product and really understand, in a metacognitive way, why I’m here — why I’m learning what I’m learning.”
Encouraging this mindset requires open dialogue between educators and students regarding the roles and appropriate uses of AI in the classroom — something that many students told The Tribune is rarely done in their courses at McGill. Initiating these conversations can help students discern how to most effectively utilize AI in their learning of the subject matter.
Where do we go from here?
To address the use of generative AI at McGill, the Academic Policy Committee’s (APC) Subcommittee on Teaching and Learning (STL) created a working group in May 2023. On June 13, 2023, the STL approved the working group’s report, which underlined two key recommendations for the use of generative AI at McGill.
The first recommendation outlines five key principles to ensure the responsible adoption of generative AI tools.
The second recommendation calls on the Office of the Provost and Vice-Principal (Academic) (OPVA) to provide clear mandates and resources to support the development and implementation of strategic roadmaps in line with those principles.
If adopted, the STL’s recommendations are expected to provide the university community with clear direction for appropriate, thoughtful, and decisive action in support of McGill’s academic mission.
As artificial intelligence continues to develop at an accelerating pace, educational methods and policies will be prompted to adapt continually. The question of AI’s place in education calls for ongoing discussion between students and educators; through those conversations, teachers can be intentional about designing AI either into their assessment and teaching methods or out of them.
Adopting a human-centric vision of AI means embracing the technology, cautiously, as a flawed yet valuable ally. That way, as new developments emerge, the higher-education community can navigate the evolving technological landscape with confidence.