As AI experts and leaders from around the world gathered for the first-ever AI Safety Summit earlier this month to figure out how AI should be regulated internationally, UBC scholars are at the forefront of grappling with generative AI on a smaller, but equally important, scale — in the classroom.
A June 2023 UBC report described ChatGPT as a “double-edged sword” that will take time to learn to wield safely and effectively.
According to the report, “ethical, intentional and acknowledged use has been UBC’s approach to date and seems a wise course to stay.” The only certainty is uncertainty.
In the absence of cohesive policy, UBC instructors are trying out new things at the classroom level and collaborating to share ideas.
Meet the steering committee
UBC recently formed a Generative AI Steering Committee, with AI and education experts including Dr. Christina Hendricks and Dr. Elisa Baniassad, academic director of the Centre for Teaching, Learning and Technology (CTLT). The committee also includes Dr. Jeff Clune, associate professor of computer science and a former research team leader at OpenAI — the company that created ChatGPT.
The steering committee is meant to create guidelines for UBC around safe, ethical and productive engagement with AI. This includes making sure ChatGPT goes through a privacy impact assessment, a requirement for all new technologies and platforms at UBC, as well as consulting with faculty about how the technology is impacting their work.
Hendricks, the interim vice-provost and associate vice-president, teaching and learning, recognizes that properly integrating the technology into classes can create a challenging work environment for instructors. For those dealing with catching and penalizing massive influxes of AI-generated work, ChatGPT might not seem like such an intriguing innovation.
“In some cases, [using ChatGPT is] not appropriate at all, where it’s very important for students to be doing the work to engage in the learning,” said Hendricks.
Hendricks said that a Teaching and Learning Subcommittee within the Generative AI Steering Committee will be established to absorb feedback from both faculty and students to learn how to provide the best support for everyone.
However, while ChatGPT is notorious as a cheating aid, it also has potential for education. Baniassad, a professor of computer science, said that she uses it to help her design robust exam questions, a traditionally laborious task.
A learning 'crutch'
Dr. Jennifer Jenson, professor of digital languages, literacies & cultures, is excited about leveraging generative AI tools to create more equitable learning conditions for students. She said that ChatGPT can be useful for English language learners as a translation and English grammar aid.
However, she is worried about students using ChatGPT as a “crutch,” inhibiting real learning and understanding.
Jenson admitted that ChatGPT can generate some surprisingly good research questions, if often generic and repetitive ones. But if students come to rely on the tool, they may never learn to form their own questions, which could lead to a deeper loss of important cognitive skills.
The many limitations of ChatGPT are veiled by a facade of human-like intelligence, though it is “not an ‘intelligence’ that we as humans recognize,” said Jenson. Making students aware of this misconception is important.
“What is [ChatGPT] going to take away at the same time it’s giving something? How do we measure and understand that?” asked Jenson.
Primary privacy concerns
One of the most pressing concerns with ChatGPT from a teaching standpoint is protecting students’ privacy, Baniassad said. Signing up for ChatGPT requires inputting personal information, and no one outside of OpenAI knows where that information goes.
Then there are the prompts themselves: what information does ChatGPT retain from them, and how and where does it store and use that information in the future? Baniassad said this isn’t something a lot of users are thinking about: “Say they have worked for five years on like a thesis, and they put their theorem into ChatGPT ... ChatGPT will remember your theorem and it will think it thought of it.”
Much of how ChatGPT works under the hood is still unclear to the general public.
Nevertheless, ChatGPT and tools like it seem to be here to stay, so Hendricks is focused on supporting their use while ensuring students’ safety.
“How can we support the opportunities that this provides for teaching and learning while also managing and addressing the risks that it provides, including ethical considerations?”
Another major ethical concern for Hendricks is AI bias, which happens when an AI program is trained on or learns from inaccurate and prejudiced data. This can cause the AI to insidiously perpetuate and reinforce discrimination, stereotypes, and other inequalities. OpenAI’s Educator FAQ admits as much.
CTLT and the computer science department are currently working on a way to use ChatGPT’s application programming interface (API) to allow UBC staff and students to use ChatGPT with appropriate security measures in place. An API is a published interface that lets third-party developers access a piece of software’s features from within their own applications.
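UBC has not published technical details of its planned tool, but at its simplest, using a chat API means a program builds a structured request and sends it to a web endpoint on a user’s behalf. The sketch below, in Python, shows how a hypothetical university-run wrapper might construct such a request body; the endpoint URL and field names follow OpenAI’s publicly documented chat completions API, while the wrapper itself and the function name are illustrative assumptions, not UBC’s actual implementation.

```python
import json

# Endpoint documented publicly by OpenAI:
# https://api.openai.com/v1/chat/completions
# A university-run wrapper could build requests like this server-side,
# keeping the API key (and any logging/retention policy) under its
# own control rather than each student's.
def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Return the JSON request body for a single chat completion."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful teaching assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

# The wrapper, not the student, would attach credentials and send this body.
request_body = build_chat_request("Explain recursion in one paragraph.")
print(request_body)
```

Routing requests through an institutional server this way is one common design for the kind of “regulated way of interacting with ChatGPT” described below, since the institution can strip identifying information before anything reaches OpenAI.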
Baniassad estimates that UBC will release a safer, regulated way of interacting with ChatGPT by next fall.
'An exciting time to be an educator'
Given the novelty of ChatGPT and the unfamiliar puzzles it poses, all three education experts The Ubyssey spoke to emphasized collaboration as a key to finding successful solutions.
“I think [ChatGPT is] one of these strange new technologies where everybody can bring something from their own perspective, which is why coming together as interdisciplinary communities is so important,” said Baniassad. “I think there is so much we can learn from each other.”
For example, Baniassad said that the rigorous oral exams medical schools traditionally use to test students’ knowledge could be a useful examination method to try in other disciplines in the era of generative AI.
Both software engineering, which Baniassad teaches, and medicine are disciplines which hinge on actually being able to apply your learning.
“It’s nice to know that somebody asked [your doctor], how do you do this surgery? And got a verbal response,” said Baniassad. ChatGPT is sparking discussions about whether other disciplines can or should test in the same way.
“[Software engineering] is an instinct, it’s a process, it’s a practice,” said Baniassad. “So how do we assess that?”
In working together, instructors learn from each other’s stories. A guide for assessment redesign on CTLT’s website, which includes moving away from take-home writing assessments and towards in-class exams and oral assessments, also provides a platform for instructors to share how they are using ChatGPT in their classrooms. CTLT is UBC’s pedagogy resource hub, hosting weekly online group drop-in clinics, one-on-one consultations, workshops, forums and more.
Jenson is one such storyteller. In an assignment for her course ETEC 511, Foundations of Educational Technology, she encourages students to use ChatGPT to answer research questions, then asks them to critically compare their own individually-researched responses to the chatbot’s. Critical thinking is especially important when engaging with ChatGPT, which can confidently present falsehoods as truths. Jenson said that she shared the assignment with other educators to experiment with and adapt to their own classrooms.
“Often, the best resources for faculty are colleagues for advice because they can offer more directly relevant examples,” said Hendricks.
The first day of the historic AI Safety Summit concluded “with a panel discussion on the transformative opportunities of AI for public good now and in the long-term, with a focus on how it can be used by teachers and students to revolutionise education,” according to a UK government press release.
“It’s an exciting time to be an educator,” said Baniassad.