The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed Exams: A Critical Examination and Guidance for Writing Centre Support (Part 2 of 2)

Vol. 5, No. 1 (Winter 2024)

Liv Marken, Contributing Editor, CWCR/RCCR

This is Part Two of two in this series. You can read Part One, The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed, and Invigilated Exams: Causes, Context, and the Perpetuation of Ableism here. CWCR/RCCR Editor


My previous post discussed the resurgence of traditional handwritten, in-person, timed, and invigilated exams as a response to pandemic-era cheating and GenAI. As some post-secondary instructors return to these assessments, they risk moving backwards: the nature of these exams is inconsistent with commitments to equity and inclusion. Accommodation gains, it seems, were a corollary of pandemic remote learning, and these gains have gradually diminished amid an attitude of “returning to normal.” Continue reading “The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed Exams: A Critical Examination and Guidance for Writing Centre Support (Part 2 of 2)”

Is ChatGPT responsible for a student’s failing grade?: A hallucinogenic conversation

Vol. 5, No. 4 (Fall 2023)

Brian Hotson, Editor, CWCR/RCCR


The responsibility for using GenAI in academic pursuits in higher education is shared between the user, the tool, and, in instances where the tool is part of teaching and learning processes, the institution. As such, to say that students using ChatGPT as a research tool bear sole responsibility for the accuracy of the information the tool provides is unethical and unjust. This is especially the case if a student is directed by an instructor to use the tool. It can also be argued that the institution bears responsibility if it doesn’t provide instruction (digital literacy) on using these tools.

ChatGPT caveats.

The anthropomorphism of GenAI writing and research tools marks their results as different from those of Google Scholar or Wikipedia, for example. GenAI tools, promoted as research and writing aids, bear equal and sometimes greater responsibility for the information they provide. These tools often acknowledge the limitations of their actions, and of the availability and accuracy of the data on which they draw, by providing caveats with their answers. At the same time, the anthropomorphic language used in these answers is convincing and authoritative. As a result, these tools bear responsibility not only for the information they provide, given its authoritative presentation, but also to those who use this information and to the work produced with the tool, especially in light of OpenAI’s own admission that ChatGPT “hallucinates,” or makes up information. Continue reading “Is ChatGPT responsible for a student’s failing grade?: A hallucinogenic conversation”

ChatGPT snapshot: University of Saskatchewan

Vol. 4, No. 5 (Spring 2023)

Liv Marken,
Learning Specialist (Writing Centre Coordinator)
Writing Centre
University of Saskatchewan


In April 2023, I asked writing centre practitioners to answer five questions on ChatGPT and their centres’ responses. Over the next month, I’ll post the responses. If you have a perspective to offer, please use this form, and I’ll post it here. Brian Hotson, Editor, CWCR/RCCR


What actions, policies, resources, or information has your institution put in place for ChatGPT?

It has been an exciting but challenging term because there has been uncertainty about who would take leadership on the issue. There wasn’t any official guidance issued, but on our academic integrity website, an instructor FAQ was published in early March, and soon after that a student FAQ. Library staff (including me and my colleague Jill McMillan, our graduate writing specialist) co-authored these with a colleague from the teaching support centre. Continue reading “ChatGPT snapshot: University of Saskatchewan”

Academic writing and ChatGPT: Step back to step forward

Vol. 4, No. 2 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR
Stevie Bell, Associate Editor, CWCR/RCCR


Sam Altman, a co-founder of OpenAI, creators of ChatGPT, said in 2016 that he started OpenAI to “prevent artificial intelligence from accidentally wiping out humanity” (Friend, 2016, October 2). Recently, Elon Musk (also a co-founder of OpenAI) and The Woz (a co-founder of Apple), along with several high-profile scientists, activists, and AI business people, signed a letter urging a pause in the rollout of large language model (LLM) AI tools, such as ChatGPT. The letter warns of an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control” (Fraser, 2023, April 4). A Google engineer, Blake Lemoine, was fired for claiming that Google’s LLM tool, LaMDA, had become sentient:

I raised this as a concern about the degree to which power is being centralized in the hands of a few, and powerful AI technology which will influence people’s lives is being held behind closed doors … There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed. (Harrison, 2022, August 16)

Continue reading “Academic writing and ChatGPT: Step back to step forward”

Writing centres and ChatGPT: And then all at once

Vol. 4, No. 1 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR


A couple of months ago, I asked OpenAI‘s ChatGPT to write a blog post on writing centres and academic integrity. This week, I asked the new version of ChatGPT to write this piece again. For the older version, I used this prompt:

Write a five-paragraph blog post about the state of writing centres in Canada, with citations and references. The first paragraph is an overview of Canadian writing centres for 2022. The second paragraph is an overview of academic integrity issues in Canada in 2022. The third paragraph is an overview of how academic integrity affects Canadian writing centres. The fourth paragraph provides a preview of possible academic integrity issues in Canada in 2023. The fifth paragraph is a summation of the first four paragraphs.

Continue reading “Writing centres and ChatGPT: And then all at once”

Friends don’t let friends Studiosity (without reading the fine print)

A surveillance camera on the ledge of a building with a cloudy sky in the background.

Vol. 4, No. 1 (Fall 2022)

Brian Hotson, CWCR/RCCR Editor
Stevie Bell, CWCR/RCCR Associate Editor

Like many teachers on a late-August vacation, education companies can see September on the horizon. The difference is that these companies aren’t relaxing. They’re sending e-mails and booking video conferences with offers of freshly printed textbooks, handy workbooks, new online tools, and easy-to-use mobile apps that promise to make student life easier and save universities and colleges money.

The business of education is very large, with total global spending estimated at $4.7 trillion (USD) (UNESCO). By comparison, the total GDP of all African nations in 2021 was $2.7 trillion (USD) (StatisticsTimes, 2021). In 2018–2019, “public and private expenditure on [postsecondary] education” in Canada was $41.5 billion. Education companies would like a share of this money. In this context, a new-to-Canada online writing and tutoring tool, Studiosity, has appeared. Continue reading “Friends don’t let friends Studiosity (without reading the fine print)”