GenAI and the Writing Process: Guiding student writers in a GenAI world (Part 2 of 2)


Vol. 5, No. 5 (Fall 2023)

Clare Bermingham, Director, Writing and Communication Centre, University of Waterloo

This is part two of a two-part series. Part one can be found here. CWCR/RCCR Editor


How should writing centres advise students and instructors on the use of GenAI in their writing and communication processes? This question has been front of mind for many of us who manage and work in university and college writing centres and learning centres. And there isn’t a single answer.

When making decisions about how to support students with GenAI, we, as writing centre leaders and practitioners, must account for our local contexts, the knowledges and stages of the students we tutor, and the learning goals or outcomes for particular learning situations or tasks. Our guidance for undergraduate students will differ from our guidance for graduate students, and multilingual students may have different needs than students whose home language is English. In this blog post, the second in the series about guiding students through this new landscape, I share questions and ideas to help writing centre colleagues take an inventory of their centres' and institutions' needs and prepare their tutors for encounters with GenAI in students' work. Continue reading “GenAI and the Writing Process: Guiding student writers in a GenAI world (Part 2 of 2)”

Is ChatGPT responsible for a student’s failing grade?: A hallucinogenic conversation

Vol. 5, No. 4 (Fall 2023)

Brian Hotson, Editor, CWCR/RCCR


The responsibility for using GenAI in academic pursuits in higher education is shared between the user, the tool and, in instances where the tool is part of teaching and learning processes, the institution. As such, to say that students using ChatGPT as a research tool bear sole responsibility for the accuracy of the information the tool provides is unethical and unjust. This is especially the case if the student is directed by an instructor to use the tool. It can also be argued that the institution bears responsibility if it doesn’t provide instruction (digital literacy) on using these tools.

ChatGPT caveats.

The anthropomorphism of GenAI writing and research tools marks their results differently from those of Google Scholar or Wikipedia, for example. GenAI tools, promoted as research and writing aids, bear equal and sometimes greater responsibility for the information they provide. These tools often position themselves within the limitations of their actions and the availability and accuracy of the data on which they draw by providing caveats with their answers. At the same time, the anthropomorphic language used in providing these answers is convincing and authoritative. As a result, these tools bear responsibility not only for the information they provide, given its authoritative presentation, but also to those who use this information and to the work users produce with the tool, especially in light of OpenAI’s own admission that ChatGPT “hallucinates,” or makes up, information. Continue reading “Is ChatGPT responsible for a student’s failing grade?: A hallucinogenic conversation”

The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed, and Invigilated Exams: Causes, Context, and the Perpetuation of Ableism (Part 1 of 2)

Vol. 4, No. 3 (Summer 2023)

Liv Marken, Contributing Editor, CWCR/RCCR

Part two, The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed Exams: A Critical Examination and Guidance for Writing Centre Support, is here. CWCR/RCCR Editor


When post-secondary institutions resumed in-person classes this year, many instructors and programs brought back handwritten, in-person, timed, and invigilated examinations (Hoyle, 2023; McLoughlin, 2023). This return to tradition was partly spurred by anxieties around the increase in student cheating during the remote phase of the pandemic (Bilen & Matros, 2021; Eaton et al., 2023; Lancaster & Cortolan, 2023; Noorbehbahani, Mohammadi, & Aminazadeh, 2022; Peters, 2023; Reed, 2023). Then, with OpenAI’s November 30, 2022 release of the artificial intelligence text generator ChatGPT, anxieties about cheating escalated rapidly (Heidt, 2023). The AI language model’s ability to quickly generate natural-sounding text (in addition to its abilities in language tasks such as translation, summarization, and question answering) was exciting but also alarming (Cotton, Cotton, & Shipway, 2023; Susnjak, 2022). Since its release, ChatGPT’s steady improvement, as well as the proliferation of similar AI writing tools, has led to newly intensified anxieties around maintaining academic integrity (Cotton, Cotton, & Shipway, 2023; Susnjak, 2022). AI detectors, which may seem like a silver bullet to prevent and catch plagiarism, have been shown to make false accusations (Drapkin, 2023) and to show bias against non-native English speakers (Liang et al., 2023). OpenAI found that its own detection tool, AI Classifier, was simply not effective at catching cheating, leading the company to “quietly” shut it down (Nelson, 2023): “As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy” (OpenAI, 2023). With pandemic and generative AI cheating concerns, and no easy solutions, post-secondary institutions are in a race against the clock to redesign assessment before the fall semester (Fowler, 2023; Heidt, 2023; Hubbard, 2023). Continue reading “The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed, and Invigilated Exams: Causes, Context, and the Perpetuation of Ableism (Part 1 of 2)”