Vol. 4, No. 5 (Spring 2023)
Liv Marken,
Learning Specialist (Writing Centre Coordinator)
Writing Centre
University of Saskatchewan
In April 2023, I asked writing centre practitioners to answer five questions about ChatGPT and their centres’ responses to it. Over the next month, I’ll post the responses. If you have a perspective to offer, please use this form, and I’ll post it here. Brian Hotson, Editor, CWCR/RCCR
What actions, policies, resources, or information has your institution put in place for ChatGPT?
It has been an exciting but challenging term because there has been uncertainty about who would take leadership on the issue. There wasn’t any official guidance issued, but on our academic integrity website, an instructor FAQ was published in early March, and soon after that a student FAQ. Library staff (including me and my colleague Jill McMillan, our graduate writing specialist) co-authored these with a colleague from the teaching support centre.
There are certainly people stepping up, especially at our teaching and learning centre and the library. I know that our teaching support centre hosted sessions that dozens of faculty and teaching staff attended. There is a great hunger for information and guidance, whether that is motivated by fear, curiosity, or a sense of responsibility.
What actions, policies, resources, or information has your centre put in place for ChatGPT?
In the absence of an institutional policy (and amid evolving conversations around privacy, ethics, and labour practices connected to a very new tool), my main goal, for now, is to protect student staff as well as students from being implicated in any kind of academic misconduct. To that end, in January, we put out a policy and a guide on AI writing tools. The policy, though, is entirely tutor- and student-focused. In writing it, I asked for input from an academic integrity expert, a faculty member who studies rhetoric and communication, an undergraduate student doing research on AI for us, and a graduate student tutor with a background in disability studies.
The policy has now been through four revisions. It covers protection of student data and privacy; ChatGPT’s shortcomings; the need for students to verify rules in each writing context; how students can approach their professors about ChatGPT in a way that doesn’t implicate them; how generative AI relates to the definition of plagiarism at USask; recommendations around attribution; what students might think about when using generative AI, such as intention, context, and transparency; how to advise students on thinking critically about outputs; and what considerations there might be around what may be lost. Once we have some institutional guidance, I look forward to writing about what will be gained. I would love to see others’ approaches, too!
Is your centre providing training for the writing centre staff?
Yes, but I would call it “thrown together” at this point! We talk about AI writing regularly, especially this term, and especially with the drop-in and appointment tutors. Tutors know to share information with me when they encounter students asking about or using it. Before ChatGPT came out, we briefly talked about assistive technology, innovative writing approaches, and popular AI writing tools (such as Quillbot or Grammarly). These earlier discussions happened in the framework of discussions about access to tools, personal experiences, the idea of single authorship, co-creation, language learning, and assistive technology. We also talked about other moments in history when there was a massive shift like this: the creation of writing itself in various cultures, spell check (scandal!), and the internet.
I will be spending the summer thinking through how to adapt training and resources. I’m paying attention to conversations around Indigenous knowledges and copyright, and what generative AI might mean for that. We need to adapt so much more quickly than we did with the internet — so if you are asking yourself why you’re tuckered out this term, that might be part of it.
What are your students saying about ChatGPT?
I’ve heard from my tutors, my student research intern, and my professional communication course students, as well as faculty reports of what students are saying (a fairly small sample size, but an interesting diversity of responses). Students are:
- frustrated with the lack of communication from higher up, especially their instructors;
- oblivious to its existence;
- using it with innocent (I think) enthusiasm without knowledge of potential ethical issues;
- excited by its possibilities. When used in specific ways, it has made life easier for some folks (language learning, reverse outlining, summarizing readings, etc.);
- frustrated that it is expensive to access the full version;
- afraid of using it;
- very much against it and frustrated that other students are using it to cheat; or
- worried about their careers and/or curious about how it’s being used in the workplace already.
In the library, I overheard a group of students talking about responding to a discussion board post using ChatGPT. They didn’t think highly of the discussion post question; of course, it’s well known that when students don’t see assignments as useful or meaningful, they are more likely to cheat.
I’m not hearing many faculty or students questioning or showing much awareness of data privacy issues, especially with detectors. We see assignments where students are required to share personal stories, and I’d hope that faculty wouldn’t enter these texts into a detector (not that they are reliable, anyhow).
I checked in with my professional communication students throughout the term about what they were told in other classes, and they reported that, even later in the term, no professor had talked about it in anything other than warning terms (i.e., “don’t cheat!”). I have only 22 students, but they are from three different professional colleges.
As a side note, I allowed my students to use ChatGPT as long as they wrote a paragraph explaining how and why they used it. They also needed to cite the prompts they used, which led to a discussion about whether AI-generated text is authored, in the sense of responsibility, and therefore citable. I was clear, and repeated often, that there were boundaries around its use and that not all instructors would have the same opinion. In the end, the five or so students who chose to play with it found it not worth the effort, as they had to work in reverse to paraphrase and find sources… but of course the tool will get better, and the technology will be (is?) integrated into research tools and other applications. We had really interesting discussions about the rhetorical, contextual issues involved; it was an amazing way to discuss the whole idea of learning objectives, authorship, purpose, integrity, thinking, relation, credibility, creativity, and more, especially in a rhetorical communication class. I have never had a class so excited to talk about academic integrity.
I’m noticing a difference between reactions from science and technical college students versus humanities students (a small sample size, mind you: just several personal conversations and comments I’ve heard in meetings or training sessions). The latter are concerned about what will be lost. Coming from a humanities background myself, I have the same concerns, but I’m more interested in answering the question of how, without turning to control and policing, we can preserve and advocate for treasuring and valuing parts of the writing process such as happenstance, self-reflection, and conversation with other humans, to name just a few (see point ten of our policy). What is a positive approach?
Provide any other comments, etc. that you’d like to add.
Thank you for doing this. These are exciting times, and it’s hard to keep up, especially because we are all tired from the pandemic, as well as economic and societal pressures. I have really appreciated the blog posts here. Some kind of central statement from the CWCA/ACCR would be helpful: a position or guide that could be updated as things evolve. A community of practice would be wonderful, too, especially for resource sharing: a resource hub just for us (I found other ones, including WAC), which could be a safe place to converse about institutional frustrations and constructive strategies and approaches.
Speaking of frustration, I’m trying to remind myself that ChatGPT dropped at an unfortunate time for us: at the end of a busy term, before final exams. For many of us, this meant playing catch-up and changing syllabi, policies, training, etc. in January. I am not able to keep up with the daily (hourly?) publications and journalistic hot takes, but I am grateful to be in a scholarly environment, talking about this major historical shift with students and colleagues (especially student colleagues). We have a responsibility to learn and change our practices, just as we did with the pandemic, but we can do that with each other’s support.