GenAI and the Writing Process: Guiding student writers in a GenAI world (Part 2 of 2)


Vol. 5, No. 5 (Fall 2023)

Clare Bermingham, Director, Writing and Communication Centre, University of Waterloo

This is part two of two in this series. Part one can be found here. – CWCR/RCCR Editor

How should writing centres advise students and instructors on the use of GenAI in their writing and communication processes? This question has been front of mind for many of us who manage and work in university and college writing centres and learning centres. And there isn’t a single answer.

When making decisions about how to support students with GenAI, we, as writing centre leaders and practitioners, must account for our local contexts, the knowledges and stages of the students we tutor, and the learning goals or outcomes for particular learning situations or tasks. Our guidance for undergraduate students will be different than for graduate students. And multilingual students may have different needs than those whose home language is English. In this blog post, the second in the series about guiding students through this new landscape, I share questions and ideas to help writing centre colleagues take an inventory of their centres and institutional needs and prepare their tutors for encounters with GenAI in students’ work. Continue reading “GenAI and the Writing Process: Guiding student writers in a GenAI world (Part 2 of 2)”

Is ChatGPT responsible for a student’s failing grade?: A hallucinogenic conversation

Vol. 5, No. 4 (Fall 2023)

Brian Hotson, Editor, CWCR/RCCR

The responsibility for using GenAI in academic pursuits in higher education is shared among the user, the tool, and, where the tool is part of teaching and learning processes, the institution. As such, to say that a student using ChatGPT as a research tool bears sole responsibility for the accuracy of the information the tool provides is unethical and unjust. This is especially the case if the student is directed by an instructor to use the tool. It can also be argued that the institution bears responsibility if it doesn’t provide instruction (digital literacy) in using these tools.

ChatGPT caveats

The anthropomorphism of GenAI writing and research tools marks their results as different from those of Google Scholar or Wikipedia, for example. Promoted as research and writing tools, GenAI tools bear equal, and sometimes greater, responsibility for the information they provide. These tools often qualify their answers with caveats, positioning themselves within the limitations of their actions and the availability and accuracy of the data on which they draw. At the same time, the anthropomorphic language used in these answers is convincing and authoritative. As a result, these tools bear responsibility not only for the information they provide, given its authoritative presentation, but also to the people who use that information and for the work they produce with the tool, especially in light of OpenAI’s own admission that ChatGPT “hallucinates,” or makes up information. Continue reading “Is ChatGPT responsible for a student’s failing grade?: A hallucinogenic conversation”

Academic writing has completely changed: Turnitin forges ahead

Vol. 5, No. 3 (Fall 2023)

By Brian Hotson, Editor, CWCR/RCCR

On July 20, 2023, OpenAI, the maker of ChatGPT and Dall·E, stopped offering its GenAI detection tool, AI classifier, saying that it “is not fully reliable” (OpenAI, 2023). There’s a short statement on OpenAI’s website:

As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated. (Kirchner, Ahmad, Aaronson, & Leike, 2023, January 1; italics in the original)

There, OpenAI describes its AI classifier:

Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). (Kirchner, Ahmad, Aaronson, & Leike, 2023, January 1; emphasis in the original)

There was a lot of handwringing in response among the tech class online: The Verge said, OpenAI can’t tell if something was written by AI after all, and PC World, OpenAI’s ChatGPT is too good for its own AI to detect. Considering the impact of OpenAI’s announcement, there was surprisingly little other coverage. Meanwhile, Turnitin, valued at $1.75 billion USD in 2019 (Johnson, 2019, March 1), continues to offer GenAI writing detection.

All change

The launch of ChatGPT in November 2022 has brought fundamental changes to higher education, quickly and with unforeseen consequences, and those changes continue. A US poll published in March 2023 found that “43% of college students have used ChatGPT or a similar AI application” and 22% “say they have used them to help complete assignments or exams,” representing “1 in 5 college students” (Welding, 2023, March 27). Inside Higher Ed published a piece, The Oncoming AI Ed-Tech ‘Tsunami’, predicting that “[t]he AI-in-education market is expected to grow from approximately $2 billion in 2022 to more than $25 billion in 2030, with North America accounting for the largest share” (D’Agostino, 2023, April 18). What was a relevant response to GenAI in December 2022 is now ancient history. The scene is fluid: there are few predictive models, and no one knows what might come next. A classroom instructor quoted in May 2023 put it this way: “AI has already changed the classroom into something I no longer recognize” (Bogost, 2023, May 16).

On April 4, 2023, Turnitin launched its AI detection tool (Chechitelli, 2023, March 16). At the time, Turnitin’s CEO, Chris Caren, wrote,

…we are pleased to announce the launch of our AI writing detection capabilities… To date, the statistical signature of AI writing tools remains detectable and consistently average. In fact, we are able to detect the presence of AI writing with confidence. We have been very careful to adjust our detection capabilities to minimize false positives and create a safe environment to evaluate student writing for the presence of AI-generated text. (Caren, 2023, April 4)

On April 3, the Washington Post, which had early access, tested the accuracy of Turnitin’s tool using 16 “samples of real, AI-fabricated and mixed-source essays.” It found the tool

…got over half of them at least partly wrong. Turnitin accurately identified six of the 16 — but failed on three… And I’d give it only partial credit on the remaining seven, where it was directionally correct but misidentified some portion of ChatGPT-generated or mixed-source writing. (Fowler, 2023, April 3)

Pieces in Rolling Stone, The Atlantic, and USA Today reported similar results. Postings began to appear on r/ChatGPT from students claiming that their instructors, relying on Turnitin’s tool, had falsely accused them of using GenAI to write their papers.

What about the backdoor?

At the same time, r/ChatGPT also provides information, growing in sophistication, on how to skirt AI detection. Tips appeared in May 2023 on how to “pass Turnitin AI detection” using ChatGPT and Grammarly (Woodford, 2023, May 14). Students were fooling the detection tools with prompts that turned ChatGPT into a ghostwriter mimicking a student’s tone and voice. A student interviewed by the New York Times explained how they gave ChatGPT a sample of their writing and asked it

“…to rewrite this paragraph to make it sound like me…So, I copied [and] pasted a page of what I’d already written and then it rewrote that paragraph, & I was like, this works” (Tan, 2023, June 26).

Online web tools began to appear, such as UndetectableAI, HideMyAI, and QuillBot, designed specifically to fool GenAI detection tools.



Also in May, Turnitin began to provide caveats for its AI detection tool. David Adamson, an AI scientist at Turnitin, says in a Turnitin-produced video, Understanding false positives within Turnitin’s AI writing detection capabilities, that instructors need to do some work when using the tool. He admits that results from the AI detection tool need to be taken “with a big grain of salt,” saying that, in the end, it is up to the instructor to “make the final interpretation” of what is created by GenAI and what isn’t: “You, the instructor, have to make the final interpretation” (Turnitin, 2023, May 23). These false positives, according to Adamson, have different “flavours”: specific kinds of writing that Turnitin’s tool is prone to misidentifying as GenAI writing. These include:

  • Repetitive writing: the same words used again and again.
  • Lists, outlines, short questions, code, or poetry.
  • Writing by developing writers, English-language learners, and those writing at middle and high school levels.

Adamson ends the video by saying, “we own our mistakes. We want to…share with you how and when we are wrong” (Turnitin, 2023, May 23). These mistakes, Adamson states, represent ~1%, or 1 in 100, of submissions through the tool.


While this may be acceptable to Turnitin, this 1% represents real assignments written by real students. By May 14, 2023, Turnitin reported that 38.5 million submissions had been run through its GenAI detection tool, “with 9.6% of those documents reporting over 20% of AI writing and 3.5% over 80% of AI writing” (Merod, 2023, June 7). Applying Adamson’s 1% false-positive rate: 3.5% of 38.5 million submissions is 1.3 million, and 1% of 1.3 million is about 13,000 student papers flagged as written in part by GenAI when in fact they were not. Applied to the 9.6% of papers reported with over 20% AI writing (3.7 million assignments), the total is about 37,000 assignments. Together, this is approximately 50,000 false positives affecting 50,000 students. For scale, consider two of Canada’s largest universities: the University of Alberta has a student population of 40,100, and York University, 55,700. For students, being accused of an academic violation can not only affect their academic record, but cause anxiety, loss of scholarships, and cancellation of student visas. Turnitin’s AI scientist Adamson says that the 1% false positive rate is “pretty good…” (Turnitin, 2023, May 23).
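The estimates above can be checked with a few lines of arithmetic. The sketch below uses only the figures reported in this post (38.5 million submissions, the 3.5% and 9.6% flagged shares, and Turnitin’s stated ~1% false-positive rate); the variable names are mine:

```python
# Estimated false positives among Turnitin's AI-flagged submissions,
# using the figures reported as of May 2023.
total_submissions = 38_500_000                 # submissions to the AI detector
flagged_over_80 = 0.035 * total_submissions    # papers flagged >80% AI writing
flagged_over_20 = 0.096 * total_submissions    # papers flagged >20% AI writing
false_positive_rate = 0.01                     # Turnitin's stated ~1% target

# False-positive estimates for each flagged group
fp_over_80 = false_positive_rate * flagged_over_80
fp_over_20 = false_positive_rate * flagged_over_20

print(f"Flagged >80% AI: {flagged_over_80:,.0f}; est. false positives: {fp_over_80:,.0f}")
print(f"Flagged >20% AI: {flagged_over_20:,.0f}; est. false positives: {fp_over_20:,.0f}")
print(f"Combined estimate: {fp_over_80 + fp_over_20:,.0f} students")
```

Rounded to the nearest thousand, this reproduces the roughly 13,000 and 37,000 figures, or about 50,000 students in total.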


The University of Pittsburgh and Vanderbilt University have decided not to use Turnitin’s tool. The University of Pittsburgh

has concluded that “current AI detection software is not yet reliable enough to be deployed without a substantial risk of false positives and the consequential issues such accusations imply for both students and faculty. Use of the detection tool at this time is simply not supported by the data and does not represent a teaching practice that we can endorse or support.” Because of this, the Teaching Center will disable the AI detection tool in Turnitin effective immediately (Teaching Center doesn’t endorse, 2023, June 23).

Vanderbilt indicated that it had “decided to disable Turnitin’s AI detection tool for the foreseeable future. This decision was not made lightly and was made in pursuit of the best interests of our students and faculty,” citing Turnitin’s lack of transparency about how the tool works as well as its false positive rate (Coley, 2023, August 16). Vanderbilt also did the math on the impact of false positives on its students:

Vanderbilt submitted 75,000 papers to Turnitin in 2022. If this AI detection tool was available then, around 3,000 student papers would have been incorrectly labeled as having some of it written by AI. Instances of false accusations of AI usage being leveled against students at other universities have been widely reported over the past few months, including multiple instances that involved Turnitin… In addition to the false positive issue, AI detectors have been found to be more likely to label text written by non-native English speakers as AI-written. (Coley, 2023, August 16).

Vanderbilt concluded, “we do not believe that AI detection software is an effective tool that should be used” (Coley, 2023, August 16).

Canadian higher education institutions have taken a mixed approach to AI detectors. On April 4, 2023, the University of British Columbia responded quickly to Turnitin’s AI detection tool, stating that it would not enable it. Its reasons include: “Instructors cannot double-check the feature results”; “Results from the feature are not available to students”; and the ability “of the feature to keep up with rapidly evolving AI is unknown” (University of British Columbia, 2023, April 4).

Nipissing University, in its senate-approved June 2023 “Generative AI Guide for Instructors,” notes that the “use of generative AI ‘detectors’ is not recommended” (n.p.); the University of Waterloo similarly cautions faculty that “controlling the use of AI writing through surveillance or detection technology is not recommended” (Frequently Asked Questions, 2023, July 25). Conestoga College, in its guide to using Turnitin’s tool, instructs faculty to give notice that they are using the tool: “Without such notice, a student may at some point appeal” (Sharpe, 2023, June 27).

Others, such as the University of Lethbridge and Queen’s University, use Turnitin without any caveats specific to the AI detection tool that I could find on their public-facing websites at the time of writing.

What is the literature saying?

A paper published this month in the International Journal for Educational Integrity, “Evaluating the efficacy of AI content detection tools in differentiating between human and AI‑generated text,” which did not include Turnitin, found the “performance” of AI detection tools[1] on GPT 4-generated content was “notably less consistent” in differentiating between human and AI-written text (Elkhatat, Elsaid, & Almeer, 2023, p. 6). “Overall, the tools struggled more with accurately identifying GPT 4-generated content than GPT 3.5-generated content” (p. 8). The findings of this study should raise questions about using GenAI detection tools in higher education:

While this study indicates that AI-detection tools can distinguish between human and AI-generated content to a certain extent, their performance is inconsistent and varies depending on the sophistication of the AI model used to generate the content. This inconsistency raises concerns about the reliability of these tools, especially in high-stakes contexts such as academic integrity investigations. (p. 12-13)

The paper concludes by advising that “the varying performance [of detection tools on ChatGPT 3.5 and ChatGPT 4] underscores the intricacies involved in distinguishing between AI and human-generated text and the challenges that arise with advancements in AI text generation capabilities” (p. 14).


International students take the brunt, again

In higher education, it is well documented that international students, who make up “approximately 17% of all post-secondary enrollments in Canada” (Shokirova, et al., 2023, August 23), are accused of academic integrity breaches at a higher rate than domestic students (see, for example, Adhikari, 2018; The complex problem…, 2019; Eaton & Hughes, 2022; Fass-Holmes, 2017; Hughes & Eaton, 2022). As we see in writing centres, undergraduate international students are often English-language learners, many of whom are writing academic papers in English at the post-secondary level for the first time. As a result, many undergraduate international students write academic English at a lower level. Some students I have tutored have taken several years of writing practice to attain a level of academic writing that many in the academy consider “polished” or at post-secondary levels.

According to Turnitin, students who are English-language learners, developing writers, or writing at a secondary level of academic writing are at a higher risk of false positives from its tool. Adamson admits that Turnitin’s false-positive rate is “slightly higher” for these students: “Still near our 1% target, but there is a difference” (Turnitin, 2023, May 23). At the same time, Adamson also claims that Turnitin doesn’t see “any evidence” that the tool is “biased against English language learners from any country at any level” (Turnitin, 2023, May 23). Unfortunately, I was not able to find data published by Turnitin to substantiate these claims, including what the difference in the false-positive rate for these students is. What does Turnitin consider “near” its “1% target”? Is it 1.5%, 2%, 3.5%? Considering the large numbers involved (38.5 million submissions as of May 2023), even a 0.5% increase is significant.

What will happen in September?

Like the winter semester of 2023, it may well be that the first assignments submitted this month will start another round of changes to academic writing, academic integrity, and students’ use of GenAI tools. It will be important for institutions to monitor and update their policies and procedures regarding AI detection tools, like Turnitin, in response to possible changes to GenAI writing. International students deserve specific attention in these cases, as they are already vulnerable within higher education.


Adhikari, S. (2018). Beyond culture: Helping international students avoid plagiarism. Journal of International Students, 8(1), 375–388.

Bogost, I. (2023, May 16). The First Year of AI College Ends in Ruin. The Atlantic.

Caren, C. (2023, April 4). The launch of Turnitin’s AI writing detector and the road ahead. Turnitin.

Chechitelli, A. (2023, March 16). Understanding false positives within our AI writing detection capabilities. Turnitin.

Coley, M. (2023, August 16). Guidance on AI detection and why we’re disabling Turnitin’s AI detector. Vanderbilt University.

The complex problem of academic dishonesty among international students – Study International. (2019, March 25). International Study.

D’Agostino, S. (2023, April 18). The Oncoming AI Ed-Tech ‘Tsunami’. Inside Higher Ed.

Eaton, S. E., & Hughes, J. C. (2022). Academic Integrity in Canada. In S. E. Eaton (Ed.), Ethics and Integrity in Educational Contexts (Vol. 1, pp. xi–xvii). Springer.

Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. International Journal for Educational Integrity, 19(17).

Fass-Holmes, B. (2017). International students reported for academic integrity violations: Demographics, retention, and graduation. Journal of International Students, 7(3), 644–669. 

Frequently Asked Questions: ChatGPT and generative AI in teaching and learning at the University of Waterloo. (2023, July 25). Associate-Vice President, Academic, University of Waterloo.

Hughes, J. C., & Eaton, S. E. (2022). Student integrity violations in the academy: More than a Decade of growing complexity and concern. In S. E. Eaton & J. C. Hughes (Eds.), Ethics and Integrity in Educational Contexts (Vol. 1, pp. 61–79). Springer.

Johnson, S. (2019, March 1). Turnitin to Be Acquired by Advance Publications for $1.75B. EdSurge.

Kirchner, J., Ahmad, L., Aaronson, S., & Leike, J. (2023, January 1). New AI classifier for indicating AI-written text. OpenAI.

Sharpe, A. (2023, June 27). Using Turnitin’s Artificial Intelligence (AI) Detection Tool and the Process Guide for Navigating Potential Academic Offences. Faculty learning Hub, Conestoga College.

Shokirova, T., Brunner, L. R., Kishor Karki, K., Coustere, C., & Valizadeh, N. (2023, August 23). Reinventing the reception of students from abroad in graduate studies. University Affairs.

Tan, S. (2023, June 26). Suspicion, Cheating and Bans: A.I. Hits America’s Schools. The Daily, New York Times.

Teaching Center doesn’t endorse any generative AI detection tools. (2023, June 23). University Times, University of Pittsburgh.

University of British Columbia. (2023, April 4). UBC not enabling Turnitin’s AI-detection feature.

Welding, L. (2023, March 27). Half of college students say using AI on schoolwork is cheating or plagiarism. BestColleges.

Woodford, A. (2023, May 14). Can Turnitin detect ChatGPT? ChatGPT Prompts.

[1] The detection tools in the study were OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag.

Hello! I am your AI academic writing tutor: A quick guide to creating discipline-specific tutors using ChatGPT

Vol. 5, No. 2 (Fall 2023)

By Brian Hotson, Editor, CWCR/RCCR

On August 31, 2023, OpenAI posted Teaching with AI to its website, described as a guide “to accelerate student learning” using ChatGPT. The guide provides prompts to “help educators get started with” ChatGPT, including prompts for lesson-plan development, creating analogies and explanations, helping “students learn by teaching,” and creating “an AI tutor.”

Prompts in ChatGPT are text inputs to generate responses—essentially asking ChatGPT questions. A simple prompt, such as “Write an outline for a five-page essay on sedimentary deposits of soil,” generates responses that are helpful, providing a guide to essay writing on a specific topic. However, by using a complex prompt, ChatGPT can be set up to answer questions in a specific way, turning it into an effective, discipline-specific writing tutor. Continue reading “Hello! I am your AI academic writing tutor: A quick guide to creating discipline-specific tutors using ChatGPT”
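As an illustration of the kind of complex prompt the post describes, a discipline-specific tutor setup might look something like the following. This is a hypothetical sketch of my own, not a prompt from OpenAI’s guide; the discipline and constraints are placeholders you would adapt:

```
You are a writing tutor for second-year geology students. When I share a
draft or a question, do not rewrite my text. Instead: (1) ask me what the
assignment requires, (2) point out where my argument or structure is
unclear, and (3) explain one discipline-specific convention (for example,
how geologists report stratigraphic evidence) that is relevant to my
draft. Keep each response under 200 words and end every reply with a
question that pushes me to revise on my own.
```

A setup prompt like this constrains ChatGPT’s behaviour for the rest of the conversation, which is what turns a general-purpose chatbot into something closer to a tutor than a ghostwriter.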

Why broadband access is an essential learning tool

Vol. 4, No. 8 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR

I was recently going into a shop in a strip mall when I saw one of my son’s friends from school sitting on the sidewalk outside the store, playing on their phone. I chatted with them a bit, and then asked if one of their parents was in the shop. “No. I come here because we don’t have internet at home.”

In a recent Education Quality and Accountability Office (EQAO) survey of grade 9 students in Ontario, only 55.2% indicated that they had “access to strong internet connection at home to complete my schoolwork.” In real numbers, of the 103,816 students who responded to the EQAO survey, 3,502 (3.4%) said that they do not have strong access to the internet; 1,236 students (1.2%) indicated that they have strong access “hardly ever.” Eight hundred and twenty-four (0.8%) grade 9 students in Ontario indicated that they “never” have access to the internet (EQAO, 2023; see Figs. 1 & 2). Continue reading “Why broadband access is an essential learning tool”
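The survey percentages quoted above can be cross-checked against the raw counts. A quick sketch, using only the respondent total and group counts reported in the paragraph:

```python
# Cross-checking the EQAO survey figures: each group's share of the
# 103,816 grade 9 respondents, rounded to one decimal place.
respondents = 103_816

groups = {
    "no strong access": 3_502,   # reported as 3.4%
    "hardly ever": 1_236,        # reported as 1.2%
    "never": 824,                # reported as 0.8%
}

for label, count in groups.items():
    pct = round(count / respondents * 100, 1)
    print(f"{label}: {count:,} students ({pct}%)")
```

The computed shares match the percentages reported in the survey.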

ChatGPT snapshot: University of Waterloo

Vol 4, No. 7 (Spring 2023)

Clare Bermingham,
Director, Writing and Communication Centre,
University of Waterloo

In April 2023, I asked writing centre practitioners to answer five questions about ChatGPT and their centres’ responses to it. Over the next month, I’ll post the responses. If you have a perspective to offer, please use this form, and I’ll post it here. – Brian Hotson, Editor, CWCR/RCCR

What actions, policies, resources, or information has your institution put in place for ChatGPT?

At the University of Waterloo, the Office of the Associate Vice-President, Academic has shared several information memos and a FAQ resource, which includes guidance on the university’s pedagogy-first approach and maintaining academic integrity related to ChatGPT. Our university uses Turnitin and has just activated the ChatGPT detection option. The impact of this for instructors and students is unclear at this point in time. Continue reading “ChatGPT snapshot: University of Waterloo”

Drafting a position statement for ChatGPT and LLM writing tools for higher education

Vol. 4, No. 6 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR

Having a baseline foundation is important to building writing and tutoring programs and supports for students. This is especially true when a technology becomes available that dramatically changes not only the way we teach, but the way we think about education. This is the case with ChatGPT and other Large Language Model (LLM) tools. (Think: ChatGPT is to LLMs as Band-Aid is to bandages, or Kleenex is to tissues.)

What is ChatGPT?

A number of writing instructors and administrators from across Canada have created a shared document, Crowdsourcing Responses to Generative AI from Canadian Writing Experts, to provide a community of practice for not only responding to ChatGPT, but for developing pedagogy and teaching and tutoring practices everyone in the community can use. One element is a position statement. If you work in writing centres in Canada, please consider participating in the Crowdsourcing document. Continue reading “Drafting a position statement for ChatGPT and LLM writing tools for higher education”

ChatGPT snapshot: University of Saskatchewan

Vol. 4, No. 5 (Spring 2023)

Liv Marken,
Learning Specialist (Writing Centre Coordinator)
Writing Centre
University of Saskatchewan

In April 2023, I asked writing centre practitioners to answer five questions about ChatGPT and their centres’ responses to it. Over the next month, I’ll post the responses. If you have a perspective to offer, please use this form, and I’ll post it here. – Brian Hotson, Editor, CWCR/RCCR

What actions, policies, resources, or information has your institution put in place for ChatGPT?

It has been an exciting but challenging term because there has been uncertainty about who would take leadership on the issue. There wasn’t any official guidance issued, but on our academic integrity website, an instructor FAQ was published in early March, and soon after that a student FAQ. Library staff (including me and my colleague Jill McMillan, our graduate writing specialist) co-authored these with a colleague from the teaching support centre. Continue reading “ChatGPT snapshot: University of Saskatchewan”

Writing a conference proposal: A step-by-step guide

Vol.4, No. 4 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR
Stevie Bell, Associate Editor, CWCR/RCCR

This is an expansion of the CWCR/RCCR post, Vol. 3 No. 3 (Winter 2022).

‘Tis the season, conference season. For those who have not written a conference proposal, it can seem like a daunting project. The thought of it can cause many to not submit at all. It can be difficult to know where to start and what to write, while following a conference’s CFP format and theme. We’ve had both successful and rejected proposals. As conference proposal reviewers and conference organizers, we’ve read many proposals and drafted several conference calls-for-proposals, as well. Here are some of the things that we’ve learned from experience. We hope this guide will provide you with some help to get your proposal started, into shape, and submitted. Continue reading “Writing a conference proposal: A step-by-step guide”

Academic vigilantes and superheroes

Vol. 4, No. 3 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR

“Only those safe from fascism and its practices are likely to think that there might be a benefit in exchanging ideas with fascists.” – Aleksandar Hemon, Fascism is Not an Idea to Be Debated, It’s a Set of Actions to Fight

IWCA’s theme for their 2023 conference is Embracing the Multi-Verse, a theme taken up by the CWCA/ACCR’s 2019 conference The Writing Centre Multiverse. The 2019 conference’s theoretical basis was Marshall, Hayashi, and Yeung’s Negotiating the Multi in Multilingualism and Multiliteracies (2012). The CWCA/ACCR’s call for proposals states that the authors’ study’s Continue reading “Academic vigilantes and superheroes”

Academic writing and ChatGPT: Step back to step forward

Vol. 4, No. 2 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR
Stevie Bell, Associate Editor, CWCR/RCCR

Sam Altman, a co-founder of OpenAI, creators of ChatGPT, said in 2016 that he started OpenAI to “prevent artificial intelligence from accidentally wiping out humanity” (Friend, 2016, October 2). Recently, Elon Musk (also a co-founder of OpenAI) and The Woz (a co-founder of Apple), along with several high-profile scientists, activists, and AI business people, signed a letter urging a pause in the rollout of Large Language Model (LLM) AI tools, such as ChatGPT. The letter warns of an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control” (Fraser, 2023, April 4). A Google engineer, Blake Lemoine, was fired for claiming that Google’s LLM tool, LaMDA, had become sentient:

I raised this as a concern about the degree to which power is being centralized in the hands of a few, and powerful AI technology which will influence people’s lives is being held behind closed doors … There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed. (Harrison, 2022, August 16)

Continue reading “Academic writing and ChatGPT: Step back to step forward”

Writing centres and ChatGPT: And then all at once

Vol. 4, No. 1 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR

A couple of months ago, I asked OpenAI‘s ChatGPT to write a blog post on writing centres and academic integrity. This week, I asked the new version of ChatGPT to write the piece again. For the old version of GPT, I used this prompt:

Write a five-paragraph blog post about the state of writing centres in Canada, with citations and references. The first paragraph is an overview of Canadian writing centres for 2022. The second paragraph is an overview of academic integrity issues in Canada in 2022. The third paragraph is an overview of how academic integrity affects Canadian writing centres. The fourth paragraph provides a preview of possible academic integrity issues in Canada in 2023. The fifth paragraph is a summation of the first four paragraphs.

Continue reading “Writing centres and ChatGPT: And then all at once”

Friends don’t let friends Studiosity (without reading the fine print)


Vol. 4 No. 1 (Fall 2022)

Brian Hotson, CWCR/RCCR Editor
Stevie Bell, CWCR/RCCR Associate Editor

Like many teachers on a late-August vacation, education companies can see September on the horizon. The difference is that these companies aren’t relaxing. They’re sending e-mails and booking video conferences with offers of freshly printed textbooks, handy workbooks, new online tools, and easy-to-use mobile apps that promise to make student life easier and save universities and colleges money.

The business of education is very large, with total global spending estimated at $4.7 trillion (USD) (UNESCO). By comparison, the total GDP of all African nations in 2021 was $2.7 trillion (USD) (StatisticsTimes, 2021). In 2018-2019, “public and private expenditure on [postsecondary] education” in Canada was $41.5 billion. Education companies would like a share of the money. In this context, a new-to-Canada online writing and tutoring tool, Studiosity, has appeared. Continue reading “Friends don’t let friends Studiosity (without reading the fine print)”

ProTips for Essay Writers: From OWL Handouts to Videos

Image of Stevie Bell, a white woman with cropped hair, and Brian Hotson, a white man with a grey beard, smiling with the text: Pro Tips for Essay Writers

Vol. 3 No. 9 (Summer 2022)

This post is from the 2022 CWCA/ACCR annual conference virtual poster session. – Stevie Bell and Brian Hotson, 2022 CWCA/ACCR conference co-chairs

By Stevie Bell, York University Writing Department & Brian Hotson, Independent Scholar

The digital turn in education, part of the COVID turn initiated by the pandemic, reenergized, recentred, and reoriented asynchronous writing instruction, in which students engage with writing resources and connect with writing tutors on their own schedule. At York University's writing centre, where Stevie is located, renewed attention is being paid to developing a repertoire of online resources that engage students differently than traditional PDF instructional handouts or webtext pages. Stevie was given a .5 teaching credit in an experimental initiative to develop instructional videos for the Writing Centre and to learn about student preferences, engagement, production processes, and more. Of course, Stevie invited Brian Hotson, her writing partner, on the adventure. Together, they produced ProTips for Essay Writers. In this piece, we reflect on lessons learned and share some of the behind-the-scenes production workflow, how-tos, and video analytics. Continue reading “ProTips for Essay Writers: From OWL Handouts to Videos”

Colonial outposts in the 36th chamber: Hip hop pedagogies and writing centres

Vol. 3, No. 5 (Spring 2022)
Brian Hotson

I recently interviewed Casey Wong, the keynote speaker for the 2022 CWCA/ACCR conference. Wong (he/him) is currently an Assistant Professor of Social Foundations of Education in the Department of Educational Policy Studies at Georgia State University. He is co-editing a forthcoming book, Freedom Moves: Hip Hop Knowledges, Pedagogies, and Futures, with H. Samy Alim and Jeff Chang.

Thank you for taking the time to speak with me.

Wong: Thank you! I’m excited about entering into community with you and the CWCA/ACCR attendees.

First off, I’m interested in how you got to where you are now. What was your path to your PhD and UCLA? 

Casey Wong

Wong: I love this question, and I imagine I could begin academically, but I might start with my upbringing. When I’m thinking about the power of language and rhetorics, I think about all I witnessed growing up in communities in Southern California that were some of the poorest by size in the country. I saw a variety of literacies spraypainted across train cars that actively passed through one of my central places of upbringing, Colton, California. I consider how I grew up among interconnected and overlapping peoples from the African/Black, Latinx, Asian, and Pacific Islander diasporas. I consider how local Native peoples were actively involved in my elementary school in San Bernardino, California. I think about how White supremacy often found its way into the voices and lives of my poor and working-class White peers, but how often there were deep co-conspiracies and solidarities that went unnoticed. With so many peoples, from so many places, it made having access to multiple varieties of language a deep advantage, and their value, and beauty–even as Dominant American English was widely seen as the ideological norm in very oppressive ways. I saw this personally as my Cantonese father secretly refused to let us know he spoke Cantonese, nor let us learn–something myself, my brother and sister would not find out until he passed away while we were in high school. Continue reading “Colonial outposts in the 36th chamber: Hip hop pedagogies and writing centres”

Writing a conference proposal: A guide

an auditorium filled with people with two presenters

Vol. 3 No. 3 (Winter 2022)

Brian Hotson, CWCA/ACCR 2022 Conference Co-Chair
Stevie Bell, CWCA/ACCR 2022 Conference Co-Chair

If you’ve not written a conference proposal, it’s hard to know where to start and what to write, all while following the conference CFP format. This guide (links below) will provide you with some help as you get your proposal started, into shape, and then submitted. This is a step-by-step guide, leading you through each part of the CFP:

  • Title
  • Detailed abstract
  • Proposal description
  • Type of session
  • References

The guide provides instructions on how to structure each section, with examples, leading to a final Proposal Description sample. Use it for your own proposal and share it with your colleagues and tutors.

Writing a conference proposal: A guide

2022 CWCA/ACCR Conference CFP – Reckoning with Space & Safety in the COVID Turn

If you need support, please contact the conference co-chairs, Stevie Bell and Brian Hotson.

“I don’t know, let’s play”: Multimodal design support in the writing centre

the word "Play" in green against a brown backdrop

Vol. 3 No. 2 (Winter 2022)

Editor’s note: This is a Session Reflection. If you have a unique tutoring experience to share, submit your Session Reflection to Brian Hotson

Stevie Bell is an associate professor in the Writing Department at York University and CWCR/RCCR co-founder

A sticker with the word "essay" that looks like it's melting

Writing centre tutors may be seeing an increase in digital writing projects (DWPs) now that students are primarily producing and submitting their work online (at least this is the case for me). Today's students have the opportunity to use colour, sound, gifs, and video elements to enhance even traditional essays, and these elements are becoming not just common but often expected. Students are also being assigned creative projects that require them to become design-savvy producers of multimodal texts, using design elements and theory that aren't always in their writing toolbox.

Where on campus can students seek help with multimodal projects? In my opinion, writing centres are well positioned to extend the work they do supporting students as they use writing as a tool of thinking and communicating to include multimodal processes that do not prioritize alphabetic/linguistic modes. Writing centre tutors already know the structure of argumentation, the rhetoric of academic writing, and styles and formats required for writing at university or college levels. They also know how to think along with students, as well as to think in and through the tasks, challenges, and blocks that students come to the centre to work through.

Continue reading ““I don’t know, let’s play”: Multimodal design support in the writing centre”

Creating writing centres in neocolonialism

Vol. 3 No. 4 (Fall 2021)

Brian Hotson, Editor, CWCR/RCCR
Stevie Bell, Associate Professor, York University
Guest editor: Lauren Mackenzie

In 2008, the then CWCA/ACCR president participated in “setting up of the first writing centre” in India (Holock, 2009, p. 6) through the University of Ottawa. In a piece in the 2009 CWCA/ACCR Newsletter, Writing into India: Setting up the first Writing Centre in the country, Holock describes his experience at Parvatibai Chowgule College of Arts and Science in Gogol, Goa, India, in a travel-diary style, recounting:

On Friday, June 27, 2008, we step off of our fifteen-hour flight in Mumbai, my boss and I, and immediately feel the weight of our endeavour. It is not only the heat and thickness of the air, but the realization that we have finally arrived to start work on Monday, in a country and an educational system that neither of us have ever been exposed to. (Holock, 2009, p. 6)

Continue reading “Creating writing centres in neocolonialism”

If you could say anything to faculty about academic integrity…

Vol. 3, No. 2 (Fall 2021)

Stephanie Bell, Associate Professor, York University Writing Centre; co-founder, CWCR/RCCR

A clear-cut strategy for undermining the writing centre’s relationship with student writers is to become reporters, adjudicators, or punishers of plagiarism and cheating (Bell, 2018).

In its heavy-handed discourse around academic dishonesty, the institution draws a divide between itself and students. Students arrive on campuses to find themselves positioned as likely criminals, and their work is policed by AI that scans it for infractions. Ironically, the institution’s academic dishonesty rhetoric can so undermine the institution-student relationship that it fosters academically dishonest student behaviour (see Strayhorn, 2012). To fulfill their missions, writing centres must carefully navigate the issue of academic dishonesty and the institution-student divide it constructs. Continue reading “If you could say anything to faculty about academic integrity…”

A Short History of CWCA/ACCR: Fifteen years on

Vol. 3, No. 1 (Fall 2021)

Brian Hotson, CWCR/RCCR Editor


Volume 1, Issue 1 of the Halifax Gazette, March 23, 1752

Although writing centres in Canada date to the mid-1960s (see Table 1) (Proctor, 2011, p. 418; Bromley, 2017, p. 35), writing tutoring and writing instruction, of course, didn't begin with the first writing centres. Writing instruction in what is now called Canada dates to the first European colonizers (Halifax Gazette, 1752). Because the Canadian writing centre field is young, many of the key founders and figures in its development continue to add to its literature and practice. Over the past thirty years, these writing centre practitioners have created a significant body of work, including publications, repositories of information, modes of practice, national and regional associations and conferences, and proactive advocacy and social justice work. While there have been times when shifts in writing centres in Canada have raised worries about centre funding and importance, writing centres will not disappear from Canada's education field. In fact, writing centres will continue to grow in importance, as writing centres… Continue reading “A Short History of CWCA/ACCR: Fifteen years on”