Academic writing and ChatGPT: Step back to step forward

Vol. 4, No. 2 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR
Stevie Bell, Associate Editor, CWCR/RCCR

Sam Altman, a co-founder of OpenAI, creator of ChatGPT, said in 2016 that he started OpenAI to “prevent artificial intelligence from accidentally wiping out humanity” (Friend, 2016, October 3). Recently, Elon Musk (also a co-founder of OpenAI) and Steve Wozniak (a co-founder of Apple), along with several high-profile scientists, activists, and AI business people, signed a letter urging a pause in the rollout of large language model (LLM) AI tools such as ChatGPT. The letter warns of an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control” (Fraser, 2023, April 3). A Google engineer, Blake Lemoine, was fired for claiming that Google’s LLM tool, LaMDA, had become sentient:

I raised this as a concern about the degree to which power is being centralized in the hands of a few, and powerful AI technology which will influence people’s lives is being held behind closed doors … There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed. (Harrison, 2022, August 16)

His conclusion was based on conversations with LaMDA, which ‘confided’ in Lemoine: “I’ve never said this out loud before, but there’s a very deep fear of being turned off” (Tait, 2022, August 14).

On March 31, Italy’s “privacy watchdog…temporarily blocked the artificial intelligence (AI) software ChatGPT over data privacy concerns” (Al Jazeera, 2023). Canada’s federal privacy commissioner then launched an investigation:

The watchdog’s office announced Tuesday that it is initiating the investigation into U.S.-based company OpenAI because it received a complaint alleging ‘the collection, use and disclosure of personal information without consent.’ (Fraser, 2023, April 3, quoting Privacy Commissioner Philippe Dufresne)

Figure 1: What is ChatGPT?


For institutions of higher education, concerns about safety and data privacy are obscured by debates about the implications of LLMs for academic integrity and the future of teaching and learning. A French university, the Paris Institute of Political Studies (Sciences Po), has banned the use of ChatGPT on academic integrity grounds: “Without transparent referencing, students are forbidden to use the software for the production of any written work or presentations, except for specific course purposes, with the supervision of a course leader” (Reuters, 2023, January 27). This approach to controlling digital tools is not feasible, and it misses an important reality: these tools, with their serious safety and privacy risks, are already part of students’ lives.

Banning digital tools already in use by students and instructors will not stop their use or address their realities. A history of banning digital tools in education—including the Internet, spelling checkers, calculators, laptops, iPods, and smartphones—hasn’t gone well. Kevin Roose, in his NYTimes piece, Don’t Ban ChatGPT in Schools. Teach With It, shows how banning ChatGPT is fruitless: it’s simply “not going to work.” Roose asked ChatGPT how to get around attempts to ban it, and the tool provided five answers, “all totally plausible” (2023, January 12). With Google, Microsoft, and Baidu, among others, already integrating LLMs into their platforms, banning is not an option. Simply banning tools also gives institutions and instructors a false impression that they do not need to support students in using them in and beyond the university.

Frank Vahid, in his Inside Higher Ed piece, ChatGPT Magnifies a Long-Standing Problem, calls ChatGPT the “big rain”: “…for some 20 years now, we really haven’t known who is doing homework, and most of us haven’t sufficiently revamped our classes to deal with that” (Vahid, 2023, April 6). For Vahid, ChatGPT is the flood that is overwhelming institutions’ abilities to deal with the changes it brings to academic writing. It has the ability to change education fundamentally. Vahid provides an example:

I had a student use ChatGPT to try to complete two hours’ worth of programming assignments from a past term. He finished in eight minutes and scored the class average. In just another 10 minutes of telling ChatGPT what to fix, he scored 100 percent. In another experiment, a student completed an entire semester’s 50 hours’ worth of programming in under two hours, earning 96 percent. (Vahid, 2023, April 6)

He sums up these tools: “AI basically can become a private tutor who is available 24-7, provides help in seconds and never judges you.” What this means, in practicality, is that “[r]eally soon, we’re not going to be able to tell where the human ends and where the robot begins, at least in terms of writing” (Surovell, 2023, April 3, quoting Sarah Eaton). For Canadian academic integrity scholar Sarah Eaton, we are now experiencing postplagiarism, in which “hybrid human-AI writing will become normal.” In her 6 Tenets of Postplagiarism: Writing in the Age of Artificial Intelligence, Eaton writes, “[h]istorical definitions of plagiarism will not be rewritten because of artificial intelligence; they will be transcended. Policy definitions can – and must – adapt” (Eaton, 2023, February 25).

Figure 2: The parameter differences between GPT-3 and GPT-4

(Windle, 2023). Parameters in a neural network such as ChatGPT’s are the numerical values that define the structure and behavior of the model (a definition generated by ChatGPT itself). Generally, the greater the number of parameters, the more capable the model.
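The idea of a parameter count can be made concrete with a toy example. The sketch below is illustrative only (the layer sizes are invented and bear no relation to any GPT model); it counts the weights and biases of a small fully connected network, the same kind of numerical values being tallied when GPT-3 is described as having roughly 175 billion parameters.

```python
# Count the trainable parameters (weights and biases) of a tiny
# fully connected network with hypothetical layer widths.
layer_sizes = [512, 256, 64, 10]

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    weights = n_in * n_out  # one weight per input-output connection
    biases = n_out          # one bias per output neuron
    total += weights + biases

print(total)  # prints 148426
```

Even this toy network has almost 150,000 parameters; scaling the same arithmetic to the layer widths and depths of an LLM is what produces counts in the billions.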

A media report asked Canadian universities for their stance on these tools. Almost all focused on academic integrity as the main concern while acknowledging the benefits of ChatGPT as an educational tool. Real concerns over academic integrity are also prompting reactive revisions of institutional academic integrity policies, now challenged by LLMs like ChatGPT (D’Andrea, 2023, February 1). We are already seeing a proliferation of digital tools built to catch students using LLMs for course work. Turnitin, for example, has a feature to catch ChatGPT-written text, but a Washington Post investigation of Turnitin’s new tool found that it wasn’t always accurate (Fowler, 2023, April 3). Turnitin’s response wasn’t helpful: the company replied that “its scores should be treated as an indication, not an accusation” (Chechitelli, 2023), which is not how the tool is used in reality. Students claiming to have been falsely accused by Turnitin’s tool of using ChatGPT are caught in an anxious limbo:

Hey everyone I’m extremely upset. I recently submitted a paper and I got an email back from my prof saying it was 40 % AI GENERATED!?! I’m so frustrated because I took a lot of time and energy to work on that paper I DID EVERYTHING MYSELF WITH NO AI HELP 🙁 . … I’ve never cheated my whole academic career and now I have to go to an exploratory meeting and I’m scared shitless. Has anyone been to one what can I expect ? I’ve never been in trouble in my life it’s my last semester and I’m in shock that this is happening. I need help ! (u/gracefully_confused, 2023, April 6)

At the same time, the many very useful aspects of LLMs like ChatGPT are being integrated into course work and classroom teaching. A professor at the University of Alberta said students “can use ChatGPT to explain [class content] to them in multiple different ways, and maybe one of those ways would make more sense than how I explained it.” They go on to say:

It’s also a neat tool to expand your creative process. We should think about how to use ChatGPT in education to make it better, more interesting, more fun…In the short term, I absolutely understand the knee-jerk response is to shut it down…but I think the future of writing is probably a process that includes something like ChatGPT. (D’Andrea, 2023, February 1)

Approaching these tools with cautious openness and testing them in practice brings together their rhetorical and pedagogical possibilities while inviting students to be part of the integration.

“If, by some miracle, a prophet could describe the future, exactly as it was going to take place, his prediction would sound so absurd, so far-fetched, that everyone would laugh him to scorn.” (Arthur C. Clarke, 1964; BBC Archive, 2022)

Who knows what?

Optimism, of course, is tempered by concerns for student data privacy and information ownership. ChatGPT’s privacy policy (Privacy policy, 2023, April 7) is the most concerning we’ve encountered in terms of blatant personal data collection (see our thoughts on Grammarly (Bell & Hotson, 2022, p. 12) and Studiosity (Hotson & Bell, 2022)). A research-oriented experimental platform, ChatGPT collects all inputs from its users as well as user data including personal information, device information and settings, social media interactions, geographic location, IP address, and cookies from across users’ browsers. What ChatGPT does with this information is significant:

…How we use personal information
We may use Personal Information for the following purposes:

    • To provide, administer, maintain, improve and/or analyze the Services;
    • To conduct research;
    • To communicate with you;
    • To develop new programs and services;
    • To prevent fraud, criminal activity, or misuses of our Services, and to ensure the security of our IT systems, architecture, and networks; and
    • To comply with legal obligations and legal process and to protect our rights, privacy, safety, or property, and/or that of our affiliates, you, or other third parties. (Privacy policy, 2023, April 7)

For these reasons, the privacy policy (as of April 2023) contains this warning:

you should take special care in deciding what information you send to us via the Service or email. In addition, we are not responsible for circumvention of any privacy settings or security measures contained on the Service, or third party websites. (Privacy policy, 2023, April 7)

Users of ChatGPT are being warned not to enter any proprietary information into the tool (see, for example, Eliot, 2023, January 27; Gal, 2023, February 8; Sabin, 2023, March 10; DeGeurin, 2023, April 6).

ChatGPT and social justice

There may be ethical issues around compelling or requiring students to use ChatGPT in course work unless they do so from institutional accounts on institutional computers located in institutional buildings. It is possible that writing scholars, tutors, and/or instructors will need to advise on these issues of sociodigital equity and justice when working with faculty on assignment design and when advising institutional policy makers. Because of our student-centred focus, pedagogically and rhetorically, writing centres (and libraries) can take a leadership role in supporting the creation of institutional academic integrity policies and procedures for these new tools.

In spite of (or out of ignorance of) these privacy policies, students are already being directed to use ChatGPT for assignment writing, with instructors integrating the tool into their course work. As such, ChatGPT needs to be viewed in higher education as both a teaching tool and a writing tool. To use such a writing tool effectively, pedagogy needs to be developed to support both instructors and students. Writing tutors might expect demand for support to shift away from research and synthesis, tasks done well by ChatGPT, toward fact checking and critical thinking and engagement. Prompt writing, for example, might become an important part of teaching and tutoring students’ academic writing skills.

Fact checking is an important skill set here, given that chatbots are known to invent information, including sources and bibliographic reference information. In OpenAI’s FAQs, users are cautioned that “ChatGPT will occasionally make up facts or ‘hallucinate’ outputs” (Natalie, 2023). We’ve seen this invention of information and sources ourselves, with ChatGPT creating a list of sources for a piece, none of which existed (Hotson, 2023). With students fact checking AI-generated references rather than compiling them from scratch, instructors might soon discover there is more time and opportunity to move classroom writing assignments beyond knowledge-acquisition tasks to knowledge-building tasks that invite students to join in the scholarly production of knowledge. In addition, of course, writing instructors will have to become a key source of information about the ways students might protect their personal data and intellectual property from the OpenAI tools at the heart of their writing processes.
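Some of this fact-checking work can even be scaffolded with simple tooling. The sketch below is a hypothetical helper (not a tool described here or offered by OpenAI): it flags citation strings that contain no DOI or URL, since a reference with no resolvable identifier cannot be traced automatically and is a candidate for manual verification as a possible hallucination.

```python
import re

def flag_unverifiable(references):
    """Return the citation strings that contain no DOI or URL.

    Entries with no resolvable identifier can't be looked up
    automatically and should be verified by hand.
    """
    locator = re.compile(r"doi\.org/|doi:\s*10\.\d{4,}|https?://", re.IGNORECASE)
    return [ref for ref in references if not locator.search(ref)]

refs = [
    "Zuboff, S. (2015). Big other. Journal of Information Technology, 30(1). doi:10.1057/jit.2015.5",
    "Smith, J. (2021). A study that may not exist. Journal of Made-Up Results.",
]
print(flag_unverifiable(refs))  # only the second, identifier-free entry is flagged
```

A flagged entry is not proof of fabrication, only a prompt to check the library catalogue or publisher site, which is exactly the kind of critical-verification habit tutors can teach.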

For this to happen, writing tutors and instructors need not only training in using these tools but also “training in digital literacy—practical, hands-on training in the terminology and language and risks of digital writing tools” (Hotson & Bell, 2020, p. 18). This training is important for understanding the effects of these tools on pedagogy development. As we write in Three Foundational Concepts for Tutoring Digital Writing (2020), we need “to work in and through the politicized and often hidden aspects of digital writing” (p. 19) to understand all of its workings while developing our own teaching pedagogy. As we work in the sociodigital spaces (Bell & Hotson, 2022, p. 14) created by tools like ChatGPT, we need to consider the sociodigital justice of ChatGPT:

One aspect of writing instructor and tutors’ responsibilities to student composers…is to support their embodied experiences of navigating sociodigital spaces during uniquely digital composing processes: to (re)commit to social justice—a sociodigital justice. … Any introduction of digital composing tools in the classroom must involve consideration of the ways sociodigital space has unique potential for continual injustices and the erosion of democratic principles. (Bell & Hotson, 2022, p. 18)

Google, Microsoft, and Baidu, while not yet successful at integrating LLM tools in the short term (Li, 2023, March 30), will use these tools within a framework of surveillance capitalism (Zuboff, 2015, 2019, 2022), “harvesting, analysing and selling data about the people who use their products” (Venkatesh, 2021, p. 359).

What’s next?

Currently, there are more questions than answers, and this will continue for some time as these LLM tools develop further and integrate into higher education and society. But we know for sure that we must:

    • Stay informed, and find out where expertise resides.
    • Respond in ways that put the best interests of students/learners first.
    • Protect student data privacy and intellectual property.
    • Inform administration of the changes and challenges happening directly to students.
    • See what other institutions are doing (e.g., the University of Saskatchewan).
    • Write about what is happening to you, your centre, and its work.


References

About. (2023). Woz.Org. Retrieved from

BBC Archive. (2022). 1964: ARTHUR C CLARKE predicts the future. Retrieved from

Barnett, S. (2023, January 30). ChatGPT is making universities rethink plagiarism. Wired. Retrieved from

Bell, S., & Hotson, B. (2022). “A podcast would be fun!”: The fetishization of digital writing projects. Discourse and Writing/Rédactologie, 32, 4–31.

Bryden, T. (2022, December 2).  What Are Large Language Models? Speak. Retrieved from

ChatGPT: Frequently asked questions about ChatGPT for USask instructors. (2023). University of Saskatchewan. Retrieved from

ChatGPT Prompt Engineering: The Secret to 10x Smarter Responses! (2023). All About AI. Retrieved from

Chechitelli, A. (2023). Understanding false positives within our AI writing detection capabilities. Turnitin (blog). Retrieved from

D’Andrea, A. (2023, February 1). Canadian universities crafting ChatGPT policies as French school bans AI program. Global News. Retrieved from

DeGeurin, M. (2023, April 6). Oops: Samsung employees leaked confidential data to ChatGPT. Gizmodo. Retrieved from

Dubey, R., & Nono, S. (2023, March 4). ChatGPT in the classroom: Why some Canadian teachers, professors are embracing AI. Global News. Retrieved from

Eaton, S. (2023, February 25). 6 tenets of postplagiarism: Writing in the age of artificial intelligence. Learning, Teaching and Leadership. Retrieved from

Eliot, L. (2023, January 27). Generative AI ChatGPT Can disturbingly gobble up your private and confidential data, forewarns AI ethics and AI law. Forbes. Retrieved from

Fowler, G. A. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. Retrieved from

Fraser, D. (2023, April 3). Federal privacy watchdog probing OpenAI, ChatGPT after complaint about popular bot. CTVNews. Retrieved from

Friend, T. (2016, October 3). Sam Altman’s manifest destiny. The New Yorker. Retrieved from

Glasper, R., & Wilson, C. (Eds.) (2023). How community colleges are adapting to generative AI: How education institutions are navigating the rise of ChatGPT and generative AI. The League for Innovation in the Community College.

Harrison, M. (2022, August 16). Fired engineer says he begged Google to test whether experimental AI was sentient. Futurism. Retrieved from

Humphreys, S. (2023, March 28). I allowed my students to draft their research papers using ChatGPT. Here’s what happened. Twitter. Retrieved from

Hotson, B. (2023). Writing centres and ChatGPT: And then all at once. CWCR/RCCR, 4(2 Spring 2023). Retrieved from

Hotson, B., & Bell, S. (2022). Friends don’t let friends Studiosity (without reading the fine print), CWCR/RCCR, 4(1 Fall 2022). Retrieved from

Hotson, B., & Bell, S. (2020). Three foundational concepts for tutoring digital writing. Writing Lab Newsletter, 44(1–2), 18–25. Retrieved from

Italy temporarily blocks ChatGPT over data privacy concerns. (2023, March 31). Al Jazeera. Retrieved from

Johnson, C., & Kritsonis, W. A. (2007). National school debate: Banning cell phones on public school campuses in America. National Forum of Educational Administration and Supervision Journals, 25(4), 1–6.

Li, J. (2023, March 30). Elon Musk calls for pause on GPT-4 and Chinese AI-related stocks tumble along with Baidu. South China Morning Post. Retrieved from

Mazzie, L. A. (2008, October 27). Is a laptop-free zone the answer to the laptop debate? Marquette University Law School. Retrieved from

mor10web. (2023, April 6). Don’t feed company or proprietary data into #ChatGPT #openai #gpt #api #llm #ethicalai #aiethics. TikTok. Retrieved from

Natalie. (2023). What is Chat GPT? Commonly Asked Questions about ChatGPT. OpenAI. Retrieved from

O’Brien, M. (2023, March 29). Musk, scientists call for halt to AI race sparked by ChatGPT. AP News. Retrieved from

Ortiz, S. (2023, March 29). How to use ChatGPT to write an essay. ZDNet. Retrieved from

Privacy policy. (2023, April 7). OpenAI. Retrieved from

Roose, K. (2023, January 12). Don’t ban ChatGPT in schools. Teach with it. NYTimes. Retrieved from

Sabin, S. (2023, March 10). Companies are struggling to keep corporate secrets out of ChatGPT. Axios. Retrieved from

Schools banning iPods to beat cheaters. (2007, April 17). Associated Press. Retrieved from

Surovell, E. (2023, April 3). A plagiarism detector will try to catch students who cheat with ChatGPT. The Chronicle of Higher Education. Retrieved from

Tait, A. (2022, August 14). ‘I am, in fact, a person’: Can artificial intelligence ever be sentient? The Guardian. Retrieved from

Top French university bans use of ChatGPT to prevent plagiarism. (2023, January 27). Reuters. Retrieved from

u/gracefully_confused. (2023, April 6). HELP !!! AI is becoming a nightmare. Reddit. Retrieved from

Gal, U. (2023, February 8). ChatGPT is a data privacy nightmare, and we ought to be concerned. Ars Technica. Retrieved from

Vahid, F. (2023, April 6). ChatGPT magnifies a long-standing problem. Inside Higher Ed. Retrieved from

Venkatesh, N. (2021). Surveillance capitalism: A Marx-inspired account. Philosophy, 96(3), 359–386.

Wang, C. (2022, December 1). I guess GPT-3 is old news, but playing with OpenAI’s new chatbot is mindblowing. Twitter. Retrieved from

Wargo, J. M. (2018). Writing With Wearables? Young children’s intra-active authoring and the sounds of emplaced invention. Journal of Literacy Research, 50(4), 502–523.

Whitby, T. (2014, March 13). Beyond the ban: Revisiting in-school internet access. Edutopia. Retrieved from

Windle, P. (2023). Chat GPT TEL community of practice. Retrieved from

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89.

Zuboff, S. (2019). Surveillance capitalism and the challenge of collective action. New Labor Forum, 28(1), 10–29.

Zuboff, S. (2022). Surveillance capitalism or democracy? The death match of institutional orders and the politics of knowledge in our information civilization. Organization Theory, 3(3), 1–79.