Vol. 4, No. 6 (Spring 2023)
Brian Hotson, Editor, CWCR/RCCR
Having a baseline foundation is important to building writing and tutoring programs and support for students. This is especially true when a technology becomes available that dramatically changes not only the way we teach, but the way we think about education. This is the case with ChatGPT and other large language model (LLM) tools. (Think: ChatGPT is to LLMs as Band-Aid is to bandages, or Kleenex is to tissues.)
A number of writing instructors and administrators from across Canada have created a shared document, Crowdsourcing Responses to Generative AI from Canadian Writing Experts, to provide a community of practice for not only responding to ChatGPT, but for developing pedagogy and teaching and tutoring practices everyone in the community can use. One element is a position statement. If you work in writing centres in Canada, please consider participating in the Crowdsourcing document.
Why have a position statement?
A position statement can help your centre build a foundation for LLM tools like ChatGPT, informing the creation of resources, the development of pedagogy, and vision statements, for example. It can provide a clear view for those in your institution looking for direction and understanding in responding to and teaching with LLM tools. A position statement is also a grounding point for staff to turn to when questions are asked about your centre's views of LLM tools like ChatGPT.
Below is a position statement from the Crowdsourcing document. It can be used by institutions, units, programs, and organizations. It is released under an open Creative Commons licence (Attribution-NonCommercial-ShareAlike 4.0 International, CC BY-NC-SA 4.0): you may modify and adopt it as you see fit, but it may not be used for commercial purposes.
A draft position statement for ChatGPT and Large Language Model (LLM) tools
Large language model (LLM) tools, such as ChatGPT (Generative Pre-trained Transformer), are artificial intelligence (AI) tools pretrained to generate text in language similar to that of humans. An LLM generates text (answers to questions) from a large dataset of information on which it has been trained. A user enters a question or prompt into the tool, which generates a response from its dataset using machine learning, “which allows it to weigh the importance of different words in a sentence when making predictions” from the prompts (ChatGPT).
LLM-generated text, like that of ChatGPT, can provide excellent research synthesis, for example, but it does not engage in critical thinking, personalized reflection, interpretation, complex problem-solving, or context-specific applications of course material. It cannot form new thoughts or create new ideas.
Like all digital writing tools, LLMs have many uses in knowledge production and writing instruction, many of which are yet to be discovered (Greylock, 2022). At the same time, these tools have caused consternation among writing centre professionals, as well as government officials, school boards, education administrators, and faculty and instructors at all levels of education. While scholarly literature and institutional policies on LLMs are now being written, much has already been written in popular media, news sources, and social media with varying degrees of rancour, as well as accuracy and predictive value. As a means to clarify and create an understanding of LLMs such as ChatGPT and their situatedness within writing centre pedagogy, rhetoric, and policy, it is helpful to have a position statement as a focus and base standard.
LLMs should not be banned
LLMs such as ChatGPT should not be banned from use in HE institutions, and any restrictions placed on their use must be based on the literature, best practices, and policies, including academic integrity policies. Many technologies that are now used in education were once banned, including the internet, laptops, and word-processing apps, among others. LLMs such as ChatGPT will only grow in importance as digital writing tools, and should be incorporated into writing instruction and tutoring. There may be specific situations in which it is reasonable to limit or restrict the use of LLMs. Policies that prohibit the use of ChatGPT in a particular context should emerge from clear articulations of how student work aligns with important learning outcomes, not from general discomfort with this technology or a vague sense that all student work must be “original.”
LLMs should be incorporated into institutions
LLMs such as ChatGPT should be incorporated into our institutions as educational tools. Writing centres and writing course instructors can play an important role in the incorporation of LLMs not only into writing instruction and tutoring, but into institutions as a whole. By providing an understanding of the tool's uses and proactively guiding its use and moderation, in collaboration with other academic support units as well as teaching and learning units, writing centres can be focal points for bringing the tool into the academy. Writing centres have a long history of embracing digital tools and supports, beginning in the 1960s and 1970s (Palmquist, 2003). Integrating LLMs into the academy and academic supports can provide direction and clarification of their uses as a positive, effective learning tool, and mitigate their negative effects.
Develop pedagogy and train staff
Pedagogy and practices relating to LLMs and writing instruction and tutoring should be developed and employed in our writing centres and writing teaching units. Administrations should provide funding and resources for writing studies professionals (instructors, writing program administrators, writing centre tutors, etc.) to develop pedagogies, rhetorics, best practices, communities of practice, and resources relating to LLMs, including LLM-specific training, attendance at LLM conferences, and access to external LLM expertise. Training must be developed in collaboration with students, writing faculty, tutors, IT specialists, teaching and learning units, and libraries. As this technology is still developing, training is required on an ongoing basis.
Writing is the standard means of knowledge production and acquisition in higher education. As such, much of the work of university-level faculty and staff connected to the field of Writing Studies involves the crafting of text for readers. Writing support in higher education can also extend to multi-modal composition, given that we support and tutor students in creating text encountered or used in a variety of media. This includes audio/video production, image creation, and graphics creation. As a result, faculty, including Writing Studies experts, should not assume that LLMs only impact traditional forms of student writing. LLMs and other AI tools are now available to summarize long texts; manage video editing with only text prompts; turn text into music (e.g., MusicLM); translate one language to another (e.g., DeepL); and create art and images (e.g., DALL·E). Instructors, scholars, and tutors, therefore, should not imagine that asking students to replace written essays with media-rich projects, for example, will guarantee that students produce work free of LLM tools.
It is predicted that LLM tools like ChatGPT will be incorporated into much of how we access information on the internet (Greylock, 2022). Major corporations whose digital writing tools are ubiquitous in academic writing instruction and tutoring are already in this process (OpenAI, 2023). As educators and tutors, we have a responsibility not only to be prepared for these changes, but to embrace what these digital writing tools will bring to our centres and to provide leadership in collaboration with institutional partners.
Greylock. (2022). OpenAI CEO Sam Altman: AI for the next era [Video]. YouTube. https://www.youtube.com/watch?v=WHoWGNQRXb0
OpenAI. (2023). OpenAI and Microsoft extend partnership. https://openai.com/blog/openai-and-microsoft-extend-partnership
Palmquist, M. (2003). A brief history of computer support for writing centers and writing-across-the-curriculum programs. Computers and Composition, 20(4), 395–413. https://doi.org/10.1016/j.compcom.2003.08.013