The responsibility for using GenAI in academic pursuits in higher education is shared between the user, the tool, and, in instances where the tool is part of teaching and learning processes, the institution. As such, to say that students using ChatGPT as a research tool bear sole responsibility for the accuracy of the information the tool provides is unethical and unjust, especially if the student is directed by an instructor to use the tool. It can be argued that the institution bears responsibility if it doesn’t provide instruction (digital literacy) on using the tools.
The anthropomorphism of GenAI writing and research tools marks their results differently from those of Google Scholar or Wikipedia, for example. GenAI tools, promoted for research and writing, bear equal and sometimes greater responsibility for the information they provide. These tools often position themselves within the limitations of their actions and the availability and accuracy of the data on which they draw by providing caveats with their answers. At the same time, the anthropomorphic language in which these answers are delivered is convincing and authoritative. As a result, these tools bear responsibility not only for the information they provide on the basis of its authoritative presentation; there is also a responsibility to those who use this information and to the work they produce with the tool, especially in light of OpenAI’s own admission that ChatGPT “hallucinates,” or makes up information. Continue reading “Is ChatGPT responsible for a student’s failing grade?: A hallucinogenic conversation”→
On July 20, 2023, OpenAI, the maker of ChatGPT and Dall·E, stopped offering its GenAI detection tool, the AI classifier, saying that it “is not fully reliable” (OpenAI, 2023). There’s a short statement on OpenAI’s website:
As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated. (Kirchner, Ahmad, Aaronson, & Leike, 2023, January 1; italics in the original)
There, OpenAI describes its AI classifier:
Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). (Kirchner, Ahmad, Aaronson, & Leike, 2023, January 1; emphasis in the original)
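To make OpenAI’s published rates concrete, here is a small illustrative calculation. The 26% true-positive and 9% false-positive figures are OpenAI’s own, quoted above; the pool sizes and function name are hypothetical, for illustration only:

```python
# Illustrating OpenAI's published AI classifier rates:
# 26% of AI-written text caught, 9% of human-written text falsely flagged.
def classifier_outcomes(n_ai, n_human, tpr=0.26, fpr=0.09):
    """Return (AI texts caught, AI texts missed, humans falsely flagged)."""
    caught = round(n_ai * tpr)
    missed = n_ai - caught
    false_flags = round(n_human * fpr)
    return caught, missed, false_flags

# In a hypothetical batch of 100 AI-written and 100 human-written texts:
print(classifier_outcomes(100, 100))  # (26, 74, 9)
```

On these rates, the classifier would miss nearly three quarters of AI-written text while still falsely flagging some human writers, which is consistent with OpenAI’s own “not fully reliable” assessment.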
As a result of the launch of ChatGPT in November 2022, fundamental changes to higher education have happened, and continue to happen, quickly and with unforeseen consequences. A US poll published in March 2023 found that “43% of college students have used ChatGPT or a similar AI application” and 22% “say they have used them to help complete assignments or exams,” representing “1 in 5 college students” (Welding, 2023, March 27). Inside Higher Ed published a piece, The Oncoming AI Ed-Tech ‘Tsunami’, predicting that “[t]he AI-in-education market is expected to grow from approximately $2 billion in 2022 to more than $25 billion in 2030, with North America accounting for the largest share” (D’Agostino, 2023, April 18). What was a relevant response to GenAI in December 2022 is now ancient history. The scene is fluid—there are few predictive models, and no one knows what might come next. A classroom instructor is quoted in May 2023: “AI has already changed the classroom into something I no longer recognize” (Bogost, 2023, May 16).
On April 4, 2023, Turnitin launched its AI detection tool (Chechitelli, 2023, March 16). At the time, Turnitin’s CEO, Chris Caren, wrote,
…we are pleased to announce the launch of our AI writing detection capabilities… To date, the statistical signature of AI writing tools remains detectable and consistently average. In fact, we are able to detect the presence of AI writing with confidence. We have been very careful to adjust our detection capabilities to minimize false positives and create a safe environment to evaluate student writing for the presence of AI-generated text. (Caren, 2023, April 4)
On April 3, the Washington Post, which had early access, tested the accuracy of Turnitin’s tool using 16 “samples of real, AI-fabricated and mixed-source essays.” It found the tool
…got over half of them at least partly wrong. Turnitin accurately identified six of the 16 — but failed on three… And I’d give it only partial credit on the remaining seven, where it was directionally correct but misidentified some portion of ChatGPT-generated or mixed-source writing. (Fowler, 2023, April 3)
At the same time, r/ChatGPT also provides information, growing in sophistication, on how to skirt AI detection. Tips appeared in May 2023 on how to “pass Turnitin AI detection” using ChatGPT and Grammarly (Woodford, 2023, May 14). Students were using ChatGPT to fool the detection tools with prompts that turned ChatGPT into a ghostwriter mimicking the student’s tone and voice. A student interviewed by the New York Times explained how they gave ChatGPT a sample of their writing and asked it
“…to rewrite this paragraph to make it sound like me…So, I copied [and] pasted a page of what I’d already written and then it rewrote that paragraph, & I was like, this works” (Tan, 2023, June 26).
Also in May, Turnitin began to provide caveats for its AI detection tool. David Adamson, an AI scientist and Turnitin employee, says in a Turnitin-produced video, Understanding false positives within Turnitin’s AI writing detection capabilities, that instructors need to do some work when using the tool. He admits that the results of submissions to the AI detection tool will need to be taken “with a big grain of salt,” saying that, in the end, it is up to the instructor to “make the final interpretation” of what is created by GenAI and what isn’t—“You, the instructor, have to make the final interpretation” (Turnitin, 2023, May 23). These false positives, according to Adamson, have different “flavours”: specific kinds of writing that Turnitin’s tool is prone to misidentifying as GenAI writing. These include:
Repetitive writing: the same words used again and again.
Lists, outlines, short questions, code, or poetry.
Writing by developing writers, English-language learners, and those writing at middle and high school levels.
Adamson ends the video by saying, “we own our mistakes. We want to…share with you how and when we are wrong” (Turnitin, 2023, May 23). These mistakes, Adamson states, represent ~1%, or 1 in 100, of submissions through the tool.
While this may be acceptable to Turnitin, this 1% represents real student assignments, written by real students. By May 14, 2023, Turnitin reported that 38.5 million submissions had been examined by their GenAI detection tool, “with 9.6% of those documents reporting over 20% of AI writing and 3.5% over 80% of AI writing” (Merod, 2023, June 7). If we use Adamson’s rate of 1% false positives, 3.5% of 38.5 million submissions is about 1.3 million; 1% of 1.3 million is roughly 13,000 student papers that were found to be written in part by GenAI when in fact they were not. If Adamson’s 1% false-positive rate is applied to the 9.6% of papers reported with over 20% of AI writing (about 3.7 million assignments), this totals about 37,000 assignments. Together, this is approximately 50,000 false positives affecting 50,000 students. For scale, at two of Canada’s largest schools, the University of Alberta has a student population of 40,100 and York University, 55,700. For the students, being accused of an academic violation can not only affect their academic record but also cause anxiety, loss of scholarships, and cancellation of student visas. Turnitin’s AI scientist Adamson says that the 1% false positive rate is “pretty good…” (Turnitin, 2023, May 23).
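The back-of-envelope arithmetic above can be checked in a few lines. This sketch mirrors the article’s own method of applying Adamson’s ~1% rate directly to the flagged counts (it is not a formal statistical model); the totals and percentages are the figures quoted above, and the variable names are mine:

```python
# Turnitin's reported May 2023 totals and Adamson's stated ~1% rate,
# as cited in the text above.
TOTAL = 38_500_000      # submissions as of May 14, 2023
SHARE_OVER_80 = 0.035   # share flagged with over 80% AI writing
SHARE_OVER_20 = 0.096   # share flagged with over 20% AI writing
FP_RATE = 0.01          # Adamson's stated false-positive rate

flagged_over_80 = TOTAL * SHARE_OVER_80   # ~1.35 million papers
flagged_over_20 = TOTAL * SHARE_OVER_20   # ~3.7 million papers

fp_over_80 = flagged_over_80 * FP_RATE    # ~13,500 papers
fp_over_20 = flagged_over_20 * FP_RATE    # ~37,000 papers

print(round(fp_over_80), round(fp_over_20), round(fp_over_80 + fp_over_20))
```

The computed values (roughly 13,500 plus 37,000, for about 50,000 in total) line up with the approximate figures in the text.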
The University of Pittsburgh and Vanderbilt University have decided to not use Turnitin’s tool. The University of Pittsburgh
has concluded that “current AI detection software is not yet reliable enough to be deployed without a substantial risk of false positives and the consequential issues such accusations imply for both students and faculty. Use of the detection tool at this time is simply not supported by the data and does not represent a teaching practice that we can endorse or support.” Because of this, the Teaching Center will disable the AI detection tool in Turnitin effective immediately (Teaching Center doesn’t endorse, 2023, June 23).
Vanderbilt indicated that they’d “decided to disable Turnitin’s AI detection tool for the foreseeable future. This decision was not made lightly and was made in pursuit of the best interests of our students and faculty,” citing Turnitin’s lack of transparency about how the tool works as well as its false positive rate (Coley, 2023, August 16). Vanderbilt also did the math regarding the impact of false positives on their students:
Vanderbilt submitted 75,000 papers to Turnitin in 2022. If this AI detection tool was available then, around 3,000 student papers would have been incorrectly labeled as having some of it written by AI. Instances of false accusations of AI usage being leveled against students at other universities have been widely reported over the past few months, including multiple instances that involved Turnitin… In addition to the false positive issue, AI detectors have been found to be more likely to label text written by non-native English speakers as AI-written. (Coley, 2023, August 16).
Vanderbilt concluded, “we do not believe that AI detection software is an effective tool that should be used” (Coley, 2023, August 16).
Canadian higher education institutions have taken a mixed approach to AI detectors. On April 4, 2023, the University of British Columbia responded quickly to Turnitin’s AI detection tool, stating that they would not enable it. Their reasoning includes, among other points: “Instructors cannot double-check the feature results”; “Results from the feature are not available to students”; and that the “ability of the feature to keep up with rapidly evolving AI is unknown” (University of British Columbia, 2023, April 4).
Nipissing University, in its senate-approved June 2023 “Generative AI Guide for Instructors,” mentions that “use of generative AI ‘detectors’ is not recommended” (n.p.); the University of Waterloo similarly cautions faculty that “controlling the use of AI writing through surveillance or detection technology is not recommended” (Frequently Asked Questions, 2023, July 25). Conestoga College, in its guide to using Turnitin’s tool, instructs faculty to indicate that they are using it: “Without such notice, a student may at some point appeal” (Sharpe, 2023, June 27).
A paper published this month in the International Journal for Educational Integrity, “Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text,” which did not include Turnitin, found that the “performance” of AI detection tools on GPT 4-generated content was “notably less consistent” in differentiating between human and AI-written text (Elkhatat, Elsaid, & Almeer, 2023, p. 6). “Overall, the tools struggled more with accurately identifying GPT 4-generated content than GPT 3.5-generated content” (p. 8). The findings of this study should raise questions about using GenAI detection tools in higher education:
While this study indicates that AI-detection tools can distinguish between human and AI-generated content to a certain extent, their performance is inconsistent and varies depending on the sophistication of the AI model used to generate the content. This inconsistency raises concerns about the reliability of these tools, especially in high-stakes contexts such as academic integrity investigations. (p. 12-13)
The paper concludes by advising that “the varying performance [of detection tools on ChatGPT 3.5 and ChatGPT 4] underscores the intricacies involved in distinguishing between AI and human-generated text and the challenges that arise with advancements in AI text generation capabilities” (p. 14).
International students take the brunt, again
In higher education, it is well documented that international students, who make up “approximately 17% of all post-secondary enrollments in Canada” (Shokirova et al., 2023, August 23), are accused of academic integrity breaches at a higher rate than domestic students (see, for example, Adhikari, 2018; The complex problem…, 2019; Eaton & Hughes, 2022; Fass-Holmes, 2017; Hughes & Eaton, 2022). As we see in writing centres, undergraduate international students are often English-language learners, many of whom are writing academic papers in English at the post-secondary level for the first time. As a result, many undergraduate international students’ level of academic English writing is low. Some students that I have tutored take several years of writing practice to attain a level of academic writing that many in the academy consider “polished” or at post-secondary levels.
According to Turnitin, students who are English-language learners, developing writers, or writing at a secondary level of academic writing are at a higher risk of false positives from its tool. Adamson admits that Turnitin’s false positive rate is “slightly higher” for these students—“Still near our 1% target, but there is a difference” (Turnitin, 2023, May 23). At the same time, Adamson also claims that Turnitin doesn’t see “any evidence” that the tool is “biased against English language learners from any country at any level” (Turnitin, 2023, May 23). Unfortunately, I was not able to find data published by Turnitin to substantiate these claims, including what the difference in the false-positive rate for these students is: What does Turnitin consider “near” its “1% target”? Is it 1.5%, 2%, 3.5%? Considering the large numbers involved (38.5 million submissions as of May 2023), even a 0.5% increase is significant.
What will happen in September?
Like the winter semester of 2023, it may well be that the first assignments submitted this month will start another round of changes to academic writing, academic integrity, and students’ use of GenAI tools. It will be important for institutions to monitor and update their policies and procedures regarding AI detection tools, like Turnitin’s, in response to possible changes in GenAI writing. Specific attention should be paid to international students in these cases, as they are already vulnerable within higher education.
Eaton, S. E., & Hughes, J. C. (2022). Academic Integrity in Canada. In S. E. Eaton (Ed.), Ethics and Integrity in Educational Contexts (Vol. 1, pp. xi-xvii). Springer. https://doi.org/10.1007/978-3-030-83255-1
Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. International Journal for Educational Integrity, 19(17). https://doi.org/10.1007/s40979-023-00140-5
Fass-Holmes, B. (2017). International students reported for academic integrity violations: Demographics, retention, and graduation. Journal of International Students, 7(3), 644–669. https://doi.org/10.5281/zenodo.570026
Hughes, J. C., & Eaton, S. E. (2022). Student integrity violations in the academy: More than a Decade of growing complexity and concern. In S. E. Eaton & J. C. Hughes (Eds.), Ethics and Integrity in Educational Contexts (Vol. 1, pp. 61–79). Springer. https://doi.org/10.1007/978-3-030-83255-1_3
On August 31, 2023, OpenAI posted to their website Teaching with AI, described as a guide “to accelerate student learning” using ChatGPT. This guide provides prompts to “help educators get started with” ChatGPT. These include prompts for lesson-planning development, creating analogies and explanations, helping “students learn by teaching,” as well as creating “an AI tutor.”
by Clare Bermingham, Director, Writing and Communication Centre, University of Waterloo
Note: Part two will provide the framework with some follow-up information. A link to the framework will be added to this post at that time. Editor.
How institutions and course instructors are managing generative AI (GenAI), such as ChatGPT, Bard, and Dall⋅E, has been the focus of both scholarly and public-facing articles (Benuyenah, 2023; Berdahl & Bens, 2023; Cotton et al., 2023; Gecker, 2023; Nikolic et al., 2023; Sayers, 2023; Somoye, 2023), but few articles or resources have addressed students directly. And yet students are subjected to the suspicions that GenAI has raised among worried instructors and administrators, and they are left to deal with the resulting surveillance and the extra pressure of in-class assignments and monitored final exams (Marken, 2023). This is a critical point where writing centres can and should intervene. Our work is primarily student-facing, and we have the ability, through one-to-one appointments, to have conversations with students about what they are experiencing and what they need. Continue reading “Productive and Ethical: Guiding student writers in a GenAI world (Part 1 of 2)”→
Christin Wright-Taylor, Manager, Writing Services, Wilfrid Laurier University
This term, I seem to be meeting with more students who struggle to start the writing process. I tallied my writing appointments so far and found that 32% of them have been dedicated to helping the student generate writing for their assignments. For me, this has been an increase over previous terms. I’ve enjoyed these appointments, but I’ve also found myself hesitating on the precipice of a guided freewriting prompt, wondering: Do these work?
I can report that, yes, they do!
However, the experience of guiding my students through this formative, messy, unruly part of writing has made me reflect on what I often forget about the act of writing: it requires trust. Trust in me as the writing consultant, and trust, from both of us, in the process.
Brittany (Britt) Amell is a Visiting Research Fellow at the Digital Humanities Hub at the University of London. Her research focuses on critical, collaborative, and reparative engagements with unconventional scholarship, non-traditional knowledge production, and writing and genre studies. She can be reached at BrittanyAmell @ Cmail.Carleton.ca
As a social practice, the doctoral dissertation has been characterized as the outcome of complex negotiations that surround the entire dissertation process (Paltridge et al., 2012). Here, the word negotiation often implies a mutually beneficial agreement arising between two or more parties as a result of dialogue. However, the experiences of doctoral writers often reflect a different reality, one where choices are constrained and affordances are limited. Other factors, such as power differentials and assessment conditions, can also play a significant role in shaping the local contexts in which doctoral students write. Few doctoral writers, for instance, wish to risk failure when it comes to the assessment or examination of their dissertations. Given this, it is important to reflect critically on usages of negotiation that imply a natural smoothness or ease accompanies the dissertation writing cycle. Continue reading “Navigating the push and the pull: ‘Negotiating’ doctoral writing”→
Liv Marken, Rebekah Bennetch, and Brian Hotson are authoring three pieces on handwriting in academic writing. We’re beginning with Liv’s piece, which is in two parts. Here’s part one. – CWCR/RCCR Editor
Vol. 4, No. 3 (Summer 2023)
Liv Marken, Contributing Editor, CWCR/RCCR
When post-secondary institutions resumed in-person classes this year, many instructors and programs brought back handwritten, in-person, timed, and invigilated examinations (Hoyle, 2023; McLoughlin, 2023). This return to tradition was partly spurred by anxieties around the increase in student cheating during the remote phase of the pandemic (Bilen, Matros & Matros, 2021; Eaton, et al., 2023; Lancaster & Cortolan, 2023; Noorbehbahani, Mohammadi, & Aminazadeh, 2022; Peters, 2023; Reed, 2023). Then, with OpenAI’s November 30, 2022 release of the artificial intelligence text generator, ChatGPT, anxieties about cheating escalated rapidly (Heidt, 2023). The AI language model’s ability to quickly generate natural-sounding text (in addition to its abilities in language tasks such as translation, summarization, and question answering) was exciting but also alarming (Cotton, Cotton, & Shipway, 2023; Susnjak, 2022). Since its release, ChatGPT’s steady improvement, as well as the proliferation of similar AI writing tools, have led to newly intensified anxieties around maintaining academic integrity (Cotton, Cotton, & Shipway, 2023; Susnjak, 2022). AI detectors, which may seem like a silver bullet to prevent and catch plagiarism, have been shown to make false accusations (Drapkin, 2023) and to show bias against non-native English speakers (Liang et al., 2023). OpenAI found that their own detection tool, AI Classifier, was just not effective at catching cheating, leading the company to “quietly” shut it down (Nelson, 2023): “As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy” (OpenAI, 2023). With pandemic and generative AI cheating concerns, and no easy solutions, post-secondary institutions are in a race against the clock to redesign assessment before the fall semester (Fowler, 2023; Heidt, 2023; Hubbard, 2023).
Continue reading “The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed, and Invigilated Exams: Causes, Context, and the Perpetuation of Ableism (Part 1 of 2)”→
Thanks to all for the warm welcome to the CWCA/ACCR’s presidency. I come to this position with humility, a readiness to serve the Canadian community of writing centre professionals, and immense gratitude for the contributions of my fellow Board members. This community is near and dear to my heart. I’ve grown up in writing centres, starting my career as a peer tutor at Wilfrid Laurier’s Writing Centre back in 2004 before becoming an instructor at the University of Waterloo’s Writing Lab and English Language Proficiency Exam program as a graduate student. When I graduated from UW with a dissertation project centered on how student writers learn to engage with sources (often despite their course directors’ assignment designs and their institution’s policing of academic honesty), I was privileged to join the world of writing centres with some permanence at York University’s Writing Department. I attended the CWCA/ACCR’s first independent conference in Victoria and was eager to get involved a few years later in a leadership capacity as Digital Media Chair. Since then, I have committed countless evenings to this amazing organization working in service to my friends and colleagues across Canada. Continue reading “Precarity and pluckiness: A message from in-coming CWCA/ACCR President, Stevie Bell”→
Vol. 4, No. 8 (Spring 2023) Brian Hotson, Editor, CWCR/RCCR
I recently was going into a shop in a stripmall, and one of my son’s friends from school was sitting on the sidewalk outside the store playing on their phone. I chatted with them a bit, and then asked if either of their parents was in this shop. “No. I come here because we don’t have internet at home.”
Please join this open, participatory discussion about how writing centres are integrating, responding to, and guiding students and instructors on ChatGPT and similar large-language model (LLM) Artificial Intelligence.
(Chair) Clare Bermingham, PhD
Director, Writing and Communication Centre, University of Waterloo
Brian Hotson, MTS
Senior Manager, Program and Impact Evaluation, Dalhousie University
Michael Cournoyea, PhD
Instructor, Health Sciences Writing Centre, University of Toronto
Zoe Mukura, OCELT
Language Instructor, Saskatchewan Polytechnic
ChatGPT & LLM AI Overview / Q&A
Breakout session 1: Participants self-select based on topics
Breakout session 2: Participants self-select based on topics
Vol. 4, No. 6 (Spring 2023) Brian Hotson, Editor, CWCR/RCCR
Having a baseline foundation is important to building writing and tutoring programs and support for students. This is especially true when technology becomes available that dramatically changes not only the way we teach, but the way we think about education. This is the case with ChatGPT and other Large Language Model (LLM) tools. (Think: ChatGPT is to LLMs as Band-Aid is to bandages, or Kleenex is to tissues.)
Liv Marken, Learning Specialist (Writing Centre Coordinator) Writing Centre University of Saskatchewan
In April 2023, I asked writing centre practitioners to answer five questions on ChatGPT and their centres’ responses. Over the next month, I’ll post the responses. If you have a perspective to offer, please use this form, and I’ll post it here. Brian Hotson, Editor, CWCR/RCCR
What actions, policies, resources, or information has your institution put in place for ChatGPT?
It has been an exciting but challenging term because there has been uncertainty about who would take leadership on the issue. There wasn’t any official guidance issued, but on our academic integrity website, an instructor FAQ was published in early March, and soon after that a student FAQ. Library staff (including me and my colleague Jill McMillan, our graduate writing specialist) co-authored these with a colleague from the teaching support centre. Continue reading “ChatGPT snapshot: University of Saskatchewan”→
‘Tis the season, conference season. For those who have not written a conference proposal, it can seem like a daunting project. The thought of it can cause many to not submit at all. It can be difficult to know where to start and what to write, while following a conference’s CFP format and theme. We’ve had both successful and rejected proposals. As conference proposal reviewers and conference organizers, we’ve read many proposals and drafted several conference calls-for-proposals, as well. Here are some of the things that we’ve learned from experience. We hope this guide will provide you with some help to get your proposal started, into shape, and submitted. Continue reading “Writing a conference proposal: A step-by-step guide”→
Sam Altman, a co-founder of OpenAI, creators of ChatGPT, said in 2016 that he started OpenAI to “prevent artificial intelligence from accidentally wiping out humanity” (Friend, 2016, October 2). Recently, Elon Musk (also a co-founder of OpenAI) and The Woz (a co-founder of Apple), along with several high-profile scientists, activists, and AI business people, signed a letter urging a pause in the rollout of Large Language Model (LLM) AI tools, such as ChatGPT. The letter warns of an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control” (Fraser, 2023, April 4). A Google engineer, Blake Lemoine, was fired for claiming that Google’s LLM tool, LaMDA, had become sentient:
I raised this as a concern about the degree to which power is being centralized in the hands of a few, and powerful AI technology which will influence people’s lives is being held behind closed doors … There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed. (Harrison, 2022, August 16)
Elections are coming up at the CWCA/ACCR AGM in May, and we have several board positions that will be open. Writing centre people are the best people! And CWCA/ACCR is composed of an awesome group of folks who are invested and passionate about supporting and advancing writing centre work.
Have you considered a position on our board?
I know what you’re thinking…
You’re worried you haven’t been working in writing centres for very long.
Many board members began volunteering with CWCA/ACCR when we were relatively new to writing centre work. It’s a great way to make connections, get support in your role, and become more engaged in research. There’s no such thing as too new! I was brand new when I attended my first CWCA/ACCR conference in 2014, and I was encouraged to stand for election as Secretary only a couple of years later. I had no idea what to expect. I was excited to land within a community of people who were talking about a range of questions and issues, from ideas for training peer tutors to antiracism in writing centres. They were self-reflective, curious, deeply committed to student learning and student experience, and not afraid to share their own learning journeys.
Are you worried you won’t have the time?
The commitment isn’t a huge one, depending on your role and what projects you get engaged in. Think 5-10 hours a month, on average. That’s like 10-20 minutes a day. It’s a cup of coffee, a washroom break, a… well, you get it. And the returns are so worth it.
What do you need as a writing centre professional or researcher or tutor? What support are you missing? What resources do you wish you had a few years ago, or even yesterday? Being a member of the CWCA/ACCR board is an opportunity to create those supports and resources for colleagues and student members across the country. It’s the chance to hear what people need and then find ways to deliver.
From conferences to book discussions, from workshops to panels, the range of projects that board members work on is exciting and fulfilling. I find the work so enriching for my own professional role. I’ve learned a huge amount from colleagues, and I’ve upgraded my skills in meeting facilitation and project organization. Honestly though, it just feels great to know that we’re contributing to the professional experiences of our members, regardless of where they are in their careers or learning journeys. For me, it’s been exciting that the work of the board in the last few years has overlapped with my commitment to equity, antiracism, and decolonization and reconciliation. Having the opportunity to help make space for conversations and actions on these topics and help reduce barriers to participation in the field, is something that I’m very grateful for.
What are you interested in? There will likely be ways to connect these interests to your board work and engage with others interested in similar things.
You’d love to hear what positions are up for election this year?
I’d love to share! Find out more about the following positions by reading the descriptions in our by-laws. You can also contact me or the current member in any role.
Vidya Natarajan is a first-gen immigrant whose mother tongue is Tamil, and a settler on the lands of the Anishnaabek, Haudenosaunee, Lunaapewak and Chonnonton Peoples (now called London, Ontario). She teaches writing and coordinates the Writing program at King’s University College.
Megumi Taguchi lives and works on the unceded, traditional lands of the Qayqayt Peoples, in a city commonly known as New Westminster, in British Columbia. A fourth generation racialized settler, she believes that because her family on her father’s side settled in the Okanagan region, home of the Syilx (say-ooks) people, they were able to avoid the worst of the racial discrimination and imprisonment by the Canadian government during WW2. She is a former peer tutor and English language tutor, and is currently services coordinator at Douglas College, where she supervises and helps run the operational side of tutoring. She is working on her master of education in TESOL at the University of British Columbia.
SIGs and Caucuses
Special Interest Groups (SIGs) have long been a way for likeminded scholars and activists to come together at conferences around subjects or projects in which they are deeply invested. As antiracism became a key node for advocacy, research, and attention among members of the International Writing Center Association (IWCA), the Antiracism Activism Special Interest Group, active since 2006 (Godbee & Olson, 2014) consolidated itself. Talisha Haltiwanger Morrison and Keli Tucker (2019) document how the IWCA’s “Antiracism Activism SIG became a standing SIG in 2017” (p. 4). They note that under their co-leadership, the SIG’s “primary goal has been to develop resources and support to help its members move toward the action invoked in the SIG’s name” (2019, p. 4). Many SIGs function on the basis of common professional and academic interests; in giving racial identity full recognition, however, IWCA’s Antiracism Activism SIG acknowledges the complex involvement of identity-based interests in social and professional interactions. Continue reading “There’s a BIPOC Caucus in the CWCA/ACCR”→
Brian Hotson, CWCR/RCCR Editor
Stevie Bell, CWCR/RCCR Associate Editor
Like many teachers on a late-August vacation, education companies can see September on the horizon. The difference is that these companies aren’t relaxing. They’re sending e-mails and booking video conferences with offers of freshly printed textbooks, handy workbooks, new online tools, and easy-to-use mobile apps that promise to make student life easier and save universities and colleges money.
This post is from the 2022 CWCA/ACCR annual conference virtual poster session. – Stevie Bell and Brian Hotson, 2022 CWCA/ACCR conference co-chairs
By Xiangying Huo & Elaine Khoo, University of Toronto Scarborough
The poster presents an innovative approach to supporting English Language Learners using a learner-driven and instructor-facilitated approach. Through this one-on-one support by a writing instructor, students develop the linguistic and knowledge capital required for writing in their respective courses. This risk-free approach, which embraces relationality, respect, and reciprocity to support students in their respective zones of proximal development, can be enhanced by Culturally Responsive Pedagogy. One month of invested time by the instructor, in collaboration with the student, has resulted in transformative impact and opens up opportunities for further development, should the student wish to pursue them. Continue reading “Reading and Writing Excellence Program: A Safe and Brave Space to Address Inequities”→