The Pandemic, GenAI, & the Return to Handwritten, In-Person, Timed, and Invigilated Exams: Causes, Context, and the Perpetuation of Ableism (Part 1 of 2)

Academic Handwriting, Part 1 of 3

Liv Marken, Rebekah Bennetch, and Brian Hotson are authoring three pieces on handwriting in academic writing. We’re beginning with Liv’s piece, which is in two parts. Here’s part one.
– CWCR/RCCR Editor

Vol. 4, No. 3 (Summer 2023)

Liv Marken, Contributing Editor, CWCR/RCCR

When post-secondary institutions resumed in-person classes this year, many instructors and programs brought back handwritten, in-person, timed, and invigilated examinations (Hoyle, 2023; McLoughlin, 2023). This return to tradition was partly spurred by anxieties around the increase in student cheating during the remote phase of the pandemic (Bilen & Matros, 2020; Eaton et al., 2023; Lancaster & Cotarlan, 2023; Noorbehbahani, Mohammadi, & Aminazadeh, 2022; Peters, 2023; Reed, 2023). Then, with OpenAI’s November 30, 2022 release of the artificial intelligence text generator ChatGPT, anxieties about cheating escalated rapidly (Heidt, 2023). The AI language model’s ability to quickly generate natural-sounding text (in addition to its abilities in language tasks such as translation, summarization, and question answering) was exciting but also alarming. Since its release, ChatGPT’s steady improvement, as well as the proliferation of similar AI writing tools, has led to newly intensified anxieties around maintaining academic integrity (Cotton, Cotton, & Shipway, 2023; Susnjak, 2022). AI detectors, which may seem like a silver bullet to prevent and catch plagiarism, have been shown to make false accusations (Drapkin, 2023) and to be biased against non-native English speakers (Liang et al., 2023). OpenAI found that its own detection tool, AI Classifier, was simply ineffective at catching cheating, leading the company to “quietly” shut it down (Nelson, 2023): “As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy” (OpenAI, 2023). With pandemic and generative AI cheating concerns, and no easy solutions, post-secondary institutions are in a race against the clock to redesign assessment before the fall semester (Fowler, 2023; Heidt, 2023; Hubbard, 2023).

Pandemic Cheating Panic

For much of the pandemic, post-secondary programs and instructors could no longer mandate in-person assessments. Some partly replicated in-person invigilation through the unethical use of eye-tracking software and cameras (see Coghlan, Miller, & Paterson, 2021; McKenna, 2022; Parnther & Eaton, 2021; Peters, 2023). Exams conducted with proctoring software strove to replicate a traditional in-person, invigilated exam as much as possible, deploying such cheating-mitigation tactics as non-uniform and sequenced questions; surveillance; biometric authentication; and limitation of time, space, and resources (Alin, Arendt, & Gurell, 2022; Chin, 2020). However, there are arguments that such software perpetuates white supremacy, sexism, ableism, and transphobia (Swauger, 2020). Proctoring software corporations were documented as putting their own interests before ethics when it came to privacy (Swauger, 2020); in one troubling example, the CEO of Proctorio posted a Canadian student’s chat logs on Reddit (Zhou, 2020). Other instructors and programs experimented with alternative assessment methods, including oral examinations; digital submissions or the use of assistive technologies; assessments requiring higher-order thinking skills; and “take-home” examinations,[1] particularly in the early pandemic “emergency remote” context (Alin, Arendt, & Gurell, 2022; Haus, Pasquinelli, Scaccia, & Scarabottolo, 2020).[2]


While some alternative assessment methods were initially praised for their potential to enhance accessibility, concerns over increased pandemic cheating and the ability to have on-campus exams have culminated in a rush to return to handwritten, in-person, timed, and invigilated examinations (Cassidy, 2023; Heid, 2022; Hoyle, 2023; McLeod, 2023; McLoughlin, 2023). Although the regression to this type of assessment may assuage instructors’ concerns about cheating, it also raises concerns about accessibility, not to mention the overall ineffectiveness of such assessments (Tai et al., 2023). As Tai et al. (2023) point out, at the same time that institutions welcome more students from equity-seeking groups, instead of reflecting on the systemic changes that need to be made, the student is “position[ed] . . . as deficient,” which “ignores a problematic possibility: that the assessment is not fit for its multiple purposes” (n.p.).

It is important, too, to recognize that while the remote assessment environment was beneficial for some in terms of accessibility, it was not beneficial for everyone. The students at an advantage were those privileged with resources such as stable internet access, time-zone alignment, adequate technology, a good study space, and solid support networks. Now, as post-secondary institutions have resumed on-campus activities, we see new problems emerging as well as old problems returning, often with renewed intensity.

Perpetuation of Ableism

For disabled people whose accessibility improved in pandemic remote contexts, there has been a sense of grief, sadness, disappointment, and anger at how quickly the world has forgotten so many of the accessibility gains of the pandemic (Barden, Bê, Pritchard, & Waite, 2023; Nowakowski, 2023). As Nowakowski (2023) argues, when the general longing for a return to pre-pandemic norms intensified, our society continued to uphold ableism and disregard the fundamental rights of disabled individuals, even though inclusive approaches offered numerous benefits. This perpetuation of ableism has extended to students returning to campus.

However, it is important to remember that while some people with disabilities made unprecedented accessibility gains during the pandemic, many did not, and many, at the same time, were dehumanized and at a much higher risk of death from the COVID-19 virus (Barden, Bê, Pritchard, & Waite, 2023; Saia, Nerlich, & Johnston, 2021). One report highlighted the adverse effects of COVID-19 on disabled individuals, including negative impacts on mental health, employment, finances, social care, healthcare, and community access, not to mention discriminatory attitudes leading to low priority for treatment and vaccination and an increased likelihood of “do not resuscitate” orders (Inclusion London, 2021). Attitudes towards the expendability of people with disabilities, often intersecting with other identities such as age, gender identity, race, immune status, body size, or social class, appeared to invoke implicitly eugenicist attitudes (Laterza & Romer, 2020; Yong, 2022).


Disabled students who experienced either challenges or gains during the last three years may find, either way, that their lives are more difficult than they were pre-pandemic. Overall, many students, disabled and non-disabled, are contending with increased hardships, including learning loss during remote phases that can make for a bumpy post-secondary transition (Volante & Klinger, 2023; Wong, 2023). This skills gap can be compounded by financial pressures, such as steadily rising inflation (Bank of Canada, 2023; Statistics Canada, 2023); a higher cost of living (Canadian Federation of Students, 2022; Postelnyak, 2023); tuition increases (Canadian Federation of Students, 2022); rising debt (Piper & Wong, 2022); food insecurity (Macdonald, 2022); and a Canada-wide crisis of unaffordable housing (Macdonald & Tranjan, 2023). Surrounding all of this is widespread hateful rhetoric targeting marginalized groups, particularly those in the 2SLGBTQIA+ community (Moran, 2023). Meanwhile, access to mental health services has been reduced just when students need it most (Canadian Institute for Health Information, 2022). And now, to make matters worse, pandemic losses and the steady march of economic and social hardships have been exacerbated by the release of ChatGPT, leading to a backward shift towards traditional, ableist assessment methods.

GenAI Cheating Panic

Concerns about pandemic cheating have intensified with the release of ChatGPT and a rapidly growing range of artificial intelligence (AI) tools, often referred to as GenAI (generative artificial intelligence), and their integrations (Berdahl & Bens, 2023; Mollick, 2023). A wide range of reactions to GenAI text generators has appeared, from highly skeptical and concerned to excited and inspired, with stances everywhere in between. The creative potential and the chance for more equitable learning are exciting, but there are worrying dangers. Well before ChatGPT’s November release, many experts, including researchers at AI corporations, enumerated and elaborated on the potential harms of GenAI tools. Weidinger et al. (2021) classified numerous harms under the categories of discrimination, exclusion, and toxicity; information hazards (e.g., privacy leaks); misinformation; malicious uses; human-computer interaction harms (e.g., promotion of harmful stereotypes); and automation, access, and environmental harms. Crawford (2021) explained AI as an industry of extraction: extraction of human labour, of the earth, and of the data of human expression. Bender, Gebru, McMillan-Major, and Shmitchell (2021) detailed how large language models encode and reflect bias. A more widely known example of harmful practices was reported in Time in January 2023, when OpenAI, the company behind ChatGPT, was outed as having engaged in disturbingly exploitative labour practices, paying Kenyan labourers less than $2 per hour (Perrigo, 2023).

For most Canadian post-secondary institutions, the release of ChatGPT coincided with the end of a busy fall term. For many, it was the first full term back in person, which made the timing especially exhausting. Administrators, faculty, staff, and students were caught off-guard and unprepared (Nakano, 2023). With a sense of urgency that partly recalls the emergency remote teaching days of March 2020, anxieties around cheating, which had already escalated during the pandemic, have further spurred calls to return to traditional assessments, such as handwritten, in-person, timed, and invigilated examinations (Heid, 2022; Mollick, 2023). The question remains: How can writing centre professionals respond to the inevitable harms of a return to these exams?

My next post discusses the challenges of handwritten, in-person, timed, and invigilated examinations as a form of assessment, and explores how writing centres can best support and advocate for students despite a lack of control over curricular assessment choices. What are the effects of these exams’ general inefficacy as a form of assessment on disabled students, Indigenous students, racial and ethnic minorities, non-native English speakers, gender-diverse people, student caregivers, and mature students?

[1] “Take-home” is a misnomer as students were already home. In Writing Centre submissions, we noticed that instructors and students were calling these “take-home exams,” so I use that term in this article.  An alternative term is “unsupervised testing.”

[2] We saw this shift at the University of Saskatchewan Writing Centre, where there was a surge in requests for help with take-home exams in April 2020. The number of such requests has dropped dramatically since the return to in-person campus activity.


Alin, P., Arendt, A., & Gurell, S. (2022). Addressing cheating in virtual proctored examinations: Toward a framework of relevant mitigation strategies. Assessment & Evaluation in Higher Education, 48(3), 262-275. doi:10.1080/02602938.2022.2075317

Bank of Canada. (2023, June 21). Indicators of capacity and inflation pressures for Canada. Bank of Canada/ Banque du Canada.

Barden, O., Bê, A., Pritchard, E., & Waite, L. (2023). Disability and social inclusion: Lessons from the pandemic. Social Inclusion, 11(1), 1-4.

Bilen, E., & Matros, A. (2020). Online cheating amid COVID-19 [preprint]. SSRN Electronic Journal. doi:10.2139/ssrn.3691363

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623). Association for Computing Machinery.

Berdahl, L., & Bens, S. (2023, June 16). Academic integrity in the age of ChatGPT. University Affairs.

Canadian Federation of Students. (2022, June 1). Here’s how inflation is impacting students across Canada.

Canadian Institute for Health Information. (2022, December 8). More than half of young Canadians who sought mental health services said they weren’t easy to access.

Cassidy, C. (2023, January 10). Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays. The Guardian.

Chin, M. (2020, April 29). Exam anxiety: How remote test-proctoring is creeping students out. The Verge.

Coghlan, S., Miller, T., & Paterson, J. M. (2021). Good proctor or “Big Brother”? Ethics of online exam supervision technologies. Philosophy & Technology, 34(2), 389-410. doi:10.1007/s13347-021-00476-1

Cotton, D., Cotton, P. A., & Shipway, J. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education & Teaching International, 60(3), 304-313. doi:10.1080/14703297.2023.2190148

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Drapkin, A. (2023, June 25). How to detect ChatGPT plagiarism: Is it even possible?

Eaton, S. E., Stoesz, B. M., Crossman, K., Garwood, K., & McKenzie, A. (2023). Faculty perspectives of academic integrity during COVID-19: A mixed methods study of four Canadian universities. Canadian Journal of Higher Education, 52(3), 42–58.

Fowler, G. A. (2023, April 1). ChatGPT cheating detection. The Washington Post.

Haus, G., Pasquinelli, Y. B., Scaccia, D., & Scarabottolo, N. (2020). Online written exams during COVID-19 crisis. E-learning, 17(5), 619-632. doi:10.33965/el2020_202007l010

Heid, M. (2022, December 29). Here’s how teachers can foil ChatGPT: Handwritten essays. The Washington Post.

Heidt, A. (2023, January 24). ‘Arms race with automation’: Professors fret about AI-generated coursework. Nature.

Hoyle, J. (2023, May 30). Developing the future of assessment. University of Oxford – Staff Gateway.

Hubbard, J. (2023). The pedagogical dangers of AI detectors for the teaching of writing. Composition Studies Journal.

Laterza, V., & Romer, L. P. (2020, April 14). Coronavirus, herd immunity and the eugenics of the market. Al Jazeera.

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779.

Macdonald, D. (2022, September 20). Inflation in Canada has driven some people to adopt heartbreaking money-saving tactics. MTL Blog.

Macdonald, D. & Tranjan, R. (2023, July 18). Can’t afford the rent: Rental wages in Canada in 2022. Canadian Centre for Policy Alternatives.

McLeod, S. A. (2023, June 6). 5 reasons kids still need to learn handwriting (no, AI has not made it redundant). The Conversation.

McLoughlin, E. (2023, June 26). The AI assessment emergency.

McKenna, S. (2022). Neoliberalism’s conditioning effects on the university and the example of proctoring during COVID-19 and since. Journal of Critical Realism, 21(3), 317-334. doi:10.1080/14767430.2022.2100612

Mollick, E. (2023, July 1). ChatGPT and the Coming Homework Apocalypse. One Useful Thing.

Moran, P. (2023, June 19). 2SLGBTQ Canadians angry and anxious this Pride season, but determined to fight on. CBC News: The Current.

Nakano, E. (2023, June 22). ChatGPT made college final exams a free-for-all. Bloomberg.

Nelson, J. (2023, July 24). OpenAI quietly shuts down its AI detection tool. Decrypt.

Nieminen, J. H. (2022). Unveiling ableism and disablism in assessment: A critical analysis of disabled students’ experiences of assessment and assessment accommodations. Higher Education, 84(4), 769-786. doi:10.1007/s10734-022-00857-1

Noorbehbahani, F., Mohammadi, A., & Aminazadeh, M. (2022). A systematic review of research on cheating in online exams from 2010 to 2021. Education and Information Technologies, 27(6), 5737-5761. doi:10.1007/s10639-022-10927-7

Nowakowski, A. (2023). Same old new normal: The ableist fallacy of “post-pandemic” work. Social Inclusion, 11(1), 16-25. doi:10.17645/si.v11i1.5647

OpenAI. (2023). New AI classifier for indicating AI-written text.

Parnther, C., & Eaton, S. E. (2021). Issues and problems in educational surveillance and proctoring technologies. Canadian Perspectives on Academic Integrity, 4(2), 20–21.

Perrigo, B. (2023, January 18). OpenAI used Kenyan workers on less than $2 per hour. Time.

Peters, D. (2023). From ChatGPT bans to task forces, universities are rethinking their approach to academic misconduct. University Affairs.

Piper, J., & Wong, J. (2022, March 14). Rising tuition, student debt weigh heavily on post-secondary students. CBC News.

Postelnyak, M. (2023, May 23). Rising cost of living leads some high school graduates to forgo their dream universities. The Globe and Mail.

Ramming, C. H., & Mosier, R. D. (2018). Time limited exams: Student perceptions and comparison of their grades versus time in Engineering Mechanics: Statics. doi:10.18260/1-2–31144

Reed, M. (2023, June 2). Cheating traps, large classes, and the AI assessment emergency. Inside Higher Ed.

Saia, T., Nerlich, A. P., & Johnston, S. P. (2021). Why not the “new flexible”?: The argument for not returning to “normal” after COVID-19. Rehabilitation Counselors and Educators Journal, 11(1).

Swauger, S. (2020, August 7). Software that monitors students during tests perpetuates inequality and violates their privacy. MIT Technology Review.

Statistics Canada. (2023, July 10). Historical (real-time) releases of Consumer Price Index (CPI) statistics, measures of core inflation – Bank of Canada definitions.

Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv. doi:10.48550/arXiv.2212.09292

Tai, J., Ajjawi, R., Bearman, M., Boud, D., Dawson, P., & Jorre de St Jorre, T. (2023). Assessment for inclusion: Rethinking contemporary strategies in assessment design. Higher Education Research & Development, 42(2), 483-497. doi: 10.1080/07294360.2022.2057451

Volante, L. & Klinger, A. (2023, April 26). COVID-19 and the learning loss dilemma: The danger of catching up only to fall behind. EdCan Network.

Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., Isaac, W., Legassick, S., Irving, G., & Gabriel, I. (2021). Ethical and social risks of harm from Language Models. DeepMind. arXiv preprint arXiv:2112.04359.

Wong, J. (2022, November 12). Post-secondary transition classes aim to get students on track. CBC News.

Wong, M. (2023, July 26). America already has an AI underclass. The Atlantic.

Yong, E. (2022, February 16). The millions of people stuck in pandemic limbo. The Atlantic.

Zhou, N. (2020, July 1). CEO of exam monitoring software Proctorio apologises for posting students’ chat logs on Reddit. The Guardian.