Trying to capture the full story: Making a post-tutoring session survey

Volume 2, No. 1 Summer 2021
by Emma Sylvester

Emma Sylvester is Coordinator, Writing Centre and Academic Communications, Saint Mary’s University.

Introduction

As Writing Centre (WC) practitioners, how do we know that students are actually benefitting from our work? Plenty of research has shown that WC use improves students’ grades (e.g., Driscoll, 2015; Thompson, 2006; Trosset et al., 2019; Dansereau et al., 2020), but how do I know that translates to my own unique institution or to the session I had with a tutee this morning? As a tutor, the immediate feedback of seeing a student’s “lightbulb moment” or hearing their expressions of gratitude gives me some indication that I’m doing something right. Unfortunately, these experiences aren’t reliable or comprehensive indicators of the benefits of the WC, and they don’t tell me about the student’s full emotional experience in session or their long-term learning. Further, in the post-COVID era, rife with asynchronous sessions and cameras left off, these moments are potentially fewer and farther between.

The survey question database discussed in this article is available here: https://docs.google.com/spreadsheets/d/1Zt4bRIYDrOHFxtgO8iRzuJAZ4DyUp5YLbsA4oAWTrY0/edit

Post-session surveys are widely used across WCs not only to learn how students value writing tutorials, but also to inform program development, assess and refine tutor practice, collect data for study and publication, and even justify the existence of the centres themselves (Bromley et al., 2013). The need to collect, analyze, and apply data on students’ in-session experience is obvious and inherent in the ongoing development of WC practice, but taking a rigorous approach to this process is often forgotten amidst other, seemingly more important (and, let’s just say it, more interesting) work.

The aims of our post-session survey are to get a sense of students’ overall valuation of tutorials; to assess their motivation, affect, cognition, and self-efficacy throughout their engagement with WC tutoring (Linnenbrink, 2006); and to identify relationships between students’ experience of the WC and other factors (such as accessibility, frequency of use, and type of tutoring appointment). When we analyzed our survey in light of these aims, we discovered not only that it didn’t meet these needs, but also that it was limited by unclear phrasing and questions that didn’t serve any of our objectives. It was obvious we needed an update. Student and session metadata is readily available through our WC’s booking platform, WCOnline, which offers a good starting point and a platform for creating a concise survey to meet our objectives.

Since user surveys are highly important but often disregarded, we wanted not only to improve our own survey but also to develop a base from which other WCs could assess and build their own surveys to match their unique services and objectives. While each WC needs to refine its survey to fit its specific programs and clientele, there is evident overlap across institutions in the objectives of WC surveys and in student motivation and response to services (Bromley et al., 2013). We thus endeavoured to develop and apply a comprehensive, collaborative survey ‘database’ from which to build.

Methods

To collect survey questions, we sent out a request for writing centre exit surveys via the CWCA/ACCR and IWCA listservs. We received responses from Canadian, American, and UK writing centres.[1] We also searched for open-access WC surveys online. From this, we compiled 10 surveys (including our own). Each of the 114 questions was assigned to one of 11 categories indicating the objective(s) of the question:

  1. Accessibility: accessibility of tutoring services or technology
  2. Affect: emotional experience of student within or after session
  3. Assessment of tutor
  4. Cognition: student understanding of concepts discussed or introduced in session
  5. Marketing: questions relating to marketing of the writing centre
  6. Metadata: session metadata (such as specific tutor met, number of previous appointments, etc.)
  7. Motivation: understanding student motivation or objective of session
  8. Objectives met: whether or not the student felt that their objectives were met
  9. Open feedback: short answer questions giving students the opportunity to provide unstructured feedback on their session or writing centre services
  10. Self-efficacy: indication of self-efficacy, feasibility of independently applying concepts from session to future writing tasks
  11. Value of visit: overall value of visit (including likelihood of returning)

We then assessed questions by category and selected unique questions that met the objectives of our survey (Driscoll & Brizee, 2010). This process resulted in 17 questions, which we further reduced by removing questions with redundant objectives and by rephrasing questions to condense them and meet our specific objectives.
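
For WCs wanting to repeat this kind of triage on the shared spreadsheet, the short sketch below (not part of our original process, and written in Python purely as an illustration) shows one way to tally an exported copy of the question database by category before trimming for redundancy. The file name and the column headers ('Question', 'Category') are assumptions and may not match the actual sheet.

    import csv
    from collections import Counter, defaultdict

    # Assumed CSV export of the shared Google Sheet; the real sheet's file
    # name and column headers may differ.
    counts = Counter()
    questions_by_category = defaultdict(list)

    with open("wc_exit_survey_questions.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            category = row["Category"].strip()
            counts[category] += 1
            questions_by_category[category].append(row["Question"].strip())

    # A quick count per category flags over-represented objectives, which is
    # where redundant questions are most likely to be cut or condensed.
    for category, n in counts.most_common():
        print(f"{category}: {n} question(s)")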

Question phrasing was further amended for clarity and to avoid bias (Driscoll & Brizee, 2010; Jefferson, 2011). One major drawback of our original survey was that answer types varied and alternated from question to question, potentially disorienting survey takers. By rephrasing questions to correspond to a Likert scale response, we reduced the potential for this confusion. While it is generally advised to avoid double-barreled questions (Driscoll & Brizee, 2010), we occasionally included conjunctions to clarify and specify meaning, rather than to expand or cloud interpretation.

We also aimed to use neutral (not leading) phrasing, though the consistent use of positively oriented questions introduces a potential for bias of its own. This approach, however, is somewhat necessary (Roszkowski & Soven, 2010). Paired with a Likert scale, a negatively worded item such as “I did not feel supported and/or respected” forces respondents to work through a double negative and takes more time to process than “I felt supported and respected.” The problem is exacerbated when a survey mixes positively and negatively phrased questions, and it is especially likely to cause misunderstanding for English language learners, a prominent demographic of WC tutees. Lastly, and most importantly, negatively phrased questions don’t necessarily yield responses that meet the question’s objectives. Using the above example, a response of ‘strongly disagree’ to “I did not feel supported and/or respected” does not tell us that the student felt supported and respected, only that they did not feel the opposite, which doesn’t allow for conclusive interpretation of the data.

The order of questions was decided based on three factors: chronology, increasing complexity, and clustering of question types. Chronology meant that the order of events was reflected in the order of questions (motivation to attend the session, accessibility of registration, assessment of in-session affect, and assessment of session value). This aligned well with the principles of increasing complexity and clustering questions by answer type: simple, fact-based, multiple-choice questions appear early in the questionnaire, followed by Likert scale questions, with open-ended (short answer) feedback closing the survey.

In a final step, we consulted a survey expert from our Psychology Department, Dr. Kevin Kelloway, who provided feedback on specific survey questions, the survey overall, and the distribution platform, further improving the survey. This process led to a final 12-question survey.

Next Steps

Evidence suggests that surveying students immediately following a session tends to bias responses towards positive feedback, with student satisfaction declining over time post-session (Bell, 2000). A comprehensive assessment of WCs would therefore likely be best acquired by supplementing these post-session feedback surveys with semesterly or annual surveys, though these less frequent surveys suffer from response-rate bias compared to their more frequent counterparts (Bell, 2000). Although this extends beyond our current survey development, we have included a ‘survey frequency’ column in the ‘database’ so that future additions can cover various survey types.

Building on this idea, our approach to developing this database was directed by our survey objectives, and there are likely numerous other factors that could be considered when creating a survey. The database could be expanded not only longitudinally, as additional contributors add their own surveys, but also laterally. Moving forward, we plan to maintain the database with room for expansion to other factors and potential uses as they arise.

An important part of this process is to provide an open-source database (admittedly, a spreadsheet) of exit survey questions for the writing centre community. Our goal is not only to develop our own survey, but also to provide a resource for other WCs aiming to improve and streamline their survey writing process. This database may serve as a widely usable starting point for survey development, and we encourage other WCs to use and build on this resource.

While we are satisfied with the survey development thus far, we are keenly aware that post-session surveys, for all our efforts to assess key aspects of learning and self-efficacy in addition to student satisfaction, cannot capture the full story of how students benefit from WC services (Thompson, 2006). With services centralized online, there is a wealth of data available in the post-session reports completed by tutors and in the samples of student writing submitted across multiple sessions, which could be examined for evidence of improvement within the work itself, likely a far more reliable measure of student learning than self-assessment. With due consideration for student privacy, these sources could also be used for more extensive evaluation of WCs.


References

Bell, J. H. (2000). When hard questions are asked: Evaluating writing centers. Writing Center Journal, 21(1), 7-28.

Bromley, P., Northway, K., & Schonberg, E. (2013). How important is the local, really? A cross-institutional quantitative assessment of frequently asked questions in writing center exit surveys. Writing Center Journal, 33(1), 13-37.

Dansereau, D., Carmichael, L. E., & Hotson, B. (2020). Building first-year science writing skills with an embedded writing instruction program. Journal of College Science Teaching, 49(3), 36-45.

Driscoll, D. L. (2015). Building connections and transferring knowledge: The benefits of a peer tutoring course beyond the writing center. Writing Center Journal, 35(1), 153-181.

Driscoll, D. L., & Brizee, A. (2010). Creating good interview and survey questions. OWL writing resources: Research and citation. Purdue Online Writing Lab. https://owl.purdue.edu/owl/research_and_citation/conducting_research/conducting_primary_research/interview_and_survey_questions.html

Jefferson, J. (2011). Tutoring survey and interview questions: A tangible lesson in audience. The Writing Lab Newsletter, 36, 3-4.

Linnenbrink, E. A. (2006). Emotion research in education: Theoretical and methodological perspectives on the integration of affect, motivation, and cognition. Educational Psychology Review, 18, 307-314.

Roszkowski, M. J., & Soven, M. (2010). Shifting gears: Consequences of including two negatively worded items in the middle of a positively worded questionnaire. Assessment & Evaluation in Higher Education, 35(1), 113-130.

Thompson, I. (2006). Writing center assessment: Why and a little how. Writing Center Journal, 26(1), 33-61.

Trosset, C., Evertz, K., & Fitzpatrick, R. (2019). Learning from writing center assessment: Regular use can mitigate students’ challenges. The Learning Assistance Review, 24(2), 29-51.


[1] Thank you to Coventry University (UK), Douglas College, Pasadena City College (US), Queen’s University (Kingston), Simon Fraser University, University of Alberta, and Westminster College (US) for responding to our query.