  • A qualitative analysis of staff-student differences in understandings of assessment criteria

    Katherine Harrington, London Metropolitan University
    James Elander, London Metropolitan University
    Lin Norton, Liverpool Hope University College
    Peter Reddy, Aston University
    Edd Pitt, Liverpool Hope University College

    Research paper

    Theme: supporting learners

    Writing in an academically literate way is central to the culture of Higher Education, and assessment is often the point at which students are required to demonstrate their cultural competence in this area. Difficulties with writing thus represent significant obstacles to achievement, and helping students succeed involves providing learning opportunities that support the development of an academic understanding of and approach to writing.

    Engaging in one of the most common forms of writing used for assessment purposes, the undergraduate essay, is something that many students find difficult, and they often struggle to know exactly what is required. One important reason for this is that students and tutors often interpret the language of assessment criteria differently (Higgins et al, 2002; Merry et al, 1998, 2000; Lea and Street, 1998). Making assessment criteria explicit to students can help (Norton, 1990; O’Donovan et al, 2000; Price and Rust, 1999); however, as Gibbs and Simpson (2003) have argued with respect to effective feedback, tutor explanations will be interpreted in the light of students’ expectations and understandings, which can be less sophisticated or appropriate than those of their tutors, and this leaves considerable scope for misunderstanding and incomprehension.

    The purpose of the study reported here was to identify mismatches in students’ and tutors’ understandings of the meanings of common assessment criteria used for undergraduate essays, and some of the reasons for these differences, in order to inform teaching interventions designed to help students improve their essay writing, such as those to be developed by the London Metropolitan and Liverpool Hope CETL in Scientific Literacy.

    We conducted a series of semi-structured interviews with psychology tutors at a post-1992 university and semi-structured focus groups with students enrolled on the psychology degree programme at the same institution. Participants were selected using purposive sampling to represent tutors across a wide range of teaching and marking experience and students across all levels of the degree course. Open-format questions were used, and tutors and students were asked to describe their understandings of seven common assessment criteria (Elander et al, forthcoming). A thematic analysis of the transcripts was undertaken in relation to each criterion.

    We found that many of the mismatches in understanding could be attributed to students’ adopting a surface approach to the task of essay writing, whilst their tutors expected a deep approach. However, we also found that, with respect to certain criteria such as “developing an argument”, significant differences in understanding existed even where there was evidence of students’ adopting a deep approach to learning. Reasons for this kind of mismatch can be elucidated with the concept of the student ‘voice’ in academic writing, as developed in the work of Read, Francis and Robson (2001). We conclude that helping students adopt an academically literate approach to writing is best facilitated not only by helping students understand how staff interpret the terms used in assessment criteria, but also through the development of staff sensitivity to and awareness of the specific ways students may be likely to (mis)understand these criteria.

    * This study is part of a larger HEFCE-funded FDTL4 consortium project, Assessment Plus, which is developing materials and methods to support student learning using assessment criteria. For details, see http://site.assessmentplus.com/


    • Branthwaite, A, Trueman, M and Hartley, J (1980), Writing essays: the actions and strategies of students, in Hartley, J (ed), The Psychology of Written Communication: Selected Readings, London: Kogan Page.
    • Elander, J, Harrington, K, Norton, L, Robinson, H, and Reddy, P (forthcoming), Complex skills and academic writing: a review of evidence about the types of learning required to meet core assessment criteria, Assessment and Evaluation in Higher Education.
    • Gibbs, G and Simpson, C (2003), Does your assessment support your students’ learning? http://artsonline2.tki.org.nz/documents/GrahamGibbAssessmentLearning.pdf (accessed 21 April 2003)
    • Higgins, R, Hartley, P and Skelton, A (2002), The conscientious consumer: reconsidering the role of assessment feedback in student learning, Studies in Higher Education, 27, 1, 53-64.
    • Lea, MR and Street, B (1998), Student writing in Higher Education: an academic literacies approach, Studies in Higher Education, 23, 2, 157-172.
    • Merry, S, Orsmond, P and Reiling, K (1998), Biology students’ and tutors’ understanding of ‘a good essay’, in Rust, C (ed), Improving Student Learning: Improving Students as Learners, Oxford: The Oxford Centre for Staff and Learning Development.
    • Merry, S, Orsmond, P and Reiling, K (2000), Biological essays: how do students use feedback?, in Rust, C (ed), Improving student learning: Improving student learning through the disciplines, Oxford: The Oxford Centre for Staff and Learning Development.
    • Norton, L (1990), Essay writing: what really counts?, Higher Education, 20, 4, 411-442.
    • O’Donovan, B, Price, M and Rust, C (2000), The student experience of criterion-referenced assessment through the use of a common criteria assessment grid, Innovations in Education and Teaching International, 38, 1, 74-85.
    • Price, M and Rust, C (1999), The experience of introducing a common criteria assessment grid across an academic department, Quality in Higher Education, 5, 2, 133-144.
    • Read, B, Francis, B and Robson, J (2001), ‘Playing safe’: undergraduate essay writing and the presentation of the student ‘voice’, British Journal of Sociology of Education, 22, 3, 387-399.
    • Scouller, K (1998), The influence of assessment method on students’ learning approaches: multiple choice question examination versus assignment essay, Higher Education, 35, 453-472.
    • Tynjala, P (1998), Traditional studying for examination versus constructivist learning tasks: do learning outcomes differ?, Studies in Higher Education, 23, 2, 173-189.