Business and Management
HE institutions persistently seek to increase student engagement and satisfaction with assessment feedback, but with limited success. This study identifies the attributes of good feedback from the perspective of recipients. In a distinctive participatory research design, student participants were invited to bring actual examples of feedback that they perceived as either ‘good’ or ‘bad’ to 32 interviews with student researchers. Findings highlight the complex interdependency and contextual nature of key influences on students’ perspectives. The feedback artefact itself, its place in assessment and feedback design, relationships of the learner with peers and tutors, and students’ assessment literacy all affect students’ perspectives. We conclude that standardising the technical aspects of feedback, such as the feedback artefact or the timing or medium of its delivery, is insufficient: a broader consideration of all key domains of influence is needed to genuinely increase student engagement and satisfaction with feedback.
In a context of international concern about academic standards, the practice of external examining is widely admired for its role in defending standards. Yet a contradiction exists between this faith in examining and continuing concerns about standards. This article argues that external examining rests on assumptions about standards which are significantly open to challenge. Six assumptions relating to the conceptual context, the operation and the nature of examiners themselves are analysed, drawing on a review of the available evidence. The analysis challenges the notion of a consensus on standards and the potential to vest in individuals the ability to represent that consensus when judging the comparability of academic standards in a stable and appropriate way. The issues raised have relevance to the UK and to other national systems using external examiners or seeking to guarantee academic standards by, in some cases, adopting quality assurance approaches developed in the UK.
It is clear from the literature that feedback is potentially the most powerful part of the assessment cycle when it comes to improving further student learning. However, for some time there has been growing research evidence that much feedback practice does not fulfil this potential to influence future student learning because it fails in a host of different ways. This disjuncture between theory and practice has been increasingly highlighted by the UK National Student Survey results. This paper uses a model of the assessment process cycle to frame understandings drawn from the literature, and argues that the problem with much current practice resides largely in a failure to effectively engage students with feedback. The paper goes on to explore how best to engage students with assessment feedback, with evidenced examples of feedback strategies that have successfully overcome this problem.
Unreliability in marking is well documented, yet we lack studies that have investigated assessors’ detailed use of assessment criteria. This project used a form of Kelly’s Repertory Grid method to examine the characteristics that 24 experienced UK assessors notice in distinguishing between students’ performance in four contrasting subject disciplines: that is, their implicit assessment criteria. Variation in the choice, ranking and scoring of criteria was evident. Inspection of the individual construct scores in a sub-sample of academic historians revealed five factors in the use of criteria that contribute to marking inconsistency. The results imply that whilst more effective and social marking processes that encourage sharing of standards in institutions and disciplinary communities may help align standards, assessment decisions at this level are so complex, intuitive and tacit that variability is inevitable. The paper concludes that universities should be more honest with themselves and with students, and should actively help students to understand that applying assessment criteria is a complex judgement and that there is rarely an incontestable interpretation of their meaning.
Grade descriptors, marking guides and exemplars are generally accepted to have a positive impact in assisting students’ understanding of assessment task requirements and standards, but little is known about how students use such resources. This paper provides insight into the perceptions of first-year students of the usefulness of grade descriptors, marking criteria and annotated exemplars. Of the 119 students who provided their reflections on the resources, 87% found the resources to be useful. Students’ responses about the usefulness of the resources revealed two main standpoints: those (1) seeking precise guidance and (2) happy with ‘an idea’ of standards. A thematic analysis identified 10 subthemes which were related to these standpoints. There was a strong relationship between students seeking precise guidance and requests for even more exemplars, while those happy with only an ‘idea’ of standards felt the resources largely gave them what they needed and assisted their learning. We discuss some implications of these findings.
Within many higher education systems there is a search for means to increase levels of student satisfaction with assessment feedback. This article suggests that the search is under way in the wrong place by concentrating on feedback as a product rather than looking more widely to feedback as a long-term dialogic process in which all parties are engaged. A three-year study, focusing on engaging students with assessment feedback, is presented and analysed using an analytical model of stages of engagement. The analysis suggests that a more holistic, socially embedded conceptualisation of feedback and engagement is needed. This conceptualisation is likely to encourage tutors to support students in more productive ways, which enable students to use feedback to develop their learning, rather than respond mechanistically to the tutors’ ‘instruction’.
Assessment is currently in the spotlight for its poor ratings in student satisfaction surveys and ‘under-performance’ in quality reviews. Consequently, a variety of initiatives and projects are being undertaken aimed at improving assessment. However, many of the concepts and theories underpinning assessment practice are complex and interrelated, which can mean that relatively simple and apparently minor changes can have major, and often unintended, consequences. This paper was initially prepared to foreground an internal document providing diagnosis and recommendations for change to assessment strategy and policy in a post-1992 university. It draws on a wide body of literature and research studies to distil and discuss key issues which should inform assessment decisions. These key issues provide a framework to examine assessment policy and practice, and enable the alignment of assessment policy with the beliefs and values of an institution.
Constraints in resourcing and student dissatisfaction with assessment feedback mean that the effectiveness of our feedback practices has never been so important. Drawing on findings from a three-year study focused on student engagement with feedback, this paper reveals the limited extent to which effectiveness can be accurately measured and challenges many of the assumptions and beliefs about effectiveness of feedback practices. Difficulties relating to multiple purposes of feedback, its temporal nature and the capabilities of evaluators reveal that measuring effectiveness is fraught with difficulty. The paper argues that the learner is in the best position to judge the effectiveness of feedback, but may not always recognise the benefits it provides. Therefore, the pedagogic literacy of students is key to evaluation of feedback and feedback processes.
This article considers the methods and difficulties of establishing, sharing and applying assessment standards within module teams working in a business school. Against a background of increasing reliance on explicit knowledge to establish standards in the HE sector, the study looks at ways in which staff within module teams attempted to reach a common view of assessment standards through sharing tacit knowledge, in order to make consistent judgements about student work. In so doing, it identifies and questions the assumptions, myths and beliefs that bolster the culture around standard setting and marking/grading. The paper questions whether academic communities of practice provide an adequate framework to support the sharing of assessment standards. In particular, it argues that the scholarship of assessment would support the development of a specialist assessment and assessment-standards discourse within communities that could, in turn, support the sharing of assessment standards.