Evelyn Brown, Graham Gibbs, Chris Glover and Sally Jordan
For the diverse students that the Open University and Sheffield Hallam University attract and teach, assessment and feedback play a crucial role in retention. This paper builds on a conceptual framework presented at the 2003 ISL Symposium (Gibbs, 2003) concerning the ways in which assessment supports, or does not support, student learning. Diagnosis of potential learning problems with assessment on a range of science courses in two universities has involved three stages: exploratory open-ended interviews (Simpson, 2002), the use of the Assessment Experience Questionnaire with nearly 1,500 students (Gibbs et al, 2003), and three parallel studies following up specific issues in more depth. The symposium will explain the theoretical and empirical background (Gibbs) and then focus in detail on the three parallel studies (Brown, Jordan and Glover).
The symposium will summarise the conceptual framework used, the research tools developed, the research designs, and the findings.
Evelyn Brown, Open University
A three-year study at two universities in the UK has been exploring the effectiveness of the formative elements of assessment in science, i.e. how well these elements support students’ learning in relation to the influences identified by Gibbs and Simpson (2003a). This paper focuses on the timeliness, quantity and quality of the feedback received by students on their written assignments at one of these universities, and the subsequent use they make of this feedback.
Initial data were collected through an Assessment Experience Questionnaire (AEQ), which identifies the extent to which assessment supports student learning (Gibbs and Simpson, 2003b). The AEQ was administered to a total of 1050 students across 7 physics, astronomy, chemistry and bioscience modules, and 492 responses were received and analysed. Subsequently a further 112 students across 6 of the modules were interviewed by telephone to explore further how they had responded to the feedback received on a specific current assignment, which types of feedback they had found most useful and what they intended to do with the feedback. The types of feedback provided on the assignments by the students’ tutors were also analysed, using a coding system (Brown, Gibbs and Glover, 2003).
The telephone interviews revealed some discrepancies between students’ general perceptions of the timeliness and value of the feedback they received and the use to which they put it (elicited through the AEQ), and the reality of the feedback on specific assignments (revealed by the telephone interviews). Analyses of the assessment tasks, of the types of feedback provided, and of the marking guidance given to tutors help to shed light on these discrepancies.
The findings of this evaluation are being used to change the ways in which feedback is provided on the modules investigated, in order to increase its effectiveness as part of the student learning process.
Sally Jordan, Open University
A three-year study at two universities in the UK has been exploring the effectiveness of the formative elements of assessment in science, i.e. how well these elements support students’ learning in relation to the influences identified by Gibbs and Simpson (2003a). This paper focuses on the impact of online formative and summative assessment in an introductory ‘Maths for Science’ course at one of these universities. The assessments were designed with the specific aim of providing students with feedback on their answers that is instantaneous, detailed and targeted in response to the answer given by the student. In addition, because students have three attempts at each question, with increasing advice given after each attempt, the feedback serves a teaching function, even in the summative ‘end of course assessment’.
Analysis of student responses to the AEQ indicated that many students appear to interpret the word ‘feedback’ as being linked with their overall performance, rather than with the teaching feedback provided to increase their understanding of mathematics.
The End of Course Assessment is available for a limited time at the end of each presentation of the course, but a purely formative and optional ‘Practice Assessment’ is available throughout the course. Analysis of electronic records has revealed the importance of use of the Practice Assessment to student progress. Students who attempt the Practice Assessment are considerably more likely to complete the course. Analysis of student responses to the AEQ indicated poor scores for distribution of effort throughout the course; this may be linked to the fact that many students only used the Practice Assessment shortly before attempting the End of Course Assessment, rather than throughout the course. Changes have been made to the assessment structure, principally to enable students to access the summative assessment for longer and to encourage students, by proactive telephone contact, to engage with both assessments throughout their study of the course. The expectation is that these changes will improve students’ distribution of effort and increase course completion rates. Initial findings of the impact of this intervention will be reported.
Chris Glover, Sheffield Hallam University
A three-year research study at two UK universities, entitled “Improving the effectiveness of formative assessment in Science Teaching”, has been examining the potential for improving student learning by making changes to the way formative assessment and feedback are presented.
Data were collected using questionnaires, focus groups, and individual semi-structured interviews. SPSS was used for the analysis of quantitative (questionnaire) data, and NVivo for qualitative (interview) data. Initial analysis and findings from the data collected at both universities outline similarities and differences in the perceptions of the two institutions (see Gibbs 2002; Gibbs, Simpson and Macdonald 2003). This paper focuses on Biosciences and Physical Sciences staff and students at one of them, a post-1992 university and the sixth largest in the UK.
Interviews with both staff and students at this university revealed a mismatch between perceptions of feedback and the use to which it was put. Supported by external examiners' reports and subject reviews, tutors argued that they were providing high quality written feedback, supplemented by a wealth of oral feedback given in lectures, laboratory and workshop sessions, and more informally through ad hoc or casual contact within the institution. They believed, however, that much of this was not received and acted upon by students. What was clear in the research was that, contrary to tutors' beliefs and evidence from the associated literature (see Hounsell, 1987; Lea and Street, 1998; Ding, 1998; Wotjas, 1998), these students argued strongly that they did attend to, and act on, feedback. However, face-to-face oral feedback was not perceived as feedback by the majority of students. In general, students only counted something as feedback if it was written down. Consequently oral feedback was not necessarily attended to and acted upon, and the impact it had on student learning was uncertain.
In order to explore these issues further, a questionnaire identifying many possible sources of feedback and its impact was developed and administered to a cohort of year two and year four Science students. The paper presents an analysis of these students’ perceptions of the levels and relative effectiveness of the many different sources of feedback. Specific areas where feedback helped students least are identified, providing insights into possible changes in the nature of provision of feedback to students.
The research identifies the potential for improving student learning and provides the basis of a framework which will help to develop assessment methods which are not only transferable to conventional HE contexts, but are already being embedded into the University's Learning, Teaching and Assessment strategy.