Experiences with faculty-wide course evaluation using a Course Experience Questionnaire

  • Mattias Alveteg, Chemical Engineering, Lund University, Lund, Sweden

    The purpose of the CEQ is to provide a basis for problematizing course design, teacher attitudes, etc. (Ramsden 2003). Survey results need to be interpreted within their context and together with other evidence (Kember et al 2004). It might, for example, be hypothesized that a well-designed education programme causes a development in the students’ “vision of knowledge and their expectations of teachers” (Perry 1985). Such a development is likely to change how a student responds to course evaluations.

    At the Faculty of Engineering, Lund University, an adapted Course Experience Questionnaire (CEQ) has been used as a faculty-wide course evaluation tool since 2003. This CEQ comprises 28 questions divided into 6 scales, two free-text questions and two separate questions. By the summer of 2011, more than 137 000 questionnaires had been filled in, and these have been analysed in this study. Crucial to the design of the system are course-level meetings between student representatives, teachers and programme directors, where CEQ results are analysed and where each party can bring additional information. In this study, however, the focus is on analysing CEQ data.

    What is clear at our faculty is that there are clear trends in the data when it is analysed at, for example, programme level and faculty level. As an example, three of the four questions in the “Clear goals” scale show a clear decreasing trend at faculty level over the first three years of study. Somewhat surprisingly, the fourth “Clear goals” question, which mentions the teacher, does not show a decreasing trend. One hypothesis is that this is a result of a progression in complexity from the first to the third year of study, combined with student development as they go through their education. To test that hypothesis, however, other kinds of data would be needed.
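
    To illustrate the kind of aggregation behind such a trend, a minimal sketch is given below. It assumes a flat table of responses with hypothetical column names (study_year, clear_goals_q1 … clear_goals_q4) and a hypothetical file name; the actual CEQ database layout may well differ.

    ```python
    # Illustrative sketch only: the file name and column names are
    # hypothetical placeholders, not the actual CEQ database schema.
    import pandas as pd

    # One row per filled-in questionnaire.
    responses = pd.read_csv("ceq_responses.csv")  # hypothetical file

    clear_goals_items = ["clear_goals_q1", "clear_goals_q2",
                         "clear_goals_q3", "clear_goals_q4"]

    # Mean score per item and year of study; a decreasing trend over
    # years 1-3 would show up as falling means for q1-q3 but not q4.
    trend = (responses
             .groupby("study_year")[clear_goals_items]
             .mean()
             .loc[1:3])
    print(trend)
    ```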

    Attempts have also been made to identify “consistently bad courses” as outliers in the data. At course level there seems to be substantial between-year variability among the 20 courses at the faculty that get the lowest CEQ scores on different questions and scales. Based on the data and information available to me, it seems that course outliers in the CEQ database are rather frequently associated with mishaps and/or attitude problems. Among these courses, examples can be found of teachers who express themselves in terms such as “The students feel that … but they are wrong” and teachers who, in their first lecture, tell the students “I didn’t want to teach this course, but I was forced to”. Looking for negative outliers might thus be a useful tool at faculty level for finding problems that need to be looked into.
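
    To make the between-year variability of such outliers concrete, the sketch below shows one way the screening could be set up: take the 20 lowest-scoring courses each year and measure how much that set overlaps between consecutive years. The file and column names (course_code, year, overall_score) are hypothetical, and the score column stands in for any CEQ question or scale; this is not the procedure actually used at the faculty.

    ```python
    # Illustrative sketch only: file and column names are hypothetical
    # placeholders, not the actual CEQ database schema.
    import pandas as pd

    # Hypothetical table with one row per course and year.
    scores = pd.read_csv("ceq_course_scores.csv")

    def bottom_courses(df, year, n=20):
        """Return the set of course codes with the n lowest scores in a given year."""
        year_scores = df[df["year"] == year]
        return set(year_scores.nsmallest(n, "overall_score")["course_code"])

    # Substantial between-year variability shows up as a small overlap
    # between consecutive years' bottom-20 sets.
    for year in sorted(scores["year"].unique())[:-1]:
        overlap = bottom_courses(scores, year) & bottom_courses(scores, year + 1)
        print(year, year + 1, len(overlap))
    ```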

    To find good examples to spread, I would, however, argue based on Kember et al (2004) that positive outliers might not be the only place to look, as some dissonance (compare Prosser et al 2003) is likely needed to support students in developing their views of what learning and knowledge are.