Assessing student learning outcomes across science and mathematics curricula

  • Marcia Mentkowski, Educational Research and Evaluation, Alverno College, Milwaukee, USA
  • J Abromeit, Academic Affairs/Sociology, Alverno College, Milwaukee, USA
  • H Mernitz, Physical Sciences, Alverno College, Milwaukee, USA
  • K Talley, Assessment Center, Alverno College, Milwaukee, USA
  • C Knuteson, Nursing, Alverno College, Milwaukee, USA
  • W Rickards, Educational Research and Evaluation, Alverno College, Milwaukee, USA
  • I Kailhofer, Mathematics, Alverno College, Milwaukee, USA
  • J Haberman, Assessment Center, Alverno College, Milwaukee, USA
  • S Mente, Instructional Services, Alverno College, Milwaukee, USA


    Over the last 20 years, developing assessments for European and U.S. science and mathematics curricula has become increasingly complex (Ericsson, Charness, Feltovich, & Hoffman, 2006; Shavelson, 2010). As learning outcomes, constructs often defy definition because they appear holistic, disciplines teach different dimensions of them, and distinct world views underlie educators’ efforts at synthesis. For example, disciplinary educators are often most interested in opening students’ minds by developing their perspective-taking, whereas professional school educators show more interest in students’ use of best practices that meet or exceed professional standards (Rogers, Mentkowski, & Reisetter Hart, 2006).

    One problem is whether faculty-designed assessments across disciplines and professions are fair to students, judge what they claim to judge, and take into account that students need to learn to demonstrate competence whether or not they pass an assessment (Messick, 1989, 1994). The presenting author reports on an assessment designed by a council for student assessment (Alverno College Faculty, 1979/1994, 2000).

    Instructors who teach often serve as assessors and may evaluate competence across disciplines, which helps them capture the breadth and depth of constructs and essential role performances. This council therefore designed an assessment for faculty/staff assessors, who judge whether and how students can connect and integrate content and competencies across mathematics and science courses, expecting students to perform in out-of-class settings. Students are asked to identify and solve unfamiliar problems, because this may provide evidence that students can transfer learning outcomes across a curriculum and over time (Mentkowski & Sharkey, 2011).

    Transfer does not always happen. The faculty/staff design team listened to colleagues who taught in the professions and who had observed students failing to demonstrate their learning. A business professor observed: “Too many of our students avoid using quantitative evidence to make arguments, even when it is right in front of them.” The assessment therefore requires all students at the college midpoint to integrate scientific reasoning, quantitative literacy, analysis, and problem solving across science and mathematics courses, outside regular coursework, even when they have been successful on course assessments.

    During interactive training of staff assessors from across disciplines and professions, researchers recorded and categorized questions raised about the validity and reliability of faculty/staff members’ own judgments (Hammond, 1996) and about whether assessment policies and procedures were fair (Messick, 1989). Issues included achieving the purposes of out-of-class assessments, establishing the design team’s disciplinary and assessment expertise, and ensuring relationships between the content and competencies assessed and the courses students had completed. During assessor training, assessors established consensus on judgments in relation to criteria across disciplines and professions. Two council members independently judged a random sample of 40 performances, achieving inter-judge agreement of 95%; the council resolved remaining issues through action research before subsequent assessor trainings (Reason & McArdle, 2008).
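    For readers checking the figure, a minimal sketch of the agreement computation, assuming the council reported simple percent agreement (the source does not name the index used):

    $$\text{agreement} = \frac{\text{performances judged identically}}{\text{performances sampled}} = \frac{38}{40} = 0.95$$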

    Seventy-four percent of students pass, and departments use the assessment results for curriculum evaluation. Humanities faculty/staff coached the council to create workshops for students who did not succeed and to provide for re-assessment: only four of 457 students have not passed the re-assessment so far. A history professor commented, “Assessor training challenged staff comfort levels in quantitative literacy. We question how to better prepare our students to analyze and present statistical information.” Bransford, Brown, and Cocking (2000) would question whether the workshop enables students to adapt and transfer learning outcomes to the re-assessment, a question that calls for further research.