
Using Rubrics

Rubric Calibration/Norming
Norming (also called calibration) is the process by which a group of raters collectively decides how to use a rubric to evaluate student work consistently. Raters are usually faculty and staff, but can also include students or external supervisors who have a role in students’ learning of program-level outcomes. This rubric training process is usually overseen by an assessment coordinator or facilitator prepared to guide the group in discussion, preferably someone practiced in norming with a rubric or similar tool.

Norming for Assessment of Program-level Student Learning Outcomes 
Key to assessment of program-level outcomes is training raters to understand that this process is not the same as grading student work as part of a class. Raters examine student work products for evidence of learning specific to one or more program-level outcomes, helping the program understand how well it is meeting its goals for student learning. (Individual grades in a class may measure many dimensions of student learning, such as specific course learning outcomes, effort, improvement over time, participation, on-time submission, etc.)

Basic Guidelines for Norming/Calibration

  1. Raters should practice applying the rubric to several examples of student work. (Ideally, the facilitator will select examples of varying quality or content so that raters can apply the rubric to disparate samples.)
  2. It is important to discuss the initial scores and come to a consensus on how the group will apply ratings. For instance, if a single rater takes an alternate approach to ratings, it is important to surface this in the norming session and for the group to reach agreement on the process going forward.
  3. Sometimes two raters simply cannot agree on a rating for a given student work product. Encourage them to explain their rationale using direct evidence from the work product. (Each rating should be defensible with evidence from the work product, not based on a “feeling” about the score.) Occasionally, raters may split by one point on a rubric. If raters are consistently splitting, or splitting by more than one point, consider whether your raters need more norming and/or whether the rubric itself needs to be modified to better align with the program’s learning outcome(s).
  4. Faculty/staff discussions about the meaning and purpose of program-level outcomes can be deeply valuable, both for surfacing differences and, ultimately, for finding common ground that strengthens a program.
  5. While it is not always possible, having two raters independently score each student work product will increase the reliability of rubric scores. If scores differ by more than one point, either reconvene the raters to reconcile their scores or invite a third scorer to evaluate the artifact and then record the mean of the three scores. As noted above, if a large number of scores are split by more than one point, consider renorming and/or revisiting the robustness of the rubric.
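For programs that record scores in a spreadsheet or script, the reconciliation rule in item 5 can be sketched as a small function. This is an illustrative sketch, not part of the guidelines themselves; the function name `reconcile` and its parameters are hypothetical.

```python
def reconcile(score_a, score_b, third_score=None):
    """Combine two independent rubric scores per the guideline above.

    - If the two scores differ by at most one point, record their mean.
    - If they split by more than one point, a third rating is required
      (from reconvened raters or a third scorer); record the mean of
      all three scores.
    """
    if abs(score_a - score_b) <= 1:
        return (score_a + score_b) / 2
    if third_score is None:
        raise ValueError(
            "Scores split by more than one point: "
            "reconvene raters or obtain a third score."
        )
    return (score_a + score_b + third_score) / 3
```

For example, `reconcile(3, 4)` records 3.5, while a split such as `reconcile(2, 4, third_score=3)` records 3.0. Tracking how often the third-score branch is triggered is one simple way to notice when renorming may be needed.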

Educational Assessment staff are happy to address questions as well as help facilitate norming/calibration sessions.