
Indirect Measures

Indirect measures of student learning ask students to reflect on or self-assess their learning rather than demonstrate it. These can include measures of students’ knowledge, skills, attitudes, and learning experiences. Indirect measures allow us to infer that learning has taken place, but they do not provide the same tangible, visible, and measurable evidence as a direct assessment of student learning.

In some cases, though, indirect measures may be the only source of data available. Outcomes such as attitudes or beliefs are difficult to measure using direct methods, and at times indirect measures may be entirely sufficient for assessing some program outcomes. For instance, students’ sense of belonging, attitudes, or values cannot be measured directly; instead, we rely on some form of self-report (hence, indirect) data.

To increase our confidence in the data collected through these indirect methods, it is important to ask clearly and concretely about the student’s experience, often using multiple measures. Indirect measures include surveys, focus groups, and other activities that gather impressions or opinions about the program and/or its learning outcomes. Educational Assessment staff are happy to consult on instrument selection and design, interpretation, and sharing the results of indirect assessment.

Faculty and staff often ask whether one method of assessment is better than the other for their program. Educational Assessment advises using direct assessments when possible because they provide more compelling evidence of student learning: the evaluator looks directly at student work or performance to determine what students have learned, rather than extrapolating what they may have learned. However, indirect assessments play an important role as well, even alongside direct assessments.

As St. Olaf College notes in its assessment resources, it can be useful to combine the two methods. For example, it can be valuable to “survey students about their perceived skill development as well as assess their demonstration of that skill to check for alignment or misalignment between what students think they know or can do and their actual performance.” Furthermore, they observe, “It’s also important to note that particular methods do not always strictly align with one type of assessment versus the other. For instance, a survey is typically considered an indirect assessment method, but it can also function as a sort of ‘quiz’ if some questions ask students to demonstrate knowledge (e.g., “Name three resources for academic support on campus”) rather than rate their understanding (e.g., “How familiar are you with academic support resources on campus?”). As another example, a reflection paper might function as a direct assessment method if reflection itself is a skill articulated in the course or program learning outcomes.”

 

Direct Assessment vs. Indirect Assessment: Examples of Evidence

Direct Evidence of Student Learning

  • Capstone course products; course-embedded content; signature assignments
  • Portfolios
  • Presentations
  • Performances (music, theater, art shows, poster sessions, dance, recitals, etc.)
  • Embedded questions on exams
  • Locally developed exams
  • Standardized tests
  • Scores and pass rates from licensure or certificate exams, or other national tests
  • Scores on locally designed tests in courses, qualifying exams, or comprehensive exams, accompanied by descriptions of what the tests assess
  • Score gains on pre/post tests
  • Employer or field supervisor ratings of student skills
  • Systematic observations of student behavior (using a rubric to evaluate presentations, group discussions, etc.)

Indirect Evidence of Student Learning

  • Student surveys on perceptions of learning, campus climate, or satisfaction
  • Focus groups
  • Exit interviews
  • Alumni surveys; employer surveys
  • Curriculum, assignment, and syllabus analysis
  • Case studies; informal observations of student activities, student services, or student involvement

 

Making Them Effective

  1. Will the type of evidence gathered help the program understand what it can do to improve?
Consider whether there are existing measures that can be used as indirect measures of a student learning outcome. These might include exit or senior surveys, alumni surveys, end-of-quarter evaluations that include items about perceptions of learning for specific student learning outcomes, or possibly surveys given at the university level, such as the NSSE (National Survey of Student Engagement). Given the program’s student learning outcome, can the assessment be based entirely on these indirect measures, or can they be used to supplement a direct assessment of student learning?
  3. If new measures need to be developed, consider the merits of surveys, focus groups, or interviews. Which approach will provide the most meaningful and manageable results?
  4. Can the same indirect measure provide information on more than one student learning outcome?
  5. In addition to indirect measures for students, can this approach be used productively with faculty? Are there other materials (e.g., syllabi, assignment prompts, etc.) that can provide indirect support for questions the program wishes to answer? For example, if the program is assessing students' written and/or oral communication, will an analysis of syllabi help inform the assessment by showing what faculty are asking students to do?

Once your indirect measures have been developed, an important next step is to determine your Targets and Benchmarks.