
Why do my SET ratings differ from my typical ratings on the instrument we used in the past?

The ratings will differ because the items (i.e., the questions) in the new SET differ from those in the former instrument.

What can I do to improve my ratings on SET items?

Visit the DRT (Digital Resources for Teaching) website managed by SCU’s Faculty Development Program; attend a CAFE hosted by the Faculty Collaborative for Teaching Innovation; consult the Teaching Resources Page on the Faculty Development site; or contact Eileen Elrod, Associate Vice Provost for Faculty Development, for a consultation.

I can see my SET results. Who else can? What do they see?

Consistent with current practice, faculty members will receive a report for each class they teach, and a copy will be sent to the dean’s office. Consistent with current practice in the College of Arts and Sciences, Leavey School of Business, School of Engineering, and the School of Education and Counseling Psychology, Basic SET Reports will be available to the university community online. The SET report posted online will not include responses to the open‐ended item.

Consistent with current practice, Law School SET results will not be posted online.

How do I access student responses to the open‐ended question?

Student comments in response to the open‐ended item (“Is there anything else you would like to add about this instructor or course?”) will appear at the end of the report PDF sent to the instructor.

How and by whom are SET results used?

Faculty can use SET results for self‐reflection on their teaching, course planning and redesign, assignment and learning activity design, and overall professional development.  

Faculty include SET results in their Activities Reports and other documents related to regular performance evaluation, reappointment, mid‐probationary review, tenure and promotion.

Chairs and committees use SET results along with other sources of evidence (for example, syllabi, course materials, sample assignments, activities and exams, statements of teaching philosophy, student work, peer reports on classroom visits, and the like) in evaluating faculty teaching performance.

SET results, in the context of other indicators of teaching effectiveness, support the evaluation process by helping faculty, chairs, and committees identify teaching strengths and areas needing attention.  

Do some departments consider multiple sources of evidence when evaluating teaching performance, while others rely mainly on the SET?

Yes. Some departments and programs regularly include multiple sources of evidence when evaluating faculty teaching. In addition to the numeric SET, these have included narrative questions designed by a department, program or instructor; peer observation letters; representative syllabi; significant assignments, class activities and exams; and personal statements or descriptions of teaching philosophy.

I prefer that my colleagues’ evaluation of my teaching draw from multiple sources of evidence, rather than exclusively from the numeric averages generated by the SET. How do I make that happen?

Consult with your Chair, Program Director, or Dean to be sure you have a clear understanding of current practices for annual or multi-year evaluation in your department or school.

As a general rule, if a faculty member submits relevant, substantive evidence of teaching performance (including materials beyond the SET), it will be included in the evaluation. See the DRT resource on teaching portfolios for guidelines and advice about portfolio approaches to the documentation of teaching performance.

For evaluations for promotion to Associate Professor, Professor, or Senior Lecturer, see the University’s guidelines to candidates for tenure and promotion, and non-tenure-track appointment policies, which emphasize multiple sources of evidence in the evaluation of teaching effectiveness.

What’s the advantage of multiple sources of evidence in the evaluation of teaching?

As the report from the 2012 SCU Task Force on the Evaluation of Teaching pointed out, peer review and multiple sources of evidence lead to the most accurate and informed evaluation of teaching, just as in the evaluation of scholarship.

As a chair, I need to interpret the ratings of my faculty on the new SET. Should I compare the new SET ratings of my faculty to the ratings on the old instrument?

No. Since the items are different, the ratings will be different.

What are the University-wide average ratings on the new SET?

In fall 2014, students submitted SET responses for 1,491 class sections. Average ratings were calculated for each of the nine rated items, roughly described as follows:

- Communicated expectations clearly
- Organized course effectively
- Challenged me to think rigorously
- Helped me reach a clear understanding
- Managed class time to advance learning
- Respectful learning environment
- Useful feedback
- Instructor availability and willingness to help
- Excellent teacher

On average, students in upper division and graduate courses reported spending more time on coursework and rated their courses as somewhat more challenging than students in lower division classes.

Where can I find departmental averages?

Check with your Chair or Program Director, who can access this information through the dean’s office. At the conclusion of each term, shortly after reports of SET results are sent to instructors throughout the university, a set of reports based on each school/college as a whole and a set of reports based on subject areas are sent to the appropriate dean’s office.

What do departmental SET reports look like?

The structure of school/college and subject reports is identical to that of reports sent to individual instructors. For school/college reports, item statistics (average, standard deviation, median, etc.) are based on all students who responded to the SET for a class section taught within the school/college. For subject reports, item statistics are based on all students who responded to the SET for class sections from the subject area (e.g., MECH or SPAN).
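The pooling described above can be sketched in a few lines. This is a hypothetical illustration, not the actual SET reporting software: the section names and ratings below are invented, and the point is simply that school/college and subject statistics are computed over all individual student responses across sections, not by averaging per-section averages.

```python
# Hypothetical sketch of subject-level SET item statistics.
# Data and section names are illustrative only.
import statistics

# Example ratings for one item, by class section in a subject area.
section_ratings = {
    "MECH 10": [5, 4, 5, 3],
    "MECH 102": [4, 5, 4, 4, 5],
}

# Pool every student response across sections before computing statistics.
all_responses = [r for ratings in section_ratings.values() for r in ratings]

subject_avg = statistics.mean(all_responses)
subject_median = statistics.median(all_responses)
subject_sd = statistics.stdev(all_responses)
```

Because the statistics are based on pooled responses, larger sections carry proportionally more weight in the subject-level figures.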

My department/program uses its own paper forms. How do the two evaluations work together?

Some departments have used and will continue to use narrative or other supplemental forms for student evaluation of teaching. Often these are completed in class. Consistent with current practice, those departments will determine how to integrate multiple forms of student feedback.

What is the response rate for the SET instrument?

For the 1,491 class sections evaluated by students at the end of fall 2014, the average response rate for the SET instrument was 81%, which is quite high.

What is the response rate for the open-ended item, Item 3?

In fall 2014, the average response rate for Item 3 was approximately 28%. This tells us that, on average, 8.1 in 10 students in a class section responded to the SET instrument, but only 2.8 in 10 responded to the open‐ended item.
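The "students per 10" figures above are just the percentage response rates divided by ten; a minimal sketch of that arithmetic, using the fall 2014 numbers from this FAQ:

```python
# Converting an average percentage response rate to "students per 10".
def per_ten(rate_percent):
    """Return the average number of responding students per 10 enrolled."""
    return rate_percent / 10

set_rate = per_ten(81)        # SET instrument: 8.1 in 10 students
open_ended_rate = per_ten(28)  # open-ended Item 3: 2.8 in 10 students
```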

How can I improve the response rate of my students?

Some faculty encourage students to respond to the survey by telling the class how valuable the feedback will be. Others use a few minutes during the last day of class, asking students to use laptops or mobile devices to complete the instrument before they leave the classroom.

Where can I see a more complete report on the fall 2014 study?

Find the complete SET report here.