A word to the wise: End-of-quarter course evaluations are more important than you think.
In fact, they’re probably a lot more dangerous than you think.
Course evaluations have become a facet of student culture at UCLA. Some students wait eagerly for the evaluation forms to open up ninth week to critique their professors. Most others, though, find little reason to give feedback to their professors and teaching assistants.
But these evaluations help determine hiring and promotion decisions at UCLA. This is in spite of the fact that several studies have shown course evaluations can illustrate more about students’ biases than the content and quality of a course.
Evaluations are written into the University of California’s decision-making process when it comes to academic employees. But given the lack of standardization and the heavy presence of bias in conventional course evaluation methods, it’s high time the University changed its policies to use evaluations exclusively for feedback purposes, not for personnel matters.
The UC’s Academic Personnel Manual specifies that all department chairs are responsible for submitting evidence demonstrating an instructor’s teaching effectiveness at different levels of the University. This evidence can, and normally does, include student evaluations for all courses since a candidate’s last review and the percentage of students completing evaluations, among other factors.
It’s difficult to ascertain how big an impact student voices have on the development of classes, as there is no consistent way evaluations are used. Adrienne Lavine, the faculty director of UCLA’s Office of Instructional Development, said departments are free to use evaluations as they wish, most commonly passing information to instructors and using data to determine who should be hired and promoted.
But the latter function can be concerning, seeing as studies have shown that some students base their opinions more on an instructor’s identity than their knowledge and delivery. A 2014 study found that when online instructors disguised a woman as a man and a man as a woman, the female identity received lower performance reviews. A 2015 study looking at student reviews on the website RateMyProfessors.com found that instructors with Asian last names were rated lower on “clarity” and “helpfulness” than instructors with Western names.
And in January, former UCLA psychology professor David Jentsch tweeted about an evaluation that complained not about the content of his course or his teaching style, but that “It’s disgusting that UCLA allows gay people to teach our courses.”
This demonstrates that students don’t see instructors in a vacuum. Identity politics, not to mention other nonacademic factors, can play a role in how they evaluate a course.
That’s not to say evaluations are inherently a broken system. But when study after study shows students are likely to be biased in evaluations, it’s clear that the UC should not be lending too much credence to this information.
Furthermore, other avenues to inform hiring decisions, such as peer reviews from other faculty members, exist. While these are still subjective accounts that aren’t immune to bias, their more intimate, long-term and professional nature makes them less of a target for racial or gender bias.
Lavine said UCLA is aware of the potential harm evaluations have due to bias, explaining that a faculty committee is already piloting a new evaluation method that would shift focus from a student’s opinion of an instructor toward a more direct evaluation of their teaching style.
This development is promising. But that doesn’t mean the current evaluation system can’t be changed now to protect faculty and TAs from its pitfalls.
Evaluations can be helpful in shaping programs, such as online courses and freshman cluster series. But they are most useful when they remain between the instructor and the student.
And UCLA should keep it that way, not allow some student’s homophobic comment to determine a professor’s employment options.