What Good Are Student Evaluations?

I repudiate allegiance to any club whose constitution mandates blasting undergraduate student evaluations indiscriminately. I care ardently about my students and their benefit from our work together. I respect them as learners, and I know that there are ways of improving my teaching based on information from them. Someone will probably ignore the preceding avowals, but I thought I should put them on the table at the outset, because although I’ve been troubled by student evaluations (as practised at my home institutions over years of teaching), they never seemed to add up to an interesting problem: the sort of thing with enough jagged angles and reflections and refractions and beneficial and baneful consequences that I needed to think more about them.
 
This morning, though, I may have crossed the threshold. This morning it occurred to me to wonder why we put great stock in critical assessments from students whose aggregate capacity for forming critical assessments often lingers in the range of second-class honours (B to C range). If you were making important decisions, would you seek out the aggregate wisdom of a pool of informants whose judgment shakes out as about average? Or to take this from a different angle: how influential is general student input in forming the policy of educational administrators? There’s obviously a student-appeal angle to certain administrative decisions, often having to do with facilities and student services (‘New dorms!’ ‘Better food!’), but is that a deliberative interest or a calculated pitch for popularity? How much does a senior management group weight student feedback when it considers, for instance, promoting/retaining/firing a vice-president of academic affairs? These are genuine questions — I simply don’t know. My general sense, however, is that senior admin think that they actually know something about running a university that undergraduates don’t, and that any feedback from students must be weighed against the very different perspective that expertise and experience warrant.
 
So, on what basis would one regard the tabulated results of student course evaluations with more gravity than one would a student poll of top ten English novels, or favourite musical compositions? The strongest case I can think of would rest on students being, in effect, experts on learning and teaching, since they’ve been immersed in teaching/learning activities for years before they get to university. But they’ve been immersed in video and film, and that doesn’t make them (again, in the aggregate, the mode in which student evaluations most commonly come to us) reliable film critics. They give a valuable perspective on what sorts of film undergraduates enjoy, and of course some are insightful critical observers of cinema — but that set amounts to only a small proportion of the number who actually voice their preferences about films.
 
One can’t top-slice the undergrads with the best marks and listen only to them; we have a great deal to learn from students who didn’t thrive in particular classes. Nor can one exclude the high-flyers. We know that ‘attractiveness’ makes a significant difference in evaluations, but we can’t simply lower the evaluations of handsome lecturers and raise the evaluations of the homely. Some staff bake cookies at evaluation time. Some staff count hostile evaluations as a badge of honour and court criticism (then discount it). Some staff elicit more negative evaluations by cueing students to emphasise ‘ways the course could be improved’. The environmental variables, the differences among evaluation forms, the timing of evaluations, the (once again) questionable reliability of student informants, and the relatively small sample sizes all tell against student evaluations serving as a sound meter for ascertaining the quality of teaching.
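To put a rough number on the sample-size point alone, here’s a minimal sketch (the rating scale, noise level, and class size are all assumptions for illustration, not data from any real evaluation system). It simulates ratings for two lecturers of identical underlying quality and counts how often seminar-sized classes put their mean ratings half a point apart by chance alone:

    import random

    TRUE_QUALITY = 4.0   # assumed underlying quality, identical for both lecturers (1-5 scale)
    NOISE_SD = 1.0       # assumed spread of individual students' ratings
    CLASS_SIZE = 20      # a seminar-sized class
    TRIALS = 10_000

    def class_mean(n):
        # Each rating is true quality plus idiosyncratic noise, clamped to the 1-5 scale.
        ratings = [min(5.0, max(1.0, random.gauss(TRUE_QUALITY, NOISE_SD)))
                   for _ in range(n)]
        return sum(ratings) / n

    # How often do two lecturers of identical quality end up half a point apart?
    gaps = sum(abs(class_mean(CLASS_SIZE) - class_mean(CLASS_SIZE)) >= 0.5
               for _ in range(TRIALS))
    print(f"Spurious gaps of 0.5 or more: {gaps / TRIALS:.1%}")

On those made-up numbers, identical lecturers come out half a point apart in somewhere around one comparison in fifteen, from sampling noise and nothing else; and that’s before the cookies, the cueing, and the attractiveness effects get their say.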
 
How could we elicit better information? I know there’s a lot of research out there on student evaluations, some of which must shed useful light on the problem of improving the quality and usefulness of feedback. No doubt we could learn a great deal if, for example, master teachers conducted interviews with each student in a class, but (ouch) we don’t care enough to go to that expense. We could put more stock in observation, but the presence of an observer changes the classroom dynamic, doesn’t it?
 
Students and their families spend vast sums on their university courses (and will be spending much more under the Con-Dems’ privatisation of tuition). Lecturers are under increasing pressure to do things other than ‘teach well’. The usual evaluation data, especially when standardised across the whole spectrum of university offerings, provide only scant help in assessing how a particular course actually went. I reckon, this morning, that the results of the course (the essays and exams) might themselves be a better ‘evaluation’, except that they’re too information-rich in some ways (who wants to re-read a whole stack of papers or exams to get a sense of how the students did?) and insufficiently revelatory in others. Our Staff-Student semester reviews are very good; maybe there’s a way to build on that.
 
Teaching is too important, and the persons of undergraduate students are too important, for what goes on in a class to be assessed by tick-boxes and ‘hot or not’ polls. Our teachers need more useful (non-threatening) feedback. Our students need to be assured that their experience matters, without being elevated to an authority that their discernment doesn’t warrant. The educational system isn’t so much broken as it is a cut-price makeshift, sustained by partial measures and affordable approximations of what would be best for all concerned, and ill-suited to produce optimal learning or teaching. The role and characteristics of most student evaluation forms symptomatise the incoherence of a budgetary, policy-oriented, ideological impasse of conflicting interests. Sadly, undergrads and their families lose the most.
 
Or maybe I just didn’t get enough sleep last night.
 

3 thoughts on “What Good Are Student Evaluations?”

  1. Great post. It would be nice to think that there is a perfect system, but life is much messier than that, with way more grey than black or white. So I think you look for ways to make it better, while understanding that in a world of many competing viewpoints (students, teachers, admins, parents, society) the best you can do is a balance with a fairly large bubble of acceptability.

  2. I spend a lot of time thinking about student evaluations, mostly out of self-interest because as a Teaching Prof my career progress depends critically on them in all their unreliability. So it’s good for me to be reminded that the current system also doesn’t work particularly well for the students themselves. I can make my classes enjoyable enough to earn high evaluations, but it’s better for the students and for the world if I give their *learning* more weight than their *enjoyment*. The current system makes that counter to my interest and theirs.
