Noah Carl’s piece today for the Daily Sceptic describes, accurately, the widespread phenomenon of grade inflation at universities – both in the U.S. and the U.K. He reasons, drawing on the work of economist Stuart Rojstaczer, that this is attributable to a “consumer-based approach to teaching” in which academic pay and promotion are linked to student-based course evaluations. As Carl puts it:
Basically: if they’re too stingy with their grades, they’ll receive lousy evaluations, and in addition to the stress of dealing with irate students, they’ll be less likely to advance in their careers.
Now, I don’t know about the situation at American universities. But as regards British ones, this explanation is dead wrong – for two simple reasons, which are themselves highly instructive about the state of higher education in the U.K.
The first reason is that academic promotions – certainly from the middle of the university league tables upwards – are almost exclusively based, not on teaching quality, but on research quality. Your average Russell Group Vice-Chancellor couldn’t give two hoots about what happens in the classroom; come rain or shine, they are guaranteed to get all the bums on seats that they need in order to keep the show on the road in terms of student fees. VC pay is linked to performance, and performance generally means improvements in league table rankings, which bring prestige and (usually) an increase in the number of high-paying international students. What improves league table positions? Well, it’s down to a range of factors, but the only one that directly relates to individual staff performance is the quality of their research. (Something called ‘teaching quality’ is also often included, but this, importantly, does not actually mean teaching quality as a lay person would understand it – more on that below.) Naturally, this makes research quality the only metric that really matters in determining who gets promoted and who doesn’t – although some universities do have promotion pathways for staff who just want to be good teachers, mainly to keep those staff members happy.
The idea, then, that staff are concerned about student evaluations is laughable. The only effort that most research-active academics at U.K. universities put into their teaching consists of finding ways to avoid having to spend time in the classroom – and this is almost entirely the product of how they are incentivised.
The second reason the Rojstaczer/Carl hypothesis is wrong is that, for reasons related to those discussed above, student evaluations generally take place well before students actually get their grades. Students typically evaluate course content in the last session of the semester, and get their exam results months later. Similarly, they fill in the National Student Survey (their single opportunity to assess their university experience in a neutral forum) roughly in the middle of their final year of study – i.e., months before they get their final degree classifications. The idea that staff are worried about student evaluations when they mark exams simply misunderstands the process – at least as far as common practice in the U.K. goes.