How do you win the research game? Hide the results you don’t like!

Head of School Professor Simon Lilley and Director of Research Professor Martin Parker discuss the problems of comparing apples, pears and potatoes in the ranking of business and management research.

We live in a world of rankings nowadays. There are league tables for schools, washing machines and doctors' surgeries. In a complicated world, it's not surprising that customers, policy makers and citizens would want to turn to some sort of list which appears to give them the answers they need. Rankings simplify the world for us, and for this we are thankful.

There are, of course, lots of ways of criticising rankings. Head teachers might suggest that comparing a diverse inner-city school with one in the leafy suburbs is unfair, just as washing machine manufacturers might debate the importance of spin speeds and temperature levels. But the most basic idea of a ranking is that you are comparing the same things, in the same ways, in a manner which all players may not like but will nevertheless respect, at least in principle.

And this is what makes the ranking of research so odd. The recent Research Excellence Framework (REF), a review of research capacity and results at UK universities, reported just before Christmas. You would have thought that research about research, of all things, would have a robust methodology, but this is very far from the case. In fact, each institution was allowed to exclude all those researchers who it didn't believe would gain it a high ranking. This meant that some institutions submitted very high proportions of their staff, whilst others submitted very low ones. Imagine if a school submitted data only about the pupils who got 'A's, or a hospital could choose not to report its death rates, or a local council could mention all the emails it had sent while ignoring all the bins it didn't collect. Then you get the basic idea.

In the Business and Management panel, submission rates varied from 95% of staff at London Business School and Imperial College, down to less than 10% in many cases, and less than 5% in a few. Absurdly, this means that an institution which deems the vast majority of its staff not to be credible researchers could, in principle, perform remarkably well in a research assessment exercise, as indeed was the case. Cass Business School, for example, which submitted 50% of its staff, finished about twenty places above the Open University, which submitted 87%. The Said Business School at Oxford University excluded 50% of its staff, whilst the Northampton Business School excluded 5%: they finished at opposite ends of the league table. We, for our part, submitted 85% of our staff yet finished below our neighbours, De Montfort University, which submitted less than a third of theirs. Not only is this a crazy way to compare institutions, it is also damaging to those academics who are left out of the REF in order to bump their institutions up the rankings.

This perverse situation has rightly generated a great deal of discussion about the problems with a straight comparison. In other tables, which employ alternative methodologies taking into account the percentage of staff submitted, we finished 14th out of over a hundred schools. Over the coming weeks Leicester, like everyone else, will pick the list that flatters it most, and this is hardly an objective or reliable state of affairs. The sheer scope for game-playing only goes to show how deeply flawed the exercise's present methodology is. How can any prospective student or funder make meaningful decisions based on league tables, given how the rankings are constructed? We aren't the first to suggest that such exercises must be replaced, as a matter of urgency, with more logical – or at least less insane – methodologies. Nor will we be the last.

Originally published at http://staffblogs.le.ac.uk/management/
