#195 | 33 min 58 sec
Contenders cum laude: Universities competing in the global rankings game

International higher education expert Professor Simon Marginson discusses the increasingly influential phenomenon of global ranking of universities, and what it means for students, governments, researchers, and the business of running institutions of higher learning. Presented by Eric van Bemmel.

"Research rankings don't fully capture the spirit of scholarship, the creativity of the most important work, a sense of space and time for new innovations to emerge - those sorts of things which are so crucial." -- Prof. Simon Marginson




Prof Simon Marginson

Simon Marginson is Professor of Higher Education at the University of Melbourne, where he works in the Centre for the Study of Higher Education. He specialises in higher education policy and in global and international aspects of higher education, including university rankings. He is one of the editors of the international journal Higher Education, and the author or editor of seven books on global aspects of higher education published in 2010 and 2011, including International Student Security (written with three colleagues) and Higher Education in the Asia-Pacific (edited with two colleagues from Southeast Asia). Four of his books have been translated and published in China.

Publications

Centre for the Study of Higher Education, University of Melbourne

Credits

Presenter: Eric van Bemmel
Producers: Kelvin Param, Eric van Bemmel
Audio Engineer: Gavin Nebauer
Voiceover: Nerissa Hannink
Series Creators: Eric van Bemmel and Kelvin Param


ERIC VAN BEMMEL

I'm Eric van Bemmel. Thanks for joining us. The business of ranking universities around the world has become, of late, very conspicuous in the higher education landscape. Companies now compete to create, in effect, league tables of universities from all corners of the globe and the universities themselves can be relied on to trumpet their standings in their promotional and advertising campaigns. Naturally, prospective students and their parents are taking notice when choosing where to apply; governments, in deciding where to direct higher education dollars, are paying attention as well. But is the world university rankings game a good thing? Can rankings really capture what universities are doing or have to offer and what is their impact over time likely to be for higher education as a whole?

To help us with these questions we're joined in the studio by Simon Marginson, Professor of Higher Education in the Centre for the Study of Higher Education at the University of Melbourne. Simon researches and writes extensively on the globalisation of higher education, and he also serves on the editorial board of Times Higher Education, one of the major rankers of universities, as well as on the advisory committee of the Academic Ranking of World Universities, another of the best known names in the rankings industry. And full disclosure: Simon and I are both employees of the University of Melbourne, and the University of Melbourne sees itself as very much a contender in the world rankings.
Simon, thanks for joining us on Up Close.


SIMON MARGINSON

My pleasure.


ERIC VAN BEMMEL

Simon, we'll have a look in some detail at some of the big players in this world uni rankings industry, but the notion of rankings isn't new. It's existed for some time, particularly at a national level, with a number of companies scrutinising their own higher education systems. But these global or world rankings are recent, in fact less than a decade old; we're speaking in April 2012. Why weren't they around 20 or 25 years ago?


SIMON MARGINSON

I don't think there was the same interest in cross country comparisons as there is now. There are a number of reasons for this, but the underlying one, I think, is that in policy circles and economic policy circles there's a view that knowledge-intensive production, and innovations in knowledge-intensive production, are the key to the economic competitiveness of nations and of firms. So research has become economically sexy in a way that it wasn't in previous years, and comparisons between countries in terms of research performance, and in terms of the universities that produce research, have become increasingly important.


ERIC VAN BEMMEL

Is that, in a word, the "globalisation" of higher education?


SIMON MARGINSON

Yes. Well, it's an aspect of globalisation. When you're talking about world-level comparison between countries, between institutions, between systems of research and education, you're talking globalisation. Most of our frames of reference were, until recently, national ones. We did compare countries, particularly in terms of their military strength, when we thought about international affairs, and we did compare economic growth rates. But now I think we are comparing a lot more than those things. We're comparing which cities are the most liveable or the best to visit as a tourist, we're comparing particular industries across a whole range of areas, and we're comparing education and research.


ERIC VAN BEMMEL

Broadly speaking, I imagine there are, for these rankings, a number of stakeholders -- the universities themselves being, of course, the primary ones. How has the business of running a university changed since these world rankings came about?


SIMON MARGINSON

I think for a vice chancellor or rector or university president, depending on how the CEO of a university is described, lifting the institution's performance in the ranking is the most important single indicator. That's true of research intensive universities and most serious universities see themselves as in the research game. It's not necessarily true of universities whose only mission is good quality teaching to their local communities or technical and further education institutions, but it is true of research universities.


ERIC VAN BEMMEL

Is it ever the case that the world rankings have too distorted an impact, the tail wagging the dog, so to speak?


SIMON MARGINSON

Well, I think that ranking has got out of control in terms of its effects; it looms very large in our world, and underneath that looming are some problems which we need to confront. By their nature, rankings can only cover a small portion of the full range of activities of complex, large-scale social institutions like universities. Even research rankings don't fully capture the spirit of scholarship, the creativity of the most important work, a sense of space and time for new innovations to emerge – those sorts of things which are so crucial.

They tend to measure volume of output – sometimes volume of output at high levels, which is what we call the qualitative element, but really that's a volume measure as well – and they don't necessarily capture the spirit of creativity or the applications of research. Rankings don't capture much about teaching and learning, and they don't tell us much about the best places to study, but they have got great discursive power, a tremendous symbolic impact. Everyone wants to know which universities are on top.


ERIC VAN BEMMEL

So for the consumer it's a good thing?


SIMON MARGINSON

Well, yes and no. If your concern is the value of your degree, its sort of firepower in the professional labour markets, then the higher the position of your university, the more value your degree has – and that usually means the higher the position of a university in research rankings, because they're the most credible and widely used. In a strange kind of way, the research performance of universities in a quantitative sense has come to set the framework for valuing whole institutions, including their educational functions, through the market, through the function of branding, because research performance, and the rankings level that flows from it, determines the value of the brand.
So you've got a situation where, in a simple take, rankings tell you quite a lot about where the land lies in higher education: they tell you who's on top and they tell you which degrees are most valuable in a narrow sense. But they don't necessarily tell you where you'll get the most formative experience, where you'll learn the most, where you'll grow the most, where you'll have the best access to resources; what they do tell you about is the firepower of your degree.


ERIC VAN BEMMEL

Governments as well seem to be affected by these rankings. In France in 2007, for example, their universities didn't do very well in some of the major rankings. There was a national debate about higher education in that country and even a new law was introduced ensuring greater freedoms for universities. So it seems that government policy is affected as well.


SIMON MARGINSON

Yes. That's a good example. Germany has been through something of the same process as well and established a program which they call the Excellence Initiative, which is feeding additional resources to a select group of strong universities to try and push their positions up in the global league tables. I think the idea in both Germany and France is to improve national capacity in research, and rankings are seen as the measure, if you like, the ongoing performance measure that tells you whether you are improving. There is a question about whether rankings do accurately pick up rates of improvement; I think there's a long lag there, and as I said, rankings are partial and only cover certain things.

Be that as it may, about half the governments in the world are focusing on rankings as an indicator and are driving policy, especially on investment in R&D – in research and development – in relation to rankings levels. The other half are not doing so. They're sitting back and they're maybe driving their institutions to perform better in other ways but they're not officially giving a nod to global rankings as the indicator. Now, the Australian Government is in that category. The Australian Government doesn’t talk about world class universities or having a top group of universities in the way that the Chinese Government, the French Government and the German Government do.

The British Government also tends to find it difficult to talk about top universities and genuflects to a more egalitarian view but I think that British policy, unlike Australian policy, has surreptitiously boosted the position of the top eight or 10 universities partly to maintain the credibility of the British system in world rankings.


ERIC VAN BEMMEL

Is it fair to say that rankings have really been a part of this discussion?


SIMON MARGINSON

Yes, but in the case of the UK and Australia, rankings are a bad word, whereas in the case of Germany, France and China, rankings are a good word. So it depends where you are. But my sense is that what rankings have done is make everything more competitive; they have intensified, if you like, the international arms race in the innovation aspect of world higher education policy, so that everyone is trying to lift their competitive position. Whether or not they're doing that in terms of formal indicators acknowledging the rankings, they're aware of the rankings. All the research we have shows that policy makers know about rankings and, as I said, about half the systems in the world have made them explicit performance indicators.


ERIC VAN BEMMEL

I'm Eric van Bemmel and on Up Close this episode we're speaking with global higher education expert, Professor Simon Marginson, about the increasingly influential phenomenon of world university rankings, how they're tabulated and how seriously we should take them. Up Close comes to you from the University of Melbourne, Australia.

Now, Simon, I think it's probably worth picking apart some of the big rankings. One of them is the Academic Ranking of World Universities, established at Shanghai Jiao Tong University in China in 2003. It was created to gauge the academic and research gap between the world's top universities and China's own. It's often called the Shanghai Ranking, and it's now issued every year by the ShanghaiRanking Consultancy. I know that you were on the advisory board for that. Can you tell us exactly how they go about their business?


SIMON MARGINSON

Shanghai pioneered what you might call the objective approach to ranking. All of their data are on the public record. They use comprehensive data sets that include all universities in all countries, and they work off recognised databases. There's not much scope for gaming the Shanghai Ranking. Individual universities can only advance their position in relation to one of the indicators, which constitutes 10 per cent of the ranking: the measure of performance on a per staff member basis, a per capita basis.


ERIC VAN BEMMEL

What does performance here mean actually?


SIMON MARGINSON

Performance, which is the other 90 per cent of the Shanghai Ranking, consists of: the number of Nobel prizes that alumni of the university have won; the number of Nobel prizes held by current members of the academic staff (the faculty, in the American sense); the number of articles published in the key journals Science and Nature each year; the total number of articles in the Science Citation Index and Social Science Citation Index, which is essentially your volume of scientific output; and the number of high-citation researchers you have, people who are in the top 250 in terms of recognition of their work by others.
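
To make the arithmetic concrete, here is a minimal sketch of how a composite score of this kind might be assembled. The indicator labels and weights follow the published ARWU scheme as commonly reported (only the 10 per cent per capita figure is stated above), and the input scores are invented for illustration.

```python
# A minimal sketch of a Shanghai-style (ARWU) composite score.
# Weights follow the commonly reported ARWU scheme; the input
# scores below are invented for illustration only.

# Each indicator is scored 0-100, with 100 going to the best-performing
# university on that indicator, then combined with fixed weights.
ARWU_WEIGHTS = {
    "alumni_nobel":   0.10,  # alumni winning Nobel Prizes / Fields Medals
    "staff_nobel":    0.20,  # current staff holding those awards
    "highly_cited":   0.20,  # highly cited researchers on staff
    "nature_science": 0.20,  # papers in Nature and Science
    "indexed_papers": 0.20,  # papers in the citation indexes (volume)
    "per_capita":     0.10,  # the above, per academic staff member
}

def arwu_score(indicator_scores: dict[str, float]) -> float:
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(ARWU_WEIGHTS[name] * score
               for name, score in indicator_scores.items())

# A hypothetical university, scored against the field leader per indicator.
example = {
    "alumni_nobel": 40.0, "staff_nobel": 25.0, "highly_cited": 60.0,
    "nature_science": 55.0, "indexed_papers": 80.0, "per_capita": 50.0,
}
print(f"Composite ARWU-style score: {arwu_score(example):.1f}")  # 53.0
```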


ERIC VAN BEMMEL

Can we just pick apart these citations? For listeners who aren't familiar, what exactly are they and why are they important?


SIMON MARGINSON

Well, there are two key indicators of research performance. They can be jigged in all kinds of ways, and you can look at bits of them rather than the whole of them, but there are essentially two. One is the volume of scientific papers: the number of papers produced in recognised journals across the whole range of science – physical science, life science, engineering, agriculture, all the applications of science – and also in selected social science areas, mainly those that use mathematical calculation at their base, like economics and quantitative social science and demography, but not the humanities, not the other social sciences, and not some of the professional fields like nursing or education.

So, essentially, the science-based and science-like fields as expressed in the major world journals. That's volume of science. Sometimes rankings focus on the top one per cent of journals, the most desirable to be in, the most highly regarded; mostly, though, these counts just take the whole journal set. There are two major collections of those journal paper numbers, by Elsevier and by Thomson ISI. One is called Web of Science – that's the Thomson ISI one – and the other is called Scopus. So they're the two very large scale collections of the volume of science.

The other thing that matters is citation. Citation is when a paper is published and then recognised by other scientists, who mention that paper in their own papers. You look at the references at the back of a paper and see what papers have been listed there: those are citations. The more citations a paper has, the more it's been recognised by other scholars and researchers – and, hypothetically, the better it must be.


ERIC VAN BEMMEL

They seem to have impact?


SIMON MARGINSON

The more important it is. Yes, it seems to have academic impact in the sense of being part of the knowledge set that people take as significant and important in their own work. Certainly very good science, very important science, achieves higher citation. What's interesting though is some of the most important breakthrough papers are not initially recognised with higher citation numbers. It may sometimes be even five or 10 years before a major piece of work, which is path breaking and changes everyone's thinking, really gets going in the literature. So one of the anomalies of citations is they don't recognise that. So if you're taking a cite count on the last five years, you might have missed some of the most important work which will be cited in the next five years.

But leaving that problem aside, citation is a reasonable way, I think, to assess academic impact. So you've got publication numbers and you've got citation numbers. One of the most important indicators of quality comes from those who do the counts: the two companies that run the major data collections I mentioned before, Thomson ISI and Elsevier, take the top one per cent or the top 10 per cent most cited articles and look at where those came from, and that tells you where the sexiest science is. Going to the most sought after journals – the journals that are read the most and regarded as most influential – tells you quite a lot about where the most important work is, and going to the most cited papers tells you the same.
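
As a rough illustration of the two measures just described, here is a minimal sketch of computing mean citations per paper and the share of a university's papers falling in the world's top 10 per cent most cited. All citation counts are invented; real collections such as Web of Science or Scopus operate at vastly larger scale and normalise by field and publication year.

```python
# A sketch of the two citation-based quality measures discussed here:
# mean citations per paper, and the share of a university's papers that
# fall in the world's top 10 per cent most cited. Data are invented.

def citation_stats(uni_citations: list[int], world_citations: list[int]):
    """Return (mean citations per paper, share of papers in world top 10%)."""
    # Citation count that approximately marks the world's top-10% threshold.
    cutoff = sorted(world_citations, reverse=True)[len(world_citations) // 10]
    mean_cites = sum(uni_citations) / len(uni_citations)
    top10_share = sum(c >= cutoff for c in uni_citations) / len(uni_citations)
    return mean_cites, top10_share

# Invented citation counts for one university's papers and the world set.
world = [0, 1, 1, 2, 3, 3, 5, 8, 14, 40] * 100  # heavily skewed, as real data are
uni = [2, 5, 9, 15, 22, 41, 3, 0, 7, 12]
mean_c, share = citation_stats(uni, world)
print(f"Mean citations/paper: {mean_c:.1f}; top-10% share: {share:.0%}")
```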


ERIC VAN BEMMEL

I've read that the Shanghai Rankings are criticised a bit because there is a bias toward the natural sciences over the social sciences and humanities, and also that the science journals that they look at are English language only. Is that correct?


SIMON MARGINSON

All true. The social sciences included in the Shanghai Ranking are those that are quantitative in form, more science-like – economics and psychology and quantitative sociology and demography – but not the more humanistic social sciences, like a lot of what's done in political science or history (which might be a humanity or might be a social science, depending on who you're talking to), and the humanities aren't there. There are a number of reasons for this. Humanities tend to be more nationally specific, more focused on national culture, and often publication is in non-English languages. Humanities also spread across a huge range of journals: there's not a clear set of definitive major journals in all disciplines, as there is in, say, physics or even psychology or clinical medicine, where you get a clear idea of what the most important journals are, the ones that everyone is reading.
That's not the case in the humanities, and it's not the case in some of the social sciences. Of course there are the creative arts as well, which never lend themselves very well to these sorts of research publication counts; they don't really fit the template. Where people are producing works of art, how do you fit that into a ranking, how do you compare them across borders? So there is, if you like, an exclusion bias in relation to all those disciplines, and what tends to happen is that if a country or a university is focused on lifting its rankings performance, and it's very tightly focused on that, it may pour all its resources into the areas that appear in the rankings and start bleeding the activities in areas that don't.

This is a serious problem in terms of maintaining a broad range of disciplines and I think everyone agrees that the liberal arts are an important part of research university culture and their contribution to society, but rankings don't recognise the liberal arts. So there is a tendency at present to move resources away from liberal arts, and unless universities are strong willed enough to shift those resources back in, then that can actually weaken the overall output of universities.


ERIC VAN BEMMEL

Let's look briefly at a few of the other rankings and their different methodologies. One of the other major ones is the QS World University Rankings, started in 2004; they rank about 700 universities. How is their methodology different?


SIMON MARGINSON

Well, QS uses subjective as well as objective indicators, and that's where they differ from those systems of ranking which really rest on large scale research data collection, like Shanghai Jiao Tong's. What QS does is survey academics around the world and ask them what their most highly ranked universities are for purposes of teaching and research. They also have a survey of employers, which is 10 per cent of the index, but the major survey, 40 per cent of the index, is the one on academic reputation. They also collect data on some other areas: staff/student ratios, so that the smaller the number of students per staff member, the better your ranking will be; and internationalisation, so that the more international students you have and the more international staff members you have, the better your ranking will be. So they spread their interests across this larger range of things.

They also include a citation count, research citations per head of staff, which is a research quantity indicator focused, I suppose, on the highest impact research, so it has a quality dimension. The main controversy with QS is about the surveys amongst the academic communities around the world. Response rates are very low, and the surveys have tended to favour, in terms of the balance of returns, certain countries where QS has a higher profile – the English speaking countries, particularly the UK, Australia and Hong Kong, where QS has a big base in Asia – and to under-represent, in terms of survey returns, the US and a lot of non-English speaking countries. So QS is, to some extent, an English language country valuation of the world higher education system, just by dint of the survey returns.

The survey returns – because they've got low response rates, because they've got these problems about where they're coming from and not being balanced, and because surveys are surveys and they're subjective – tend to produce results that differ pretty wildly from year to year. It's a bit like taking a football competition halfway through the season and asking everyone to say which teams they thought were the best. You'd get some pretty funny outcomes and they'd look pretty different every 12 months. I think that's the problem you have here: people are shifting their evaluations around constantly. What you get with QS is that universities sometimes rise or fall very dramatically from year to year, even though their objective performance hasn't changed. That has tended to undermine the credibility of QS's ranking, so it's probably not as influential as it was three or four years ago.
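
To see how survey weight translates into year-to-year volatility, here is a minimal sketch of a QS-style composite in which only the reputation survey component is noisy. The weights approximate the classic published QS scheme (40 per cent academic reputation, 10 per cent employer, 20 per cent citations per faculty, 20 per cent staff/student ratio, with the two 5 per cent internationalisation items folded into a single 10 per cent for brevity); all universities and numbers are invented.

```python
# A sketch of the volatility point: with the reputation survey worth 40%
# of a QS-style composite, small-sample survey noise alone can swing the
# year-to-year leader. All universities and numbers are invented.
import random

WEIGHTS = {"reputation": 0.40, "employer": 0.10, "citations": 0.20,
           "staff_ratio": 0.20, "internationalisation": 0.10}

# Fixed "objective" scores; only the survey component is re-sampled.
base = {
    "Uni P": {"employer": 70, "citations": 80,
              "staff_ratio": 60, "internationalisation": 75},
    "Uni Q": {"employer": 75, "citations": 75,
              "staff_ratio": 65, "internationalisation": 70},
}
true_reputation = {"Uni P": 72, "Uni Q": 70}

random.seed(1)
for year in (2010, 2011, 2012):
    scores = {}
    for uni, inds in base.items():
        # A low-response survey: the measured reputation wobbles around
        # its "true" value by several points each year.
        survey = true_reputation[uni] + random.gauss(0, 5)
        scores[uni] = WEIGHTS["reputation"] * survey + sum(
            WEIGHTS[k] * v for k, v in inds.items())
    leader = max(scores, key=scores.get)
    print(year, {u: round(s, 1) for u, s in scores.items()}, "->", leader)
```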


ERIC VAN BEMMEL

That notion of an academic staff member filling out a survey to name the best institutions in their field of research or study, isn't there some room for manipulation there? I mean, people will be familiar with the rankings anyway. If they know that a certain university is near the top, somehow they've already internalised that it's a great university, they'll include it.


SIMON MARGINSON

I think that's a good point. This is called the "halo effect": an existing reputation tends to reproduce itself in people's minds. People don't want to give a silly answer, so although they may not know much about Stanford or Harvard or MIT, they'll tend to put them near the top, because that's where everyone puts them. I think the greater problem here is probably ignorance, though. When you're filling out a form like that, you really only know about your own institution and perhaps where you were trained, and that could be out of date. You might know about the competitor institution where you've got friends and collaborators, or where you might have done some team teaching at some stage, but you won't know about 10 or 20 institutions, let alone the world of institutions. So it's a bit of a nonsense asking individuals – unless they're working for ranking agencies, across all of the data – to make judgments about the whole world picture.


ERIC VAN BEMMEL

Just briefly, Simon, the third of the big three world university rankings is the Times Higher Education World University Rankings, of which you're a member of the editorial board. They started in 2004, worked with QS for a number of years, and then went their own way and are now working with Thomson Reuters. The Globe and Mail in 2010 labelled them "the most influential of the rankings". They rank the top 200 universities, plus another 200 also-rans underneath that. Their methodology is again a different sort of mix. Can you explain that one to us?


SIMON MARGINSON

Yes. Well, Times were with QS for a while, but I think the criticisms of QS in the end forced Times to break its links with QS and to pull in Thomson Reuters – who produce one of the two major databases of research and citation – as their data crunching partner. Thomson Reuters have used the opportunity to build a big database about each individual university, not just in terms of research but in terms of a whole range of resources and capacities that universities have, which they use for their own separate commercial purposes. So it's been a happy marriage from both points of view: Times have boosted the credibility of their ranking by getting a better set of data, and the data crunchers have used the opportunity to build their commercial roles. It's a good example of how there are business incentives at work in the whole rankings, comparison and data gathering industry in universities now.


ERIC VAN BEMMEL

They look at teaching as well -- it's a large component -- don't they?


SIMON MARGINSON

Yes. Well, what they do is preserve, to a degree, the idea of a survey, but not as QS does, with 50 per cent of the total ranking decided by a subjective survey. It's a much smaller percentage in Times, and they survey separately on research and on teaching outcomes. They also include student/staff ratios and indicators relating to the connectivity of institutions, internationalisation, industry funding and so on. They have a component in there for publications and citations, so they've got a research component, and they also have numbers on postgraduate research. When you boil it all down, the majority of the Times Higher ranking now is about research.

Teaching is in there but only really in terms of the subjective survey of who's a good teaching institution and in terms of student/staff ratios. If you think that that indicator pertains to teaching, I suppose the staffing levels tell you something about the conditions under which teaching occurs. They don't really tell you where the teaching is good or bad but they are one indicator of capacity to teach. The fact is that there are no good indicators about teaching outcomes, about learning achievement of students, so that's why student/staff ratios are still being used by both QS and Times.


ERIC VAN BEMMEL

This is Up Close, coming to you from the University of Melbourne, Australia. I'm Eric van Bemmel. In this episode we're speaking with global higher education expert, Professor Simon Marginson, about the rise of world university rankings, what they do and don't tell us about universities and how the rankings themselves may be ranked.

Now, Simon, we've talked about these big three rankings. There are others, and there are probably more on the horizon as well, but these are the three that tend to get the most attention. They have different mixes in terms of their methodologies, but are they comparable in any way?


SIMON MARGINSON

I'll preface my remarks by repeating what you said earlier: I'm connected in an advisory role to two of these three highly publicised rankings. The third one does talk to me, but we've also been feuding publicly for a while, so I wouldn't claim an advisory role there at this stage. I think that Shanghai, the Academic Ranking of World Universities – really because it doesn't stray from the well-established territory of research information, where the data are fairly strong – is the most credible in policy circles. It's the one that people take most seriously at government level. The Times now, I think, probably has the most salience in the marketplace; it's the most likely to be influencing decisions about where international students should study, though Shanghai and Times, and to a lesser extent QS, are all playing a role there.

When universities do well in a ranking, they promote the one they do best in. So all three are being promoted by the universities themselves – the top hundred or top 150 – some of the time, and all three are playing a role. What we do know is that rankings information is becoming more rather than less influential over time at all levels, whether we're talking about governments, or employers, or industries that want to invest in research in universities, or students and families themselves. Education agents working in the international education market, in particular, will generally preface their advice to prospective students about where they should enrol by setting out the positions of the different institutions in the rankings. So really, rankings are playing a major regulatory role in the market.


ERIC VAN BEMMEL

I'm interested in the degree of concordance among these rankings. I mean, universities like Harvard, I imagine, would be in the top half dozen across all the rankings, but say a more mid-level university -- I won’t name any names, perhaps you might like to. Is there a real difference in where they stand in these three rankings?


SIMON MARGINSON

Well, you get some variation. You're right to say that the top universities tend to reappear in all of them; all the indicators, both objective and subjective, tend to coincide there. Everyone knows about Harvard and Cambridge and Oxford and Stanford, and the University of California institutions like Berkeley and UCLA – even the others, San Diego and Santa Barbara and so on, are reasonably well known at a world level. The European universities aren't as well known, although they probably should be. It's interesting to see, though, that MIT in some rankings comes out on top rather than Harvard, and I think what tends to happen is that rankings often get framed to produce certain effects. The indicators, and the weighting of indicators, make a lot of difference to what the pecking order is going to be.

Some rankers like the idea of an institute of technology. It appeals to them, and I think it appeals to the business press and to the general community, to a degree. It sounds more practical, more vocationally useful, more like applied research of value to the human race rather than esoteric ivory tower research. So MIT often does well – but then MIT is a fantastic research university, which on some impact indicators is better than Harvard, so it's not unjustified. But you do see, yes, the top group is stable. Tokyo and Toronto and a few others break into the American monopoly, but in the objective rankings the top group is generally almost all American. The US is overwhelmingly strong whenever you're looking at research indicators, and that's what produces that effect.

When you go to the Times and the QS, which take a broader range of elements into account, do use surveys, and provide a broader range of judgments than just research performance, what tends to happen is that the strongest universities in each country start to appear in the top hundred, whether they get there on an objective basis or not. The top university in Thailand – which arguably might be Chulalongkorn at the moment – is more likely to appear in the top hundred or top 150 in the Times or QS than it would be in an objective ranking based on research. The Chinese universities are in the long term clearly coming up and up: the number of Chinese universities in the objective top 500 is much higher than it was five years ago – it's doubled. But the Chinese universities push into the top 100 only when the surveys come into the rankings, in QS and Times; at this stage they don't figure in the top 100 when a more objective measure is made, although, as I said, in the longer term they will.


ERIC VAN BEMMEL

So, really objective measures versus reputation?


SIMON MARGINSON

Well, that's how I see it. I don't think the public discussion is aware of that distinction: all rankings seem to be objective. Once they're stated in league tables, they've got tremendous power and impact, so that even a very shonky ranking carries weight – and there have been some amazing ones over the years. The Asian business valuation of top universities in Asia consisted of Asian business journalists ringing up the vice chancellor or rector or president of the top university in each country, asking them what they thought the other best three were, and then just adding up the numbers. That was the Asian business ranking of top universities in Asia.

But even so once it came out, everyone knew about it, everyone wanted to be number one, everyone was aware of what it meant. They've got tremendous power, whether they're objective or subjective, whether they're reputation driven or they're driven on research numbers. They've still got a lot of power.


ERIC VAN BEMMEL

The media have a role here as well – the mainstream media, as you say; journalists report the rankings or produce their own. I mean, the caveats aren't really communicated, are they?


SIMON MARGINSON

Well, I've seen my role as an expert in this area as one of trying to clean up ranking: to make sure that it's a fair competition and, as far as possible, an objective competition, although we may want to have a separate ranking on subjective lines. Reputational rankings are relevant to the market, but we should separate those from objective data. That's how I've approached it, and so I try to educate the media to take a similar approach, and to some extent the debate has got more sophisticated. You are seeing journalists increasingly aware of how these things are put together, but the more public discussion there is, the better. We must keep scrutinising rankings and separating the good rankings – the clean rankings, the fair competitions – from the ones which are much more arguable and questionable and have suspect methodologies.


ERIC VAN BEMMEL

And also to get the public to understand that a whole-of-institution approach, which is what they're seeing in these league tables, has its limitations.


SIMON MARGINSON

I think that's true, although, of course, everyone wants to know where the best university is, and that's the great driver. Early on in the rankings, some people were arguing that we should try and keep whole-of-institution rankings off the agenda altogether, and I don't think that's realistic. I think people want league tables, and so we've got to try and make them as good as possible. But I personally, as a social scientist who studies universities, regard rankings that are based on single indicators as much more useful, much more explanatory. So in my own work, trying to study where the worldwide knowledge economy is going, I tend to use the kind of information produced by the Centre for Science and Technology Studies at Leiden, where they produce single measures of paper numbers, single measures of citation quality, single measures of top one per cent and top 10 per cent papers, and so on.

I find single measures much more useful; you can trust them. Whenever you've got a combined rankings index which takes several qualities into account, you've got to have a weighting system which says that reputation is 10 per cent and research citations are 20 per cent and staff/student ratios are 10 per cent and so on. All of that is essentially nonsense.


ERIC VAN BEMMEL

It's arbitrary?


SIMON MARGINSON

It's arbitrary, entirely self-selected, and it's never theorised. It's just guesswork: we think this is important, we like this indicator, we're going to make that 50 per cent – which is nutty, really. I think the public would be better served by separate tables on each of the separate indicators than by a composite ranking. That's the approach I tend to take in my own work, but as I said, the pressure is there to provide holistic tables, and given that that pressure won't go away, we've got an obligation both to produce better single indicator rankings and to try and clean up the composite rankings as well.
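
The arbitrariness is easy to demonstrate. In this minimal sketch, three invented universities with fixed indicator scores swap pecking order entirely depending on which of three equally defensible weighting schemes the ranker happens to pick; every name and number is made up.

```python
# A sketch of the point about arbitrary weights: the same universities,
# with the same indicator scores, change pecking order entirely under
# different weighting schemes. All names and numbers are invented.

unis = {  # scores out of 100 on three indicators
    "Uni A": {"reputation": 90, "citations": 50, "staff_ratio": 60},
    "Uni B": {"reputation": 60, "citations": 90, "staff_ratio": 55},
    "Uni C": {"reputation": 55, "citations": 60, "staff_ratio": 95},
}

schemes = {  # three equally "defensible" sets of weights
    "survey-heavy":   {"reputation": 0.5, "citations": 0.3, "staff_ratio": 0.2},
    "research-heavy": {"reputation": 0.2, "citations": 0.6, "staff_ratio": 0.2},
    "teaching-heavy": {"reputation": 0.2, "citations": 0.2, "staff_ratio": 0.6},
}

for name, weights in schemes.items():
    ranked = sorted(unis, key=lambda u: -sum(
        weights[k] * unis[u][k] for k in weights))
    print(f"{name:>14}: {' > '.join(ranked)}")
    # survey-heavy puts Uni A first; research-heavy puts Uni B first;
    # teaching-heavy puts Uni C first.
```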


ERIC VAN BEMMEL

Are you aware of any sort of trajectory looking into the future, say five years, where the rankings may be going?


SIMON MARGINSON

Well, one development in Europe that's very important is what they call U-Multirank, where they're collecting data on a huge range of indicators – some objective, and some subjective market evaluations produced by students, who say which courses they like or don't like and how much they like them, or how good they think the research is in other institutions, that sort of thing. Fifty or 100 indicators, and that kind of system can be tweaked by the user. You can go into a system like that using web access and say: all right, I'm going to rank institutions for myself on the basis of things I think are really important – student services, campus safety, availability of the databases and IT I need for communication and investigation, and perhaps the quality of engineering and business studies, because they're the two things I'm going to do – and just look at the ranking on that basis, ignoring all the other indicators. That actually tells you quite a lot about the things that pertain to your own choice. That's the idea of customer driven ranking: as things get more sophisticated, people select which single indicators they'll use and how much they'll weight them. I think you'll see a lot more of that, but it's really in Western Europe – where the discussion is more sophisticated, and where people generally make more informed choices about public matters – that this approach has most salience.

The rest of the world is still in a somewhat more simplistic mode, I think, so it'll take a while to catch on. But what you will see is a lot more specialist ranking and quite a bit more single indicator usage, along with some improvement in the quality of the composite rankings; the three major ones will probably all get overhauled over the next five or 10 years, and others may emerge.
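
As a minimal sketch of that customer-driven idea, here is how a user-weighted ranking might work. The indicators echo the ones mentioned above (student services, campus safety, engineering, business), and all institutions and scores are invented.

```python
# A sketch of the "customer-driven" U-Multirank idea described above: the
# user picks which indicators matter to them and how much, and ranks the
# institutions on that basis alone. Names and data are invented.

institutions = {
    "Uni X": {"student_services": 80, "campus_safety": 90,
              "engineering": 60, "business": 70},
    "Uni Y": {"student_services": 60, "campus_safety": 70,
              "engineering": 95, "business": 85},
    "Uni Z": {"student_services": 90, "campus_safety": 85,
              "engineering": 50, "business": 60},
}

def my_ranking(data, my_weights):
    """Rank institutions by the user's own indicator weights,
    ignoring every indicator the user did not select."""
    score = lambda uni: sum(w * data[uni].get(ind, 0)
                            for ind, w in my_weights.items())
    return sorted(data, key=score, reverse=True)

# A prospective engineering/business student who also cares about safety:
print(my_ranking(institutions,
                 {"engineering": 0.4, "business": 0.4, "campus_safety": 0.2}))
# -> ['Uni Y', 'Uni X', 'Uni Z']
```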


ERIC VAN BEMMEL

Simon, thank you very much.


SIMON MARGINSON

Pleasure, Eric.


ERIC VAN BEMMEL

That was Simon Marginson, Professor of Higher Education in the Centre for the Study of Higher Education at the University of Melbourne. We've been speaking about the rise of world university rankings and their impact on the business of higher education.

Relevant links and a full transcript of this and all our episodes can be found at our website, upclose.unimelb.edu.au. Up Close is a production of the University of Melbourne, Australia. This episode was recorded on 19 April 2012 and produced by Kelvin Param and me, Eric van Bemmel. Audio engineering by Gavin Nebauer. Up Close is created by me and Kelvin Param.

Thanks for joining us, until next time, goodbye.

