Don't take university rankings at face value; students should do their own research on which university would suit them best. Photo / Jason Oxenham
COMMENT
“Auckland University inches up global rankings but Waikato takes a tumble”, read this week’s headline. If you are a parent of a high school student thinking about tertiary study, this story probably caught your eye. So how are these rankings determined and what does that mean for the education of your children?
Selecting a university to study at is a stressful and complicated process. Even once a student has decided on a subject, working out the differences between universities is complex. There are big-city international universities with huge research budgets, centres of research excellence and high-rise dormitories, and there are small regional universities that focus more on teaching, may have big agriculture and horticulture centres, and where student living tends to be more like that of the locals. The best one to pick will depend on the quality of the teaching, the size of the classes, the extracurricular activities offered, and whether big-city life is for you.
You would think that a global university ranking system could help people assess the quality of a university from a student's perspective and find the one that suits them best.
They don’t.
The QS World University Rankings that were published this week are one of several global rankings published annually by for-profit companies. They rank universities on six different metrics.
Forty per cent of the weighting for the QS score comes from the results of its own global academic survey. Forms are sent, using purchased mailing lists, to academics around the world, asking them to nominate up to 30 universities that they think are good. There is no payment for filling out the survey, and academics are not allowed to nominate the university they work at. The results have a huge effect on the overall score, yet they basically come from other people's perceptions of a university, with no need for those people to have any experience of it. Unless a university is high-profile, like Harvard or MIT, the chances are that academics don't know much about it unless they went there as students or have an active collaboration with somebody there. This automatically puts smaller or less research-intensive universities that might be great at teaching at a disadvantage.
Twenty per cent of the weighting goes towards the faculty-to-student ratio, which seems self-explanatory. However, larger universities tend to have more staff for research; the ratio doesn't tell you how many of those staff actually teach, what the class sizes are, or what the quality of the teaching is.
Another 20 per cent goes towards research impact, as measured by citations from peers. This counts how many times other academics have cited the research of academics at the university being assessed. It will always skew towards faculty researching subjects that are highly active and therefore attract lots of citations. Important but much more niche research that we do well in New Zealand is disadvantaged by this scoring. Again, the opinion of one academic on another academic's research in the same field is probably not of much use to a potential student looking for a place to study.
Ten per cent of the score is based on a survey of employers' opinions of a university, even if those employers have no personal experience of it. The final two categories, worth 5 per cent each, measure how many international students and international staff the university has.
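To see how much the reputation surveys dominate, the weighting scheme described above can be sketched as a simple weighted sum. This is an illustration only: the metric names and example values below are assumptions for demonstration, not real QS data or the actual QS calculation.

```python
# Illustrative sketch of a QS-style overall score, assuming each
# metric has already been normalised to a 0-100 scale.
WEIGHTS = {
    "academic_reputation": 0.40,    # global academic survey
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "employer_reputation": 0.10,    # employer survey
    "international_students": 0.05,
    "international_faculty": 0.05,
}

def overall_score(metrics: dict) -> float:
    """Weighted sum of normalised metric scores."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

# A hypothetical university that scores well on the two opinion
# surveys but is average on everything measurable:
example = {
    "academic_reputation": 90,
    "faculty_student_ratio": 50,
    "citations_per_faculty": 50,
    "employer_reputation": 90,
    "international_students": 50,
    "international_faculty": 50,
}
print(overall_score(example))  # 70.0
```

In this toy example, the two opinion surveys alone contribute 45 of the 70 points, which is the article's underlying point: half the total weighting (40 per cent academic plus 10 per cent employer) rests on reputation surveys rather than anything a student would directly experience.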
While marketing departments might use these rankings to persuade you that their university offers a better education for your children, the categories show that they mostly just measure the opinions of others.