World University Rankings 2015-2016 methodology

Change for the better: fuelled by more comprehensive data, the 2015-2016 rankings probe deeper than ever

The Times Higher Education World University Rankings are the only global performance tables that judge research-intensive universities across all their core missions: teaching, research, knowledge transfer and international outlook. We use 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons, trusted by students, academics, university leaders, industry and even governments. The basic methodology for this year’s rankings is similar to that employed since the 2011-12 tables, but we have made important changes to the underlying data.

The performance indicators are grouped into five areas:

  • Teaching (the learning environment)
  • Research (volume, income and reputation)
  • Citations (research influence)
  • International outlook (staff, students and research)
  • Industry income (knowledge transfer).

Note on comparisons with previous years
Because of changes in the underlying data, we strongly advise against direct comparisons with previous years’ World University Rankings.

Universities are excluded from the World University Rankings if they do not teach undergraduates or if their research output amounted to fewer than 200 articles per year over the five-year period 2010-14. In exceptional cases, institutions below the 200-paper threshold are included if they have a particular focus on disciplines with generally low publication volumes, such as engineering or the arts.
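
As a concrete illustration, the eligibility rule above can be expressed as a simple filter. This is a minimal sketch in Python, assuming a hypothetical record structure; the 200-paper threshold, the 2010-14 window and the low-volume-discipline exception come from the rule itself.

    # Minimal sketch of the eligibility rule described above. The
    # Institution record and its field names are hypothetical; the
    # threshold, window and exception come from the rule in the text.
    from dataclasses import dataclass

    PAPERS_PER_YEAR_THRESHOLD = 200
    WINDOW_YEARS = 5  # the five-year period 2010-14

    @dataclass
    class Institution:
        name: str
        teaches_undergraduates: bool
        papers_2010_14: int                # total articles over the window
        low_volume_discipline_focus: bool  # judged case by case

    def is_eligible(inst: Institution) -> bool:
        """Apply the inclusion criteria described in the text."""
        if not inst.teaches_undergraduates:
            return False
        if inst.papers_2010_14 / WINDOW_YEARS >= PAPERS_PER_YEAR_THRESHOLD:
            return True
        # Exceptional case: below the threshold, but focused on
        # disciplines with generally low publication volumes.
        return inst.low_volume_discipline_focus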

Data collection
Institutions provide and sign off their institutional data for use in the rankings. On the rare occasions when a particular data point is not provided – which affects only low-weighted indicators such as industry income – we enter a low estimate: the 25th percentile of the values that other institutions reported for that indicator, which sits between the lowest value reported and the average. By doing this, we avoid penalising an institution too harshly with a “zero” value for data that it overlooks or does not provide, but we do not reward it for withholding them.
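
A minimal sketch of this imputation rule, assuming indicator values sit in a NumPy array with missing entries stored as NaN; the function is illustrative, not THE’s actual pipeline.

    # Sketch of the missing-data rule above: a missing value is replaced
    # with the 25th percentile of what other institutions reported for
    # the same indicator. Illustrative only.
    import numpy as np

    def impute_missing(values: np.ndarray) -> np.ndarray:
        """Replace NaNs with the 25th percentile of the reported values."""
        reported = values[~np.isnan(values)]
        low_estimate = np.percentile(reported, 25)
        return np.where(np.isnan(values), low_estimate, values)

    # Example: four institutions report industry income; one does not.
    industry_income = np.array([42.0, 55.5, np.nan, 61.2, 38.9])
    print(impute_missing(industry_income))
    # The gap is filled with 41.2 - above the lowest reported value (38.9)
    # but below the mean (49.4), so the institution is neither punished
    # with a zero nor rewarded for withholding the figure.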

Getting to the final result
Moving from a series of specific data points to indicators, and finally to a total score for an institution, requires us to match values that represent fundamentally different data. To do this we use a standardisation approach for each indicator, and then combine the indicators in the proportions indicated below.

The standardisation approach we use is based on the distribution of data within a particular indicator: we calculate a cumulative probability function and evaluate where a particular institution’s value sits within that function. A cumulative probability score of X tells us, in essence, that a university with random values for that indicator would fall below that score X per cent of the time.

For all indicators except the Academic Reputation Survey, we calculate the cumulative probability function using a version of Z-scoring. The distribution of the data in the Academic Reputation Survey requires us to add an exponential component.
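
To make this concrete, here is a minimal sketch of a Z-score-based cumulative probability, assuming a normal distribution; the precise “version of Z-scoring” THE uses, and the exponential component added for the reputation survey, are not specified here, so this is an approximation rather than the actual calculation.

    # Sketch of standardisation via Z-scoring: each raw value is scored
    # against the distribution of all institutions on that indicator,
    # then mapped to the probability that a random value falls below it.
    # Assumes normality; THE's exact variant is not given in the text.
    import numpy as np
    from scipy.stats import norm

    def cumulative_probability_score(values: np.ndarray) -> np.ndarray:
        """Map raw indicator values to 0-100 cumulative probability scores."""
        z = (values - values.mean()) / values.std()
        return 100.0 * norm.cdf(z)

    # A value one standard deviation above the mean scores about 84,
    # meaning a random value would fall below it 84 per cent of the time.
    citations = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
    print(np.round(cumulative_probability_score(citations), 1))

Each indicator’s standardised score would then be weighted and summed in the stated proportions to give the institution’s overall total.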