The four UK funding bodies that manage the Research Excellence Framework (REF), through which £1.6 billion of research funding is distributed each year, have been told by an independent review that ‘no set of numbers is likely to be able to capture the nuanced judgments that the REF process currently provides’, and that it is not currently feasible to assess research outputs or impacts in the REF using quantitative indicators alone.

The findings of the Independent Review of the Role of Metrics in Research Assessment and Management are based on 15 months of evidence-gathering and consultation, including the most comprehensive analysis to date of the correlation between REF scores at the paper-by-author level and a set of 15 bibliometrics and altmetrics, undertaken by HEFCE with data provided by Elsevier. This analysis covered 149,670 individual outputs, and found only weak correlations between REF scores and individual metrics, significantly lower correlations for more recently published works, and highly variable coverage of metrics across subject areas. The analysis concludes that no metric can currently provide a like-for-like replacement for REF peer review.

In addition, over 150 responses to the review’s call for evidence uncovered considerable scepticism among researchers, universities, representative bodies and learned societies about the broader use of metrics in research assessment and management. Concerns include the ‘gaming’ of particular indicators, uneven coverage across individual disciplines, and effects on equality and diversity across the research system.

The review was chaired by James Wilsdon, professor of science and democracy at the University of Sussex, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and research administration. Its report, ‘The Metric Tide’, takes a closer look at the potential uses and limitations of research metrics and indicators, exploring the use of metrics within institutions and across disciplines.

Other findings of the review include the following:

  • Peer review, despite its flaws, continues to command widespread support as the primary basis for evaluating research outputs, proposals and individuals. However, a significant minority are enthusiastic about greater use of metrics in these contexts, if appropriate care is exercised and data infrastructures improved.
  • Carefully selected indicators can complement decision-making, but a ‘variable geometry’ of expert judgement, quantitative indicators and qualitative measures that respect research diversity will be required. Greater clarity is needed about which indicators are most useful for specific disciplines, and why.
  • Inappropriate indicators create perverse incentives. There is legitimate concern that some indicators can be misused or ‘gamed’: journal impact factors, university rankings and citation counts being three prominent examples.
  • The data infrastructure that underpins the use of metrics and information about research remains fragmented, with insufficient interoperability between systems. Common data standards and transparent processes are needed to increase the robustness and trustworthiness of metrics.
  • In assessing impact in the REF, as with outputs, it is not currently feasible to use quantitative indicators in place of narrative case studies, as doing so may narrow the definition of impact to what the available indicators can capture. However, there is scope to enhance the use of data in assessing research environments, provided data are sufficiently contextualised.

The review has identified 20 specific recommendations for further work and action by stakeholders across the UK research system. The recommendations, provided in full in the report, propose action in the following areas: supporting the effective leadership, governance and management of research cultures; improving the data infrastructure that supports research information management; increasing the usefulness of existing data and information sources; using metrics in the next REF; and coordinating activity and building evidence.

Key recommendations include the following:

  • Leaders of higher education institutions (HEIs) should develop a clear statement of principles covering their approach to research management and assessment. Research managers and administrators should champion these principles within their institutions, clearly highlighting that the content and quality of a paper are much more important than the impact factor of the journal in which it was published.
  • HR managers and recruitment or promotion panels in HEIs should be explicit about the criteria used for hiring, tenure, and promotion decisions, and individual researchers should be mindful of the limitations of particular indicators in the way they present their own CVs and evaluate the work of colleagues.
  • Publishers should reduce emphasis on the journal impact factor as a promotional tool, and data providers, analysts and producers of university rankings should strive for greater transparency and interoperability.
  • The UK research system should take full advantage of ORCID [Note 4] as its preferred system of unique identifiers. ORCID should be mandatory for all researchers in the next REF.
  • Further investment into improving the research information infrastructure is required. Funders and Jisc should explore opportunities for making additional strategic investments, particularly to improve the interoperability of research management systems.
  • A Forum for Responsible Metrics should be established to bring together research funders, HEIs and representative bodies, publishers, data providers and others to work on issues of data standards, interoperability, openness and transparency.

Professor James Wilsdon, who chaired the review, said:

‘Metrics touch a raw nerve in the research community. It’s right to be excited about the potential of new sources of data, which can give us a more detailed picture of the qualities and impacts of research than ever before. But there are also real concerns about harmful uses of metrics such as journal impact factors, h-indices and grant income targets. A lot of the things we value most in academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and diversity of our research.

‘The metric tide is rising. But we have the opportunity – and through this report, a serious body of evidence – to influence how it washes through higher education and research. We are setting out a framework for responsible metrics, which I hope research funders, university leaders, publishers and others can now endorse and carry forward.’

David Sweeney, Director of Research, Education and Knowledge Exchange, HEFCE, said:

‘This review provides a comprehensive and soundly reasoned analysis of the current and future role of metrics in research assessment and management, and should be warmly welcomed. The findings and recommendations of this review are clearly far-reaching, with implications for a wide range of stakeholders, including research funders, governments, higher education institutions, publishers and researchers.

‘We will discuss the specific REF-related findings and recommendations with the other UK HE funding bodies to agree next steps, including as part of preparations for consulting on a future exercise later in 2015. We will also be looking to work actively with other stakeholders, where noted in the recommendations, to address specific challenges and to take forward this broader agenda as part of a collective effort.’

Notes

1. The four UK HE funding bodies that manage the REF are: HEFCE, the Higher Education Funding Council for Wales, the Scottish Funding Council, and the Department for Employment and Learning (Northern Ireland).

2. The Independent Review of the Role of Metrics in Research Assessment and Management was set up in April 2014 at the request of the Rt Hon David Willetts, then the UK minister for universities and science.

3. Professor Wilsdon was supported by an independent steering group with the following members:

Liz Allen (Head of Evaluation, Wellcome Trust)

Eleonora Belfiore (Associate Professor of Cultural Policy, University of Warwick)

Sir Philip Campbell (Editor-in-Chief, Nature)

Professor Stephen Curry (Department of Life Sciences, Imperial College London)

Steven Hill (Head of Research Policy, HEFCE)

Professor Richard Jones FRS (Pro Vice-Chancellor for Research and Innovation, University of Sheffield) – representative of the Royal Society

Professor Roger Kain FBA (Dean and Chief Executive, School of Advanced Study, University of London) – representative of the British Academy

Simon Kerridge (Director of Research Services, University of Kent) – representative of the Association of Research Managers and Administrators

Professor Mike Thelwall (Statistical Cybermetrics Research Group, University of Wolverhampton)

Jane Tinkler (London School of Economics and Political Science)

Ian Viney (Head of Evaluation, Medical Research Council) – representative of Research Councils UK

Professor Paul Wouters (Centre for Science and Technology Studies, University of Leiden)

4. ORCID is a non-proprietary alphanumeric code to uniquely identify academic authors. Its stated aim is to aid ‘the transition from science to e-Science, wherein scholarly publications can be mined to spot links and ideas hidden in the ever-growing volume of scholarly literature’. ORCID provides a persistent identity for individual people, similar to that created for content-related entities on digital networks by digital object identifiers (DOIs).

Read The Metric Tide report.