Metrics, Funding and The REF: 5:AM Panel Session 

Wednesday 26 September

 

By Josh Clark, Marketing Executive at Altmetric

 

With the use of metrics in the research evaluation process being a hot topic at this year’s 5:AM conference, it’s interesting to hear how funders approach the use of traditional metrics and altmetrics when evaluating projects. The panel was chaired by Cat Williams, COO of Altmetric. Speaking on the panel were:

Kevin Dolby from the Medical Research Council (MRC). The MRC currently use a variety of metrics to evaluate the projects they fund, including information from ResearchFish. Altmetrics aren’t yet part of their evaluation; however, there is a movement towards using this data as a way of ‘filling the gaps’ in the information gathered from ResearchFish.

Mark Taylor from the National Institute for Health Research (NIHR). The NIHR is a £1.1bn-a-year funder. They are very broad in how and what they fund, with many of their projects involving work done with the NHS. In his introductory statement, Mark spoke about how the NIHR need to think more broadly about the metrics they use, as traditional metrics cannot give them the full picture of how research is being received by patients and research professionals. Mark explained that they do not use metrics to make funding decisions, but do use them for post-award evaluation.

Juergen Wastl from Digital Science, speaking from his previous role in the Research Strategy Office at the University of Cambridge. Although he was speaking from an institutional point of view, Juergen explained that he believes institutions are in fact ‘meta-funders’: they need to distribute funding effectively, and to do this they need a set of metrics.

The first question for the panel was around how the emergence of new metrics had influenced their organizations and internal workflows. Both Kevin and Mark explained that for their organizations it hasn’t changed much yet: although they can see the value in altmetrics data for providing a different perspective, both funders are some way off from actually using the data in their evaluations. The University of Cambridge, however, have started using the Altmetric Explorer for Institutions to analyze their research and compare it with that of other institutions.

The discussion then moved on to whether there are any other sources that could be brought together to better tell the story behind metrics and altmetrics data. Mark explained that within the NIHR they would like a measure of the patient experience at the endpoint to inform future research. For the MRC, the priority would be further context around mentions of research within policy documents, such as where a citation appears and how it is being discussed. At the University of Cambridge, the need is for altmetrics sources that track mentions of research in international outlets.

A question then came from the audience around how each of the panelists’ organizations measures public engagement. At the MRC they use ResearchFish to get a view of the public engagement around their research; they are interested in altmetrics but need further context on what the altmetrics data show and the story behind mentions from sources like Twitter. At the NIHR they would like public engagement to be part of the research planning stage, both to focus the aims of the research and to be used as a measure of success. Juergen expressed the need for a clearer definition of what public engagement is, and for consideration of whether different measurements are needed for different subject areas.

The negative views of metrics within the publishing and funding industries, and the reliance of funders on peer review for evaluating publications, were queried next, with the observation that in many other industries the opposite is true: people are increasingly turning to numerical data to make decisions. The panel acknowledged that reliance on peer review is a conservative way of evaluating research, but noted that it is currently seen as the most effective method.

The final shared thought of the panel was that if the funding industry were to look to other metrics and altmetrics for evaluation purposes, there would need to be a wider cultural shift -- although it would be interesting to see where the audience’s challenge to funders to be ‘more experimental’ might lead!