New thinking in altmetrics

Wednesday 26 September

The third session of the day moved on to a different perspective: ‘New thinking in altmetrics’. Chaired by Andrea Michalek from Plum Analytics, the line-up included talks on data, metrics for research evaluation, and altmetrics and open access impact.

After a slightly shaky start (never mind missing data - we lost a panelist!), Nick Sheppard from the University of Leeds gave an overview of the approach taken to managing research data, software and code at his institution - and highlighted some of the obstacles and challenges associated with doing so.

Describing altmetrics as a ‘low barrier method to tracking engagement with data sets’, Nick posed the question of how repository managers and others in the library could play a role in driving engagement and visibility for these research outputs.

Nick moved on to offer some insights into the role of research data in the context of initiatives like the REF - and the shift from simply accompanying publications to being a valid and useful research output in its own right.

Repositories such as Zenodo, Dryad and figshare have contributed a lot: providing the necessary metadata to ensure that data can be cited and more easily discovered (and many also incorporate altmetrics for the items they host).

Citing supplementary data remains complex - often the necessary information is missing or incomplete; there is a role for altmetrics to play here.

Moving on to look at the IRUS-UK database from Jisc, Nick shared the results of an investigation he’d done to explore the metrics he could find for the items hosted there. The results were disappointing - the database hosts content for 27 UK university repositories, but many of the datasets Nick used in his research lacked DOIs or other unique identifiers. This was reflected in low citations and little Altmetric data.
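To illustrate why missing DOIs matter here (this is a rough sketch, not the method Nick described), attention data can only be retrieved when an item has a resolvable identifier to query against. The snippet below uses Altmetric’s public Details Page API to look up a single DOI; the DOI shown is a placeholder.

```python
# Illustrative sketch only - not the investigation described in the talk.
# Looks up online attention for one DOI via Altmetric's public Details Page API;
# repository items without a DOI (or other identifier) simply cannot be matched.
import requests


def altmetric_attention(doi):
    """Return Altmetric data for a DOI, or None if no attention is recorded."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        # The DOI is valid but Altmetric has no attention data for it.
        return None
    resp.raise_for_status()
    return resp.json()


# Placeholder DOI for illustration - replace with a real dataset identifier.
data = altmetric_attention("10.1234/example-dataset")
if data is None:
    print("No online attention recorded for this identifier.")
else:
    print("Altmetric score:", data.get("score"))
```

Without a DOI there is nothing to pass to a lookup like this, which is one reason the IRUS-UK items in Nick’s sample showed so little Altmetric data.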

So what does Nick recommend? A culture of data sharing, an ecosystem that promotes the publication and identification of different types of research material, a move to open research, and an emphasis on sharing these outputs more openly on social media!

Next up was Rodrigo Costas from CWTS at Leiden University - asking how we can establish different perspectives on metrics, and particularly those that deal in social media activity.

Starting with the dichotomy between descriptive and evaluative (or comparative) bibliometrics, Rodrigo summarised the differences between the two - the contrast between ranking high versus low and asking exploratory questions like who, why, where and when.

He then looked at the different landscapes that make up these approaches - both thematic and geographic. A cluster analysis of publications authored by African scholars threw up some interesting findings - big topics stood out and seemed to be getting a lot of traction on Twitter. A look at European-authored research produced different patterns - with social media attention spread across a broader range of topics.

Altmetrics, said Rodrigo, enable us to explore these differences and all the stories of how this research is being discussed.

But how can we best incorporate the more evaluative approach to this data? Examining the social vs scholarly focus of available social media platforms, Rodrigo discussed the contrasts in the users and audiences of each - noting that platforms such as Wikipedia, Mendeley and post-publication peer review platforms provide a forum for activities that are closer to those we traditionally consider useful for research evaluation.

Those that are more dissimilar to traditional citations, such as Twitter or Facebook shares, do not fit with traditional ways of evaluating science, Rodrigo argued - but that does not make them irrelevant or useless. Instead, he suggested, they are better suited to helping us understand more about the ways in which we communicate science - and to challenging our current approaches: are we communicating research in the right way? Are we asking the right questions? What can an institution learn about how wider communities are engaging with its research?

Perhaps even more pertinently: do open access initiatives help drive the communications, and lead to a more accurate understanding of scholarly research?

Rodrigo wrapped up by saying that the study of relationships between social media and science opens a new research agenda - one that moves beyond simply counting tweets or other interactions and focuses more on the social aspects of the communication of research.

Mithu Lucraft from Springer Nature concluded the session, with a talk that explored the dynamic between altmetrics and open access research. Mithu provided an overview of the open access publishing programmes at Springer Nature - which span journals, books, and other outputs - with over 30% of their total output now published OA.

‘Authors don’t really know what they’re getting out of open access’ noted Mithu, saying that there is a lot we can do with metrics to help authors tell a story about their published research.

Accelerated discovery, innovation, lower costs and increased data reuse are goals that will be familiar to many in the scholarly space - but how can altmetrics help?

Breadth, diversity, a multi-faceted approach, and speed, suggests Mithu.

All of this helps to tell a story around what publishing open access means - for an author or for a funder. Pointing to the inclusion of Altmetric badges and referencing the Bookmetrix project, Mithu shared the findings of a report recently published by Springer Nature, which found that OA books typically achieve over 10 times more online attention than those which are paywalled.

Working closely with several authors, Mithu demonstrated how altmetrics have helped them determine the extent to which they have reached their goals, and to tell their impact story.

Hybrid journals present a controversial topic for publishers and institutions alike. So what is the benefit of publishing OA in a hybrid journal? According to the results of the report, OA publications in those titles receive 4x more downloads, 1.6x more citations, and significantly increased online attention.

All metrics are flawed, and what constitutes research impact is still very much up for debate, says Mithu - noting that one recent study uncovered 108 different definitions of research impact.

The session ended with an idea: what can we learn from new ways of thinking about how we share research, use metrics, and evaluate outcomes?