Annotations: The Next Altmetric?

by Cat Williams, COO at Altmetric
Thursday 27 September
For all the talk of Wikipedia, Twitter, Policy and Facebook that we’ve heard over the last few days, there are still many as-yet-untracked sources of online engagement that create excitement amongst the altmetrics crowd - and annotations are near the top (if not at the very top) of the list!

With this in mind, we were excited to see what the Thursday afternoon session on Annotation in Researcher and Publishing workflows had in store.

Heather Staines from Hypothes.is opened the session, giving an introduction to the history of their work. Heather described annotation as ‘layers’, where multiple conversations can be happening at the same time (potentially both public and private). With over 3.7 million annotations collected, Hypothes.is has analyzed their data and found around 25,000 collaboration groups commenting on research across the web.

Hypothes.is have worked with publishers, such as eLife, to deliver annotation capabilities within their article pages. This has also enabled publishers to add extra information on corrections and updates, boosting the visibility of those.

A key benefit of annotations is that they enable the publisher to provide updates to a paper without needing to publish another paper. Annotations are added by experts and approved by a publication committee.

Annotations in peer review are yet another potential application - here reviewers can use tags to enhance their feedback as they work through the review process.

Further internal publisher use cases have emerged as well - where the functionality is used as a way to communicate and share notes through complex transitions or migrations of content.

Next up was Aravind from Europe PubMed Central. Discussing the work they do in this area, Aravind spoke about the importance of literature-data integration, which enables people to draw links between the different content types. This, he noted, takes a lot of curation, and can be a particularly cumbersome task.

The Europe PMC annotations platform was developed to relieve some of this pain, with over 500 million annotations of different types included (things like gene function statements, molecular interactions and genetic mutations, amongst others!), all of which are made available via an API. Users can search this database based on relationships and types (so, as an example, you can look for a specific chemical and find where it is mentioned in the ‘methods’ section of papers).
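For those curious what querying a service like this might look like in practice, here is a minimal Python sketch. To be clear, the endpoint, parameter names and response fields below are our assumptions based on the publicly documented Europe PMC Annotations API rather than details given in the talk, so do check the current documentation before relying on them.

```python
# Minimal sketch: fetching annotations of one type for a single article.
# The endpoint, parameters and response shape are assumptions based on the
# public Europe PMC Annotations API documentation - verify before use.
import requests

BASE_URL = "https://www.ebi.ac.uk/europepmc/annotations_api/annotationsByArticleIds"

def fetch_annotations(article_id: str, annotation_type: str) -> list[dict]:
    """Return annotation records of one type (e.g. 'Chemicals') for one article."""
    params = {
        "articleIds": article_id,   # assumed format "source:extId", e.g. "MED:28589952"
        "type": annotation_type,    # assumed values like "Chemicals", "Gene_Proteins"
        "format": "JSON",
    }
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    # Assumed shape: a list of article objects, each with an "annotations" list.
    return [ann for article in response.json() for ann in article.get("annotations", [])]

if __name__ == "__main__":
    # Hypothetical usage: print each chemical mention and the section it appears in.
    for ann in fetch_annotations("MED:28589952", "Chemicals"):
        print(ann.get("exact"), "-", ann.get("section"))
```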

Europe PMC use this data themselves, and showcase it through their platform — users can click on the highlighted terms in a text and go to the corresponding record on the database.

From here, Aravind said, they are keen to expand their work - conducting user research and experimenting with layering automated annotations.

Annotations and Metrics on Cambridge Core followed, presented by Senior Digital Development Publisher Nisha Dosi (who has been doing a splendid job of tweeting the conference!). Nisha broke the idea of annotation down into 3 key areas: discussion (for example between authors and readers, lecturers and students, etc.), open research, and lastly author notes (authors adding further detail to their own publications).

Working with the authors, Cambridge create an analytic note that explains more of the background and why particular decisions were made within the study. This appears via a widget alongside the article on its page.

Why are authors doing this? Nisha cited drivers including adding rigor to their research (for the benefit of their peers), for funders (to help meet their requirements for research transparency), and for the interested public and students (to improve understanding and trust in scholarship).

Photo credit: @s_abuelbashar

So how does this link to altmetrics? CUP plan to add annotation counts to their journals - and will make the distinction between author and public annotations.

This, says Nisha, will likely be of particular interest to their society partners - and demonstrates a commitment to initiatives such as DORA by adopting a responsible and fair approach to metrics across all disciplines.
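For the sake of illustration, a count like the one Nisha described could be as simple as the sketch below. This is not CUP's implementation: the record structure and the 'role' field are entirely hypothetical, and a real platform would derive author vs public status from its own user metadata.

```python
from collections import Counter

def annotation_counts(annotations: list[dict]) -> dict[str, int]:
    """Split annotation records into author vs public counts.

    Assumes each record carries a hypothetical 'role' field of either
    'author' or 'public'; anything unlabelled is treated as public.
    """
    counts = Counter(a.get("role", "public") for a in annotations)
    return {"author": counts["author"], "public": counts["public"]}

# Hypothetical usage: a journal page widget could display these two numbers.
sample = [{"role": "author"}, {"role": "public"}, {"role": "public"}]
print(annotation_counts(sample))  # {'author': 1, 'public': 2}
```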

Closing the session was Alex from PaperHive (who had some very nice stickers out in the conference venue lobby, the author notes). Alex began by exploring the idea of collaboration in academia - demonstrating PaperHive’s key role in enabling researchers to manage, discuss and annotate their literature.

PaperHive, said Alex, has 3 key elements: communication (public and group discussion), documents (literature management) and people (connecting communities and teams via personal profiles).

Use cases for PaperHive included interactive teaching and community proof-reading.

Turning to the challenges involved in annotation approaches, Alex highlighted the fact that you often need a community and clear goals to spark usage, that there is a time commitment and coordination needed from editorial and marketing teams, and that ‘not all annotations are created equal’. On this last point Alex elaborated: the expertise of the annotator and the purpose of their comment lead to distinct variations between annotations.

Wrapping up, Alex moved on to the concept of annotations as a usage metric. Annotations, he noted, have the advantage of showing that someone is engaged with the content, and also provide insights into the reader community's background. Where they face obstacles is in the effort needed to collect and standardize the data, and the variable quality of annotations.

Overall, a really interesting session and some useful insights for the rest of us to take into account when thinking about how or if we might consider annotations as ‘altmetrics’!