Open@UCL Blog

Understanding Research Metrics: UCL’s New LibGuide

By Rafael, on 29 May 2024

Guest post by Andrew Gray, UCL Bibliometrics Support Officer

The UCL Research Support team has recently launched a comprehensive new LibGuide on Research Metrics. This resource covers a range of topics, from how to use and understand bibliometrics (citation metrics and altmetrics) to guidance on specific tools and advice on handling publications data. Learn more about this guide to enhance your research impact and better understand the world of research metrics!

Illustrative image: A desk with various open files, an open laptop, and a notebook. The open files on the desk contain several papers with notes. On the laptop screen, a data report visualization is displayed.

Image by Calvinius (own work), CC BY-SA 3.0

Bibliometrics

The core of the new guide focuses on using and understanding research metrics such as bibliometrics (citation metrics and altmetrics). It explains how to access citation counts through Scopus and Web of Science, and more complex normalised metrics through InCites. It also gives guidance on how best to interpret and understand those metrics, and advice on metrics to avoid. The guide also covers the UCL Bibliometrics Policy, which governs the use of bibliometric data for internal assessments at UCL and sets some limits on what should be used.

Guidance for Tools

Within the LibGuide, you will also find guidance pages for how to use specialised services like InCites, Altmetric, and Overton to measure research impact. Additionally, the guide offers advice on using other tools that UCL does not subscribe to but may be beneficial for research support. This includes three freely available large bibliographic databases—Lens, Dimensions, and OpenAlex—which provide broader coverage than Web of Science and Scopus. It also outlines how to use a range of tools for citation-network based searching like Research Rabbit, Connected Papers, and Litmaps, as well as modern AI-supported search and summarising tools such as Scite, Keenious, and Consensus.

These are of course not the only tools available – especially among AI-supported tools, new ones are frequently released – but these are the ones students and researchers have asked us to investigate. If you would like feedback on another tool you are considering, please get in touch.

Publications data

The LibGuide also addresses broader questions about using publications data. It outlines how to download publication and metrics datasets from Web of Science, Scopus, InCites, and Altmetric, and gives some guidance on how to link datasets from different sources together. Learn more about using publications data.
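The exact linking approach depends on the export formats, but the general idea is to normalise a shared identifier (usually the DOI) and join on it. Here is a minimal sketch using pandas; the column names and DOIs below are invented placeholders, not the actual headers produced by Scopus or Web of Science exports:

```python
import pandas as pd

# Hypothetical exports: column names ("DOI", "Cited by", "Times Cited")
# are illustrative assumptions, not the exact headers each service uses.
scopus = pd.DataFrame({
    "DOI": ["10.1000/a", "10.1000/b"],
    "Cited by": [12, 3],
})
wos = pd.DataFrame({
    "DOI": ["10.1000/A", "10.1000/c"],
    "Times Cited": [10, 7],
})

# DOIs are case-insensitive, so normalise before matching.
for df in (scopus, wos):
    df["doi_key"] = df["DOI"].str.strip().str.lower()

# Outer join keeps records that appear in only one source,
# which makes coverage gaps between databases visible.
merged = scopus.merge(wos, on="doi_key", how="outer",
                      suffixes=("_scopus", "_wos"))
print(merged[["doi_key", "Cited by", "Times Cited"]])
```

Real exports need more cleaning (missing DOIs, preprint versus journal records), but a normalised-identifier join of this kind is the usual starting point.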

Additionally, the guide explains how best to interpret data drawn from UCL-specific sources such as RPS, ensuring you can make the most of the data available to you.

This new LibGuide is an important resource for anyone looking to expand their understanding of research metrics and manage their publications data. Visit the guide today to explore these tools and resources in detail.

Further support

We offer regular online and in-person training sessions as part of the Library Skills programme. Please see the Library Skills calendar for dates and bookings. There are also three self-paced online sessions available through the Library Skills Moodle.

For any enquiries about bibliometrics, please contact us at bibliometrics@ucl.ac.uk.

Get involved!

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X (formerly Twitter) and LinkedIn, and join our mailing list to be part of the conversation!

 

 

Open Science & Scholarship Awards Winners!

By Kirsty, on 26 October 2023

A huge congratulations to all of the prize winners, and a big thank you to everyone who came to our celebration yesterday! It was lovely to hear from a selection of the winning projects and celebrate together. The OOSS team and the UKRN Local Leads Sandy and Jessie had a lovely time networking with everyone.

Just in case you weren’t able to join us to hear the prize winners talk about their projects, Sandy has written short profiles of all of the winning projects below.

Category: Academic staff

Winner: Gesche Huebner and Mike Fells, BSEER, Built Environment

Gesche and Mike were nominated for the wide range of activities they have undertaken to promote open science principles and practices in the energy research community. Among other things, they have authored a paper on improving energy research, which includes a checklist for authors. They have delivered teaching sessions on open, reproducible research to their department’s PhD students and to staff at the Centre for Research into Energy Demand Solutions, inspiring several colleagues to adopt the practices. They have also created guidance on different open science practices aimed at energy researchers, including professionally filmed videos, and developed TReQ, a toolkit for improving the quality, transparency, and replicability of energy research, which they have presented at multiple conferences. Gesche and Mike also regularly publish pre-analysis plans of their own research, make data and code openly available where possible, publish preprints, and use standard reporting guidelines.

Honourable mention: Henrik Singmann, Brain Sciences

Henrik was nominated for his consistent and impactful contribution to the development of free and open-source software packages, mostly for the statistical programming language R. The most popular software tool he has developed is afex, which provides a user-friendly interface for estimating one of the most commonly used statistical methods, analysis of variance (ANOVA). afex, first released in 2012 and actively maintained since, has been cited over 1,800 times. afex is also integrated into other open-source software tools, such as JASP and jamovi, as well as teaching materials. With Quentin Gronau, Henrik also developed bridgesampling, a package for principled hypothesis testing in a Bayesian statistical framework. Since its first release in 2017, bridgesampling has already been cited over 270 times. Other examples of packages for which he is the lead developer or a key contributor are acss, which calculates the algorithmic complexity of short strings, MPTinR and MPTmultiverse, as well as rtdists and (together with his PhD student Kendal Foster) fddm. Further promoting the adoption of open-source software, Henrik also provides statistics consultation sessions in his department and uses open-source software for teaching the Master’s level statistics course.

Honourable mention: Smita Salunke, School of Pharmacy

Smita is recognised for their role in the development of The Safety and Toxicity of Excipients for Paediatrics (STEP) database, an open-access resource compiling comprehensive toxicity information on excipients. The database was established in partnership with the European and United States Paediatric Formulation Initiatives, and numerous researchers shared their data to create it. To date, STEP has circa 3,000 registered users across 44 countries and six continents. The STEP database has also been recognised as a Research Excellence Framework (REF) 2021 impact case study. Additionally, the European Medicines Agency frequently refers to the database in its communications, and the Chinese Centre for Drug Evaluation has cited the database in its recent guidelines. Furthermore, the Bill and Melinda Gates Foundation has provided funds to support the inclusion of a further 10 excipients in STEP. The development and evaluation of the STEP database have been documented in three open-access research papers. Last but not least, the database has been integrated into teaching materials, especially in paediatric pharmacy and the pharmaceutical sciences.

Category: Professional Services staff

Winner: Miguel Xochicale, Engineering Sciences and Mathematical & Physical Sciences

Miguel hosted the “Open-source software for surgical technologies” workshop at the 2023 Hamlyn Symposium on Medical Robotics, a half-day session that brought together experts in software engineering for medical imaging, academics specialising in surgical data science, and researchers at the forefront of surgical technology development. During the workshop, speakers discussed the use of cutting-edge hardware; fast prototyping and validation of new algorithms; maintaining fragmented source code for heterogeneous systems; developing high-performance medical image computing and visualisation in the operating room; and benchmarks for data quality and data privacy. Miguel subsequently convened a panel discussion, underscoring the pressing need for additional open-source guidelines and platforms to ensure that open-source software libraries are not only sustainable but also receive long-term support and are seamlessly translatable to clinical settings. Miguel has made recordings of the talks and presentations, a work-in-progress white paper that they curate, and links to forums inviting others to join their community available on GitHub.

Honourable mention: Marcus Pedersen, PHS

The Global Business School for Health (GBSH) introduced changes to its teaching style, notably a flipped classroom. Marcus taught academics in their department how to use several mostly freely available learning technologies, such as student-created podcasts, Mentimeter, and Microsoft Sway, to create an interactive flipped classroom. Marcus also collected feedback from students documenting their learning journey and experiences with flipped teaching to evaluate the use of the tools. Those insights have been presented in a book chapter (Betts, T. & Oprandi, P. (Eds.). (2022). 100 Ideas for Active Learning. OpenPress @ University of Sussex) and in talks for UCL MBA and Master’s students as well as at various conferences. The Association for Learning Technology also awarded Marcus the ELESIG Scholar Scheme 23/24 to continue their research.

Category: Students

Winner: Seán Kavanagh, Chemistry

Seán was nominated for their noteworthy contribution to developing user-friendly open-source software for the computational chemistry and physics research community. They have developed several codes during their PhD, such as doped, ShakeNBreak, and vaspup2.0, for which they are the lead developer, as well as PyTASER and easyunfold, for which they are a co-lead developer. Seán focuses not only on efficient implementation but also on user-friendliness, with comprehensive documentation and tutorials. They have produced comprehensive video walkthroughs of the codes and the associated theories, amassing over 20,000 views on YouTube and SpeakerDeck. It is important to note that software development is not the primary goal of Seán’s PhD research (which focuses on characterising solar cell materials), so their dedication to top-quality open-source software development is truly commendable. Additionally, Seán has consistently shared the data behind all their publications and actively encourages open-access practices in their collaborations and mentorship roles, having helped others make their data available online and built functionality into their codes to save outputs in transferable and interoperable data formats.

Honourable mention: Julie Fabre, Department of Neuromuscular Diseases

Julie is recognised for developing the open-source toolbox bombcell, which automatically assesses large amounts of data collected simultaneously from hundreds of neurons (i.e., groups of spikes). This tool considerably reduces labour per experiment and enables long-term neuron recording, which was previously intractable. As bombcell has been released under the open-source copyleft GNU General Public License v3, all future derived work will also be free and open source. Bombcell has already been used in another open-source toolbox with the same licence, UnitMatch. The toolbox’s code is extensively documented, and Julie adopted the Open Neurophysiology Environment, a standardised data format that enables quick understanding and loading of data files. In 2022, Julie presented bombcell in a free online course attended by over 180 people, and the recorded video has since been viewed over 800 times. Bombcell is currently in regular use in a dozen labs across Europe and the United States. It has already been used in two peer-reviewed publications and in two manuscripts being submitted for publication, with more studies underway.

Honourable mention: Maxime Beau, Division of Medicine

Maxime is recognised for leading the development of NeuroPyxels, the first open-source library for analysing Neuropixels data in Python. NeuroPyxels, hosted in a public GitHub repository and licensed under the GNU General Public License, is actively used across several neuroscience labs in Europe and the United States (18 users have already forked the repository). Furthermore, NeuroPyxels relies on a widely accepted neural data format; this built-in compatibility with community standards ensures that users can easily borrow parts of NeuroPyxels and seamlessly integrate them with their own applications. NeuroPyxels has also been a great teaching medium in several summer schools. Maxime has been a teaching assistant at the “Paris Spring School of Imaging and Electrophysiology” for three years, the FENS course “Interacting with Neural Circuits” at Champalimaud for two years, and the UCL Neuropixels course for three years, where NeuroPyxels has been an invaluable tool for getting students started with analysing neural data in Python.

Honourable mention: Yukun Zhou, Centre for Medical Image Computing

Yukun was nominated for developing open-source software for analysing images of the retina. The algorithm, termed AutoMorph, consists of an entire pipeline from image quality assessment to image segmentation to feature extraction in tabular form. A strength of AutoMorph is that it was developed using openly available data, so its underlying code can be easily reproduced and audited by other research groups. Although only published a year ago, AutoMorph has already been used by research groups on four continents and has led to three new collaborations with Yukun’s research group at UCL. Moreover, AutoMorph has been run on the entire retinal image dataset in the UK Biobank study, with the resulting features soon to be made available to the global research community. Yukun has been complimented on the ease with which any researcher can immediately download the AutoMorph tools and deploy them on their own datasets. The availability of AutoMorph has also encouraged other research groups conducting similar work to make their own proprietary systems openly available.

Category: Open resources, publishing, and textbooks 

Winner: Talia Isaacs, IOE, UCL’s Faculty of Education and Society

Talia is recognised for their diverse and continuous contributions to open access publishing. As Co-Editor of the journal Language Testing, they spearheaded SAGE’s CRediT pilot scheme, requiring standardised author contribution statements; they approved and supported the Special Issue Editors’ piloting of transparent review for a special issue on “Open science in Language Testing”, encouraged authors to submit preprints, and championed open science in Editor workshops and podcasts. Additionally, in 2016, Multilingual Matters published Talia’s edited volume as its first open access monograph. Talia also discussed the benefits of open access book publication on the publisher’s blog. As a result, the publisher launched an open access funding model, matching funding for at least one open access book a year. Further showcasing their dedication to open science, Talia archived the first corpus of patient informed consent documents for clinical trials on the UK Data Service and in UCL’s research repository, and delivered a plenary on “reducing research waste” at a British Association for Applied Linguistics event. They have also advocated for the adoption of registered reports at various speaking events, in an Editorial Board presentation, and in a forthcoming article, editorial, and social media campaign.

Honourable mention: Michael Heinrich and Banaz Jalil, School of Pharmacy

Banaz and Michael were nominated for co-leading the development of the ConPhyMP-Guidelines. Ethnopharmacology is a flourishing field of medical and pharmaceutical research; however, results are often non-reproducible. The ConPhyMP-Guidelines are a new tool that defines how to report the chemical characteristics of medicinal plant extracts used in clinical, pharmacological, and toxicological research. The paper presenting the guidelines is widely used (1,613 downloads and 8,621 views since September 2022). An online tool, launched in August 2023 and accessible via the website of the Society for Medicinal Plant and Natural Product Research (GA), facilitates the completion of the checklist. Specifically, the tool guides researchers in selecting the most relevant checklists for conducting and reporting research accurately and completely.

Honourable mention: Talya Greene, Brain Sciences 

Talya is recognised for leading the creation of a toolkit that enables traumatic stress researchers to move towards more FAIR (Findable, Accessible, Interoperable, and Reusable) data practices. This project is part of the FAIR theme within the Global Collaboration on Traumatic Stress. Two main milestones have been achieved so far: 1) in collaboration with Bryce Hruska, Talya has collated existing resources that the traumatic stress research community can use to learn about and improve their FAIR data practices; 2) Talya also collaborated with Nancy Kassam-Adams on an international survey of traumatic stress researchers’ attitudes and practices regarding FAIR data, in order to identify barriers to and facilitators of data sharing and reuse. The study findings have been accepted for publication in the European Journal of Psychotraumatology. Talya has also presented the FAIR toolkit and the survey findings at international conferences (e.g., the International Society for Traumatic Stress Studies Annual Conference and the European Society for Traumatic Stress Studies Biennial Conference).

Save the Date: UCL Open Science Conference 2022

By Kirsty, on 23 February 2022

We are pleased to announce that the UCL Open Science Conference 2022 will take place on 6 and 7 April 2022. As last year, the doors will be open to all, and we are looking forward to seeing you!

The programme design is in its final stages but across the two days we will be presenting a combination of online and in person sessions across a variety of themes:

Wednesday 6th April

Morning session (10am – 12.30pm): Online

  • What does Open Science mean to me? – Panel discussion
  • Kickstart your research with technology and Open Software – Series of talks to introduce technical tools for everyone!

Afternoon session (1.30 – 4pm): In Person – UCL campus

  • How does Citizen Science change us?

Thursday 7th April

Morning session (10am – 12.30pm): Online

  • UKRI Town Hall – Discussion hosted by David Price (UCL VP Research) and featuring Sir Duncan Wingham and Rachel Bruce
  • Open in the Global South – Series of talks on the theme, featuring Sally Rumsey and Ernesto Priego

Registration will be opening soon, but please save the date and watch this space!

Research Data at UCL – meet the teams!

By Kirsty, on 14 February 2022

Welcome to Love Data Week!

You have probably heard of the work of the Research Data Management Team, who support you with decisions about how to handle the data you work with, use, or generate, from the planning stage of your project through to the long-term preservation of your data. Good data management practices are essential to meet UCL standards of research integrity.

To summarise, planning Research Data Management effectively helps you to ensure data quality, minimise risks, save time and comply with legal, ethical, institutional and funders’ requirements. The RDM team can guide you in the creation of your Data Management Plan, read and assess your plans when complete as well as advise you throughout the research process.

Contact: lib-researchsupport@ucl.ac.uk.

Outside the RDM team, there are a range of teams across the university that can support you.

The Data Protection and Freedom of Information Team is responsible for providing advice to UCL on data protection issues and handling statutory data protection and freedom of information requests. Led by UCL’s Data Protection Officer, the team sits in the Office of the General Counsel and works very closely with the Legal Services team and the Information Services Group.

All research proposals that involve personal data must be registered with the Data Protection Office before processing begins. Further information, including FAQs and guidance notes, is also available.

Contact: data-protection@ucl.ac.uk.

The UCL Research Ethics Committee (REC) and Research Ethics Officers perform an important function, assessing all applications submitted for ethical review and approval. The research ethics team must ensure that all applications have rigorously considered any ethical implications arising from the proposed research design, methodology, conduct, dissemination, future use, and data sharing and linkage, and that how these will be managed is carefully explained within the research plan. Ethical review of data management and security is a fundamental component of the procedure, and researchers must demonstrate a strategy for data storage, handling of sensitive data, data retention, and sharing. The following points are frequently requested:

  1. What type of data will you collect and how will you describe them?
  2. How will you store and keep your data secure?
  3. Will you be allowed to give access to your data once the project is completed? Who will be able to access them, under what conditions and for how long?

More information about research ethics, data protection for researchers and data management tools for handling sensitive and personal/special category data is available online.

Contact: ethics@ucl.ac.uk

The Research Contracts Team sits within Research and Innovation Services. The team assists academics by putting in place appropriate agreements for sponsored research on behalf of UCL. This includes reviewing and drafting agreements, providing advice to academics and departments, and negotiating acceptable terms. It also covers material transfer agreements, of which data, including personal and pseudo-anonymised data, are part. There are current exceptions to the team’s remit, which include clinical trial agreements, EU grants and contracts, procurement agreements, and consultancy.

For queries please contact:

Visit their website for more information.

The Information Governance & Compliance Team is part of the UCL Information Security Team, with a focus on research compliance. It provides support for the compliance aspects of data applications, such as DARS and CAG for NHS data, as well as applications to the Department for Education. The team also has experience with a wide range of requirement sets, from commercial organisations through to public services. For data that falls outside formal external agreements, for example data collected directly, it can offer suitable information governance advice aligned with the Information Commissioner’s Office Accountability Framework. Most requirements can be met by using the UCL Data Safe Haven (DSH).

For research that cannot use the DSH, the team can help determine suitable technical and organisational measures. It also manages access to the ONS Secure Research Service (see the process section on the left-hand side of the linked page).

Contact: infogov@ucl.ac.uk

The UCL/UCLH Joint Research Office (JRO) provides research management and governance support for clinical research studies that take place across University College London and/or UCL Hospitals NHS Foundation Trust (UCLH). Support and guidance are provided to researchers wishing to conduct clinical research which recruits NHS patients and/or uses their tissue or their data.

This includes any clinical research that requires a formal ‘Sponsor’ as defined by the UK Policy Framework for Health and Social Care Research (2017), the Medicines for Human Use (Clinical Trials) Regulations 2004 and subsequent amendments, and the Medical Devices Regulations. Sponsor authorisation for these studies is provided by the JRO or one of the UCL clinical trials units (CTUs).

The JRO consists of specialist teams who interface with colleagues across UCL/UCLH to support researchers through the research process. This includes guiding researchers through the approvals processes (e.g. NHS REC/MHRA), research contracting, research finance, regulations and compliance, study set-up and conduct, and data management.

More information about the JRO and how to get in touch can be found on the website.

And finally, the Research Integrity Team oversees and supports a broad set of research integrity initiatives at UCL, ensuring compliance with the Concordat to Support Research Integrity and supporting UCL in pursuing ‘a responsible research agenda’ (UCL 2019 Research Strategy, Cross-cutting Theme A). This includes coordinating periodic audits of UCL’s adherence to research integrity standards, leading on policy matters relating to research integrity, and maintaining frameworks for supporting integrity in research, such as the Statement on Research Integrity, the Framework for Research Integrity, and the Code of Conduct for Research. The team has also led on the development of training for staff and students and provides advice and advocacy across UCL.

Who knew there were so many wonderful places to get support and information for your research data journey! If you are reading this during Love Data Week, don’t forget that we are hosting a Research Data Clinic with members of these teams to answer your questions on Thursday 17 February 2022 at 10.30am – register your interest on the form and we will send you all the information. After Love Data Week, contact the teams directly, comment below, or reach out on Twitter!

Copyright and Text & Data mining – what do I need to know?

By Kirsty, on 6 July 2021

Text and Data Mining (TDM) is a broad term used to cover any advanced techniques for computer-based analysis of large quantities of data of all kinds (numbers, text, images etc). It is a crucial tool in many areas of research, including notably Artificial Intelligence (AI). TDM can be used to reveal significant new facts, relationships and insights from the detailed analysis of vast amounts of data in ways which were not previously possible. An example would be mining medical research literature to investigate the underlying causes of health issues and the efficacy of treatments.
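At its simplest, the kind of computational analysis described above can be illustrated with a term-frequency count over a small corpus. A purely illustrative sketch in Python follows; the documents are invented placeholders, not real literature:

```python
import re
from collections import Counter

# A toy corpus standing in for the "vast amounts" of text a real
# TDM project would mine. These sentences are invented examples.
documents = [
    "Aspirin reduced headache severity in the trial.",
    "The trial found aspirin effective for headache relief.",
    "Placebo showed no effect on headache outcomes.",
]

# Tokenise each document into lowercase words and tally them.
counts = Counter()
for doc in documents:
    counts.update(re.findall(r"[a-z]+", doc.lower()))

# The most frequent terms hint at what the corpus is about.
print(counts.most_common(5))
```

Real TDM pipelines go far beyond counting, into entity extraction, relationship mining, and model training, but each of them starts by copying the source material into a form a program can process, which is exactly the act the copyright exception permits.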

The importance of having copyright exceptions in place to facilitate TDM arises from the fact that the swathes of material which need to be mined are often protected by copyright. That would be true for example of “literary works” of all kinds and of images in many cases. It is frequently the case that researchers will have lawful access to the material but will be prevented from applying TDM techniques because copying the material onto the required computer platform risks legal action for infringement on the part of the copyright owners. “Copying” is of course one of the acts restricted by copyright law and in general the greater the amount and variety of material, the greater the copyright risk.

It is worth remembering that when the Government created an exception for Text and Data Mining in 2014, it meant that the UK was ahead of the game. Other countries did not generally have an exception in their legislation at that time. Since then, other jurisdictions have caught up and, in some cases overtaken the UK. Cutting edge research is a highly competitive area and researchers working in a country which benefits from a generous TDM exception will have a distinct advantage.

The existing exception is still significant from the Open Science perspective in enabling research projects where computer analysis of large quantities of copyright-protected material is required, particularly in the context of AI.

Let’s take a closer look at the UK TDM exception and what it allows us to do, before comparing it briefly with the more recent EU exceptions. The UK exception is to be found in Section 29A of the Copyright, Designs and Patents Act 1988.

What does the exception allow us to do?

Copying copyright-protected works in order to carry out “text and data analysis” (“computational analysis” in the wording of the exception). The need to copy arises because researchers must have the material to be analysed on a specific platform in order to carry out the analysis. The need for the exception then arises because, without it, the researcher would require permission from the owner of copyright in each item. Without permission (or an exception), the researchers would be infringing copyright by copying a vast swathe of protected material. That in turn would often make the research impractical to carry out.

Who may do this?

Absolutely anyone: the exception says “a person.” This is wonderfully broad and one of the more favourable aspects of the UK exception. For example, you don’t need to be working for or studying at a particular type of institution to benefit from the exception.

Are there conditions?

You must have lawful access to the material. A prime example would be the text of academic journals. We have lawful access to large numbers of e-journals because UCL Library subscribes to them. The exception would allow a UCL researcher to download large amounts of content from e-journals to carry out detailed analysis using specialised tools. It is important to note that the exception cannot be overridden by contract terms. It follows that a term in an e-journal contract seeking to prevent TDM would have no force, in circumstances where the exception applies. This makes the exception a much more useful tool than it would otherwise be.

As you might expect the copies made for TDM purposes may not be used for other purposes, shared etc under the exception.

Significantly, the analysis must be “…for the sole purpose of research for a non commercial purpose.” This is a major restriction, which would rule out many situations where TDM might be used, for example research by a pharmaceutical company developing new drugs which will be marketed commercially. A major issue with the exception is that it can be unclear at what point “non-commercial” shades into “commercial.” A project which starts out as academic research may take on commercial significance down the line and a piece of research with no commercial aspects may be funded by commercial sponsors. It is an important constraint in the legislation which can also be difficult to be sure about in real life situations. It can stand in the way of joint projects by HEIs and commercial organisations.

Still, in situations where we can claim there is no commercial aspect to the research, the exception is potentially very useful. In addition to material which is already digital, it can cover projects that require digitising copyright-protected print material for analysis. It can also be very useful where the copyright status of the source material is unclear, since, provided the exception applies, there is no need to investigate further the complexities of copyright in the material.

The new EU TDM exception or rather exceptions

The EU Directive on Copyright in the Digital Single Market (DSM Directive) offers two new exceptions, which EU member states are obliged to transpose. They can be found in Articles 3 and 4 of the Directive.

There are important differences from the UK approach in the answer to the question: who may carry out the TDM? Article 3 provides an exception which benefits two defined categories of organisations: “research organisations” and “cultural heritage organisations.” These include, for example, universities, museums and publicly funded libraries. Commercial organisations are excluded. It seems that independent researchers not associated with an organisation would also be excluded, even though their research might be “non-commercial.” In common with the UK legislation, this exception cannot be overridden by contract terms and is therefore a powerful tool. The Directive addresses the question of public-private research collaborations in its recitals (e.g. recital 11): they are not excluded from benefitting from the Article 3 exception.

Article 4 offers a separate TDM exception which is available to anyone (including commercial organisations) but which is limited in a specific way: if the rights owners explicitly reserve the right to carry out TDM within their works, then those works cannot be mined under the exception. In other words, the EU DSM Directive goes one step further than the UK by offering an exception which can be used to mine lawfully accessible works by commercial organisations (or by anyone else), but it does not apply if the rights owner has explicitly ruled out TDM. By contrast, commercial organisations would not be able to use the UK exception, unless they can claim the specific research is for a non-commercial purpose.

Guest post by Chris Holland, UCL Copyright Support Officer. For more information or advice contact: copyright@ucl.ac.uk

ORCID Updates for 2021

By Kirsty, on 14 April 2021

Over the past year, we have written a number of blog posts talking about ORCID and giving you lots of options for how you can make the best use of your ORCID, including using it to add your research outputs to RPS, and a series of ways that you can automatically populate your ORCID and save time! While all of these posts are still relevant, and we would recommend you having a look, there are a few updates that we wanted to share with you.

ORCID have recently added Data Management Plan as a new work-type you can include in your ORCID, which is great news. In addition to this, ORCID have now made it possible to record funding peer review contributions in your ORCID record by linking your ORCID to Je-S, increasing the number of work types you can add to ORCID to 44!

ORCID have also relaunched the help and support part of their website info.orcid.org to make it easier to access updates, FAQs and blog posts. I really enjoyed this recent post in which they interviewed Dr. Romero-Olivares, assistant professor at New Mexico State University, about her experiences using ORCID throughout her career and the ways that having an ORCID has made maintaining her CV easier over the years.

After this blog was published, ORCID also announced that they have started supporting CRediT – the Contributor Roles Taxonomy. This is a great step, and so keep an eye out if you have published in a journal that uses CRediT to add this to your ORCID record soon!

Finally, ORCID have released a new video tour of the ORCID record that you can see below. In addition to their previous video in our prior posts telling you about what ORCID is and its advantages, this video aims to remind you of the key features of the interface and answer a few questions you may have about how to maintain your personal ORCID record.

A Quick Tour of the ORCID Record from ORCID.

Persistent Identifiers 101

By Kirsty, on 27 July 2020

You might have heard the phrase ‘Persistent Identifier’, or even ‘PID’, in passing, but what does it actually mean?

“A persistent identifier (PID) is a long-lasting reference to a resource. That resource might be a publication, dataset or person. Equally it could be a scientific sample, funding body, set of geographical coordinates, unpublished report or piece of software. Whatever it is, the primary purpose of the PID is to provide the information required to reliably identify, verify and locate it.” – OpenAIRE

These identifiers either connect to a set of metadata describing an item, or link to the item itself.  

In 2018, the Tickell report was released. It presented independent advice about Open Access, which had implications for the world of PIDs. Adam Tickell recommended that Jisc lead a project to select and promote a range of unique identifiers for different purposes, to try and limit the amount of confusion and duplication in this area.  

The Jisc project has been in progress for the last year. They are working on what they describe as ‘priority PIDs’, which cover the following categories:

  • People 
  • Works 
  • Organisations 
  • Grants 
  • Projects 

So what are the PIDs we need to be aware of? 

People 

The primary PID for people is one that you will already be familiar with if you are a regular reader of the blog. Even if you aren’t, you have probably heard of it – it’s ORCID.  

ORCID is an open identifier for individuals that allows you to secure accurate attribution for all of your outputs. It also functions quite nicely as an online bibliography, and can be used to automatically collect and record your papers in RPS. All in all, it’s pretty useful.

If you want to know more about what you can do with ORCID, have a look at our recent blog post ‘Getting the best out of your ORCID’. All of the details about linking ORCID to RPS, and vice versa, are available on the blog and the Open Access website.

Works 

The next identifier is for works. It’s another that you have probably seen, even if you don’t know a lot about them: the DOI. DOI stands for Digital Object Identifier. It’s a unique registration number for a digital object. This could be an article or a dataset, but it could equally be an image, a book, or even a chapter in a book. DOIs are unique and persistent, which means that if your chosen journal changes publisher, you will still be able to find your article, because the DOI is independent of the publisher and will be kept up to date.

DOIs are most often acquired through a Registration Agency called Crossref, but you will also come across DataCite. Both of these services do the same job, providing and tracking DOIs, but the underlying tools are slightly different.

Did you know: if you have the DOI of a paper, an easy way to find that paper is to add https://doi.org/ to the front. The URL this creates will take you to the paper, no matter who published it. For example, 10.1080/08870446.2019.1679373 is a DOI, and https://doi.org/10.1080/08870446.2019.1679373 will take you straight to the paper.
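If you handle DOIs programmatically, the tip above is easy to automate. This is a minimal sketch (the `doi_to_url` helper is hypothetical, not part of any Crossref or DataCite tool), which assumes only that the https://doi.org/ resolver redirects a bare DOI to the publisher’s current page for the work:

```python
def doi_to_url(doi: str) -> str:
    """Turn a bare DOI into a resolvable https://doi.org/ link."""
    doi = doi.strip()
    # DOIs are often copied with a resolver prefix already attached;
    # strip any of the common ones before rebuilding the URL.
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
            break
    return "https://doi.org/" + doi

# The example DOI from the post resolves to the paper regardless of publisher:
print(doi_to_url("10.1080/08870446.2019.1679373"))
# https://doi.org/10.1080/08870446.2019.1679373
```

Because the resolver, not the publisher, owns the link, the same URL keeps working even if the journal later moves to a different platform.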

Organisations 

The Research Organisation Registry (ROR) is a new PID registry that is being created by key stakeholders, including Crossref and Jisc, to bring more detail and consistency to organisational identifiers. The definition of organisations goes beyond institutions like UCL to include any organisation that is involved in research production or management, so this can include funders, publishers, research institutes and scholarly societies.   

Grants 

Crossref is key in the identification of individual funders and in creating identifiers for research grants. Grant IDs are DOIs, but connected to grant-specific metadata such as award type, value and investigators. The intent is for funders to register each grant and provide a Grant ID, which has the potential to make tracking papers and data linked to individual projects much simpler in the long run. Several hundred grants have been registered already, mostly via Wellcome. (With thanks to Rachael Lammey for the clarification, 03/08/2020.)

Projects 

The Jisc project is supporting the Research Activity Identifier (RAiD), a project based in Australia which creates a unique identifier for a research project. The intent is for this to be the final part of a network of identifiers that will allow people, works, and institutions to be linked to their projects and funders. This will complete the chain and allow accurate attribution and accountability at every stage of the research process.

How can I get involved? 

The work being undertaken to select and support individual PIDs at each stage of the research process is a good idea, and if it works then it will be a step towards a fully interconnected, open and transparent research process. The next stage of the Jisc project is currently underway, and they are surveying all sectors of the UK research community about awareness, use, and experience of PIDs. If you want to contribute, their survey is open and has just been extended until 21 August!  

Image: diagram of the PIDs environment

Spotlight on: Kudos – helping people find, read, understand and cite your research

By Kirsty, on 3 June 2020

Kudos (growkudos.com) is not a social networking site, or yet another profile – it’s a toolkit. Kudos is a free service which exists to help you manage your profiles and social media posts more effectively to maximize visibility of your work.

Kudos allows you to claim and describe your work for a variety of audiences, from your colleagues, to potential multi-disciplinary collaborators, to the general public. It also allows each contributor to put a personal statement onto a paper, describing your part in the work and putting your own personal spin on it. For example this publication, chosen at random, has been annotated with a short summary, had an image added, and each of the contributors has added a short personal comment.

Then all you have to do is use the inbuilt tools to share to multiple sources at once. You can even generate trackable links in Kudos for items without DOIs, so that however you do share your work – via email, social media, posters, discussion groups, scholarly networks etc – you can track which of those is really helping you maximize readership.

The metrics generated by these links include the number of people you have reached, the number of views, a global breakdown (which countries is your work attracting attention in), the Altmetric score (how is your work being discussed online), citation counts for publications, and a granular breakdown of the different ways you have communicated and which of these have been most effective. A recent study has shown that explaining and sharing via Kudos takes on average 10 minutes and leads to over 20% more downloads.

Kudos pro

Kudos have recently launched a pro version of their free-to-use platform, called Kudos Pro, which extends their service beyond publications into the rest of your research. This new service allows you to create profile pages for your work, whether for a specific project or a general overview of your body of work. These pages are quick and easy to set up using a template. For example, this project, chosen at random, includes links to the profiles of the contributors and institutions, some publications, as well as images and an extensive background to the project.

You can link from these pages to relevant materials and outputs: surveys, code, data, images, and pre-prints or publications in your institutional repository, on a publisher website, on a pre-print server, or even on Kudos itself. This helps you provide a single ‘entry point’ to which you can direct people looking for more information about your work, while also enabling you to post outputs on other appropriate sites as you normally would.

Kudos Pro also includes a planning tool which can guide you through creating a communication, engagement and impact plan, helping you to identify target audiences, impact goals, and different activities that will help you achieve those goals with your project. You can also gather evidence of engagement and impact within this tool and download the plan and results for reporting, or to submit as part of a grant application to demonstrate the rigour with which you will plan and manage impact of your project.

Free access to Kudos Pro

Given that many of the usual ways researchers communicate their work are currently off limits due to the current situation (e.g. conferences, workshops, meetings with stakeholders, etc.), Kudos have opened up the Pro platform so that researchers can use it for free. You can claim your free access by signing up at https://growkudos.com/hub/projects.

Kudos are also maintaining a project of their own, collating Covid-19 research that has been annotated.