
Open@UCL Blog

Archive for the 'Guest post' Category

Text and Data Mining (TDM) and Your Research: Copyright Implications and New Website Guidance

By Rafael, on 13 May 2024

This is the second blog post in our collaborative series between the UCL Office for Open Science and Scholarship and the UCL Copyright team. Here, we continue our exploration of important aspects of copyright and its implications for open research and scholarship. In this instalment, we examine Text and Data Mining (TDM) and its impact on research, along with the associated copyright considerations.

Data processing concept illustration

Image by storyset on Freepik.

The development of advanced computational tools and techniques for analysing large amounts of data has opened up new possibilities for researchers. Text and Data Mining (TDM) is a broad term referring to a range of ‘automated analytical techniques to analyse text and data for patterns, trends, and useful information’ (Intellectual Property Office definition). TDM has many applications in academic research across disciplines.

In an academic context, the most common sources of data for TDM include journal articles, books, datasets, images, and websites. TDM involves accessing, analysing, and often reusing (parts of) these materials. As these materials are, by default, protected by copyright, there are limitations around what you can do as part of TDM. In the UK, you may rely on section 29A of the Copyright, Designs and Patents Act, a copyright exception for making copies for text and data analysis for non-commercial research. You must have lawful access to the materials (for example via a UCL subscription or via an open license). However, there are often technological barriers imposed by publishers preventing you from copying large amounts of materials for TDM purposes – measures that you must not try to circumvent. Understanding what you can do with copyright materials, what may be more problematic and where to get support if in doubt, should help you manage these barriers when you use TDM in your research.
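
To make the ‘automated analytical techniques’ side of TDM concrete, here is a minimal, hypothetical sketch of the kind of analysis involved: counting how often a set of terms appears across a locally held collection of article texts. The folder, file format and terms are placeholders rather than a real corpus, and any actual project would still need to respect the lawful-access and licence conditions described above.

    # Minimal, illustrative text-mining sketch (not a real UCL workflow):
    # count how often a set of terms appears across locally stored article texts.
    from collections import Counter
    from pathlib import Path
    import re

    TERMS = ["open access", "copyright", "machine learning"]  # illustrative terms of interest

    counts = Counter()
    for path in Path("articles").glob("*.txt"):  # hypothetical local corpus of lawfully obtained texts
        text = path.read_text(encoding="utf-8").lower()
        for term in TERMS:
            counts[term] += len(re.findall(re.escape(term), text))

    for term, n in counts.most_common():
        print(f"{term}: {n} occurrences")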

The copyright support team works with the e-resources team, the Library Skills librarians, and the Office for Open Science and Scholarship to support the TDM activities of UCL staff and students. New guidance is available on the copyright website and the TDM libguide, and addresses questions that often arise during TDM, including:

  • Can you copy journal articles, books, images, and other materials? What conditions apply?
  • What do you need to consider when sharing the outcomes of a TDM analysis?
  • What do publishers and other suppliers of the TDM sources expect you to do?

To learn more about copyright (including how it applies to TDM), see the guidance on the copyright website and the TDM libguide.

Get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, LinkedIn, and join our mailing list to be part of the conversation!

 

 

Launching today: Open Science Case Studies

By Kirsty, on 29 April 2024

Announcement from Paul Ayris, Pro-Vice Provost, UCL Library, Culture, Collections and Open Science

A close up of old leather-bound books on a shelf

How can Open Science/Open Research support career progression and development? How does the adoption of Open Science/Open Research approaches benefit individuals in the course of their career?

The UCL Open Science Office, in conjunction with colleagues across UCL, has produced a series of Case Studies showing how UCL academics can use Open Science/Open Research approaches in their plans for career development, in applications for promotion and in appraisal documents.

In this way, Open Science/Open Research practice can become part of the Research Culture that UCL is developing.

The series of Case Studies covers each of the 8 pillars of Open Science/Open Research. They can be found on a new webpage: Open Science Case Studies 4 UCL.

It is only fair that academics should be rewarded for developing their skills and adopting best practice in research and in its equitable dissemination. The Case Studies show how this can be done, and each Case Study identifies a Key Message which UCL academics can use to shape their activities.

Examples of good practice are:

  • Publishing outputs as Open Access outputs
  • Sharing research data which is used as the building block of academic books and papers
  • Creating open source software which is then available for others to re-use and develop
  • Adopting practices allied to Reproducibility and Research Integrity
  • The responsible use of Bibliometrics
  • Public Engagement: Citizen Science and Co-Production as mechanisms to deliver results

Contact the UCL Open Science Office for further information at openscience@ucl.ac.uk.

UCL open access output: 2023 state-of-play

By Kirsty, on 15 April 2024

Post by Andrew Gray (Bibliometrics Support Officer) and Dominic Allington Smith (Open Access Publications Manager)

Summary

UCL is a longstanding and steadfast supporter of open access publishing, organising funding and payment for gold open access, maintaining the UCL Discovery repository for green open access, and monitoring compliance with REF and research funder open access requirements. Research data can be made open access in the Research Data Repository, and UCL Press also publishes open access books and journals.

The UCL Bibliometrics Team have recently conducted research to analyse UCL’s overall open access output, covering both total number of papers in different OA categories, and citation impact.  This blog post presents the key findings:

  1. UCL’s overall open access output has risen sharply since 2011, flattened around 80% in the last few years, and is showing signs of slowly growing again – perhaps connected with the growth of transformative agreements.
  2. The relative citation impact of UCL papers has had a corresponding increase, though with some year-to-year variation.
  3. UCL’s open access papers are cited around twice as much, on average, as non-open-access papers.
  4. UCL is consistently the second-largest producer of open access papers in the world, behind Harvard University.
  5. UCL has the highest level of open access papers among a reference group of approximately 80 large universities, at around 83% over the last five years.

Overview and definitions

Publications data is taken from the InCites database. As such, the data is drawn from papers attributed to UCL in InCites, filtered down to only articles, reviews, conference proceedings, and letters. It is based on published affiliations to avoid retroactive overcounting in past years: papers written by new starters before they joined UCL are excluded.

The definition of “open access” provided by InCites is all open access material – gold, green, and “bronze”, a catch-all category for material that is free-to-read but does not meet the formal definition of green or gold. This will thus tend to be a few percentage points higher than the numbers used for, for example, UCL’s REF open access compliance statistics.

Data is shown up to 2021; this avoids any complications with green open access papers which are still under an embargo period – a common restriction imposed by publishers when pursuing this route – in the most recent year.

1. UCL’s change in percentage of open access publications over time

(InCites all-OA count)

The first metric is the share of total papers recorded as open access. This has grown steadily over the last decade, from under 50% in 2011 to almost 90% in 2021, with only a slight plateau around 2017-19 interrupting progress.
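
As a rough illustration of how a yearly open access share like this can be derived from a flat publication export, here is a hypothetical pandas sketch; the column names (`year`, `oa_status`) and sample rows are assumptions, not the actual InCites schema.

    # Illustrative only: yearly percentage of papers recorded as open access.
    # Column names and sample data are placeholders, not InCites fields.
    import pandas as pd

    papers = pd.DataFrame({
        "year":      [2011, 2011, 2011, 2021, 2021],
        "oa_status": ["closed", "gold", "closed", "gold", "green"],
    })

    oa_share = (
        papers.assign(is_oa=papers["oa_status"] != "closed")
              .groupby("year")["is_oa"]
              .mean()          # fraction of OA papers per year
              .mul(100)
              .round(1)        # percentage, one decimal place
    )
    print(oa_share)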

2. Citation impact of UCL papers over time

(InCites all-OA count, Category Normalised Citation Impact)

The second metric is the citation impact of UCL papers. This is significantly higher than average: the most recent figure is above 2 (which means that UCL papers receive over twice as many citations as the world average; the UK university average is ~1.45) and continues a general trend of growth over time, with some occasional variation. Higher variation in recent years is to some degree expected, as it takes time for citations to accrue and stabilise.
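
For readers unfamiliar with the metric, Category Normalised Citation Impact (CNCI) is, roughly, each paper’s citation count divided by the world-average citation count for papers of the same subject category, publication year and document type, averaged over the set of papers; the formula below is a simplified sketch of that idea rather than Clarivate’s formal definition.

    \mathrm{CNCI} \;=\; \frac{1}{N}\sum_{i=1}^{N} \frac{c_i}{e_{\,\mathrm{category}(i),\,\mathrm{year}(i),\,\mathrm{doctype}(i)}}

Here c_i is the citation count of paper i and e is the expected (world-average) citation count for its subject category, publication year and document type. On this reading, a value above 2 means the set of papers is cited more than twice as often as the world average, consistent with the figures quoted above.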

3. Relative citation impact of UCL’s closed and Open Access papers over time

(InCites all-OA count, Category Normalised Citation Impact)

The third metric is the relative citation rates compared between open access and non-open access (“closed”) papers. Open access papers have a higher overall citation rate than closed papers: the average open access paper from 2017-21 has received around twice as many citations as the average closed paper.

4. World leading universities by number of Open Access publications

(InCites all-OA metric)

Compared to other universities, UCL produces the second-highest absolute number of open access papers in the world, climbing above 15,000 in 2021, and has consistently been the second largest publisher of open access papers since circa 2015.

The only university to publish more OA papers is Harvard. Harvard typically publishes about twice as many papers as UCL annually, but for OA papers the gap narrows to about 1.5 times as many.

5. World leading universities by percentage of Open Access publications

(5-year rolling average; minimum 8000 publications in 2021; InCites %all-OA metric)

UCL’s percentage of open access papers is consistently among the world’s highest.  The most recent data from InCites shows UCL as having the world’s highest level of OA papers (82.9%) among institutions with more than 8,000 papers published in 2021, having steadily risen through the global ranks in previous years.

Conclusion

The key findings of this research are very good news for UCL, indicating a strong commitment by authors and by the university to making work available openly. Furthermore, whilst high levels of open access necessarily bring benefits relating to REF and funder compliance, the analysis also indicates that making research outputs open access leads, on average, to a greater number of citations. This provides further justification for supporting open access as crucial to communicating and sharing research outcomes as part of the UCL 2034 strategy.

Get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, LinkedIn, and join our mailing list to be part of the conversation!

 

The Predatory Paradox – book review

By Kirsty, on 29 February 2024

Guest post from Huw Morris, Honorary Professor of Tertiary Education, UCL Institute of Education. If anyone would like to contribute to future blogs, please get in touch.

Book cover: ‘The Predatory Paradox: Ethics, Politics, and Practices in Contemporary Scholarly Publishing’

Review of The Predatory Paradox: Ethics, Politics, and Practices in Contemporary Scholarly Publishing (2023). Amy Koerber, Jesse Starkey, Karin Ardon-Dryer, Glenn Cummins, Lyombe Eko and Kerk Kee. Open Book Publishers. DOI 10.11647/obp.0364.

We are living in a publishing revolution, in which the full consequences of changes to the ways research and other scholarly work are prepared, reviewed and disseminated have yet to be fully felt or understood. It is just over thirty years since the first open access journals began to appear on email groups in HTML and PDF formats.

It is difficult to obtain up-to-date and verifiable estimates of the number of journals published globally. There are no recent journal articles which assess the scale of this activity. However, recent online blog sources suggest that there are at least 47,000 journals available worldwide of which 20,318 are provided in open access format (DOAJ, 2023; WordsRated, 2023). The number of journals is increasing at approximately 5% per annum and the UK provides an editorial home for the largest proportion of these titles.

With this rapid expansion, questions have been raised about whether there are too many journals, whether they will continue to exist in their current form, and, if so, how readers and researchers can assess the quality of the editorial processes they have adopted (Brembs et al., 2023; Oosterhaven, 2015).

This new book, ‘The Predatory Paradox,’ steps into these currents of change and seeks not only to comment on developments, but also to consider what the trends mean for academics, particularly early career researchers, for journal editors and for the wider academic community. The result is an impressive collection of chapters which summarise recent debates and report the authors’ own research examining the impact of these changes on the views of researchers and crucially their reading and publishing habits.

The book is divided into seven chapters, which consider the ethical and legal issues associated with open access publishing, as well as the consequences for assessing the quality of articles and journals. A key theme in the book, as the title indicates, is tracking the development of concern about predatory publishing. Here the book mixes a commentary on the history of this phenomenon with information gained from interviews and the authors’ reflections on the impact of editorial practices on their own publication plans. In these accounts the authors demonstrate that it is difficult to tightly define what constitutes a predatory journal because peer review and editorial processes are not infallible, even at the most prestigious journals. These challenges are illustrated by the retelling of stories about recent scientific hoaxes played on so-called predatory journals and other more respected titles. These hoaxes include the submission of articles with a mix of poor research designs, bogus data, weak analyses and spurious conclusions. Building on insights derived from this analysis, the book’s authors provide practical guidance about how to avoid being lured into publishing in predatory journals and how to avoid editorial practices that lack integrity. They also survey the teaching materials used to deal with these issues in the training of researchers at the most research-intensive US universities.

One of the many excellent features of the book is that its authors practise much of what they preach. The book is available for free via open access in a variety of formats. The chapters which draw on original research provide links to the underpinning data and analysis. At the end of each chapter there is also a very helpful summary of the key takeaway messages, as well as a variety of questions and activities that can be used to prompt reflection on the text or as the basis for seminar and tutorial activities.

Having praised the book for its many fine features, it is important to note the questions it raises about defining quality research, which could have been more fully answered. The authors summarise their views about what constitutes quality research under a series of headings. Drawing on evidence from interviews with researchers in a range of subject areas, they conclude that quality research defies explicit definition. They suggest, following Harvey and Green, that it is multi-factorial and changes over time with the identity of the reviewer and reader. This uncertainty, while partially accurate, has not prevented people from rating the quality of other people’s research or limited the number of people putting themselves forward for these types of review.

As the book explains, peer review by colleagues with an expertise in the same specialism, discipline or field is an integral part of the academic endeavour. Frequently there are explicit criteria against which judgements are made, whether for grant awards, journal reviewing or research assessment. The criteria may be unclear, open to interpretation, overly narrow or overly wide, but they do exist and have been arrived at through collective review and confirmed by processes involving many reviewers.

Overall, I would strongly recommend this book and suggest that it should be required or background reading on research methods courses for doctoral and research masters programmes. For other readers who are not using this book as part of a course of study, I would also recommend reading the research assessment guidelines on research council and funding body websites and the advice to authors provided by long-established journals in their field. In addition, it is worth looking at the definitions and reports on research activity provided by Research Excellence Framework panels in the UK and their counterparts in other nations.

Get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, LinkedIn, and join our mailing list to be part of the conversation!

Getting a Handle on Third-Party Datasets: Researcher Needs and Challenges

By Rafael, on 16 February 2024

Guest post by Michelle Harricharan, Senior Research Data Steward, in celebration of International Love Data Week 2024.

ARC Data Stewards have completed the first phase of work on the third-party datasets project, aiming to help researchers better access and manage data provided to UCL by external organisations.

alt=""

The problem:

Modern research often requires access to large volumes of data generated outside of universities. These datasets, provided to UCL by third parties, are typically generated during routine service delivery or other activities and are used in research to identify patterns and make predictions. UCL research and teaching increasingly rely on access to these datasets to achieve their objectives, ranging from NHS data to large-scale commercial datasets such as those provided by ‘X’ (formerly known as Twitter).

Currently, there is no centrally supported process for research groups seeking to access third-party datasets. Researchers sometimes use departmental procedures to acquire personal or university-wide licences for third-party datasets. They then transfer, store, document, and extract the data, and take steps to minimise information risk, before using it for various analyses. The process of obtaining third-party data involves significant overhead, including contracts, information governance (IG) compliance, and finance. Delays in acquiring access to data can be a significant barrier to research. Some UCL research teams also provide additional support services such as sharing, managing access to, licensing, and redistributing specialist third-party datasets for other research teams. These teams increasingly take on governance and training responsibilities for these specialist datasets. Concurrently, the e-resources team in the library negotiates access to third-party datasets for UCL staff and students following established library procedures.

It has long been recognised that UCL’s processes for acquiring and managing third-party data are uncoordinated and inefficient, leading to inadvertent duplication, unnecessary expense, and underutilisation of datasets that could support transformative research across multiple projects or research groups. This was recognised in “Data First”, the 2019 UCL Research Data Strategy.

What we did:

Last year, the ARC Data Stewards team reached out to UCL professional services staff and researchers to understand the processes and challenges they faced regarding accessing and using third-party research datasets. We hoped that insights from these conversations could be used to develop more streamlined support and services for researchers and make it easier for them to find and use data already provided to UCL by third parties (where this is within licensing conditions).

During this phase of work, we spoke with 14 members of staff:

  • 7 research teams that manage third-party datasets
  • 7 members of professional services that support or may support the process, including contracts, data protection, legal, Information Services Division (databases), information security, research ethics and integrity, and the library.

What we’ve learned:

An important aspect of this work involved capturing the existing processes researchers use when accessing, managing, storing, sharing, and deleting third-party research data at UCL. This enabled us to understand the range of processes involved in handling this type of data and identify the various stakeholders involved—or who potentially need to be involved. In practice, we found that researchers follow similar processes to access and manage third-party research data, depending on the security of the dataset. However, as there is no central, agreed procedure to support the management of third-party datasets in the organization, different parts of the process may be implemented differently by different teams using the methods and resources available to them. We turned the challenges researchers identified in accessing and managing this type of data into requirements for a suite of services to support the delivery and management of third-party datasets at UCL.

Next steps:

 We have been working on addressing some of the common challenges researchers identified. Researchers noted that getting contracts agreed and signed off takes too long, so we reached out to the RIS Contract Services Team, who are actively working to build additional capacity into the service as part of a wider transformation programme.

Also, information about accessing and managing third-party datasets is fragmented, and researchers often don’t know where to go for help, particularly for governance and technical advice. To counter this, we are bringing relevant professional services together to agree on a process for supporting access to third-party datasets.

Finally, respondents noted that there is too much duplication of data. The costs for data are high, and it’s not easy to know what’s already available internally to reuse. In response, we are building a searchable catalogue of third-party datasets already licensed to UCL researchers and available for others to request access to reuse.

Our progress will be reported to the Research Data Working Group, which acts as a central point of contact and a forum for discussion on aspects of research data support at UCL. The group advocates for continual improvement of research data governance.

If you would like to know more about any of these strands of work, please do not hesitate to reach out (email: researchdata-support@ucl.ac.uk). We are keen to work with researchers and other professional services to solve these shared challenges and accelerate research and collaboration using third-party datasets.

Get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, and join our mailing list to be part of the conversation!

FAIR Data in Practice

By Rafael, on 15 February 2024

Guest post by Victor Olago, Senior Research Data Steward and Shipra Suman, Research Data Steward, in celebration of International Love Data Week 2024.

Image depicting the FAIR guiding principles for data resources: Findable, Accessible, Interoperable, and Reusable. Created by Sangya Pundir.

Credit: Sangya Pundir, CC BY-SA 4.0 via Wikimedia Commons

The problem:

We all know sharing is caring, and so data needs to be shared to explore its full potential and usefulness. This makes it possible for researchers to answer questions that were not the primary research objective of the initial study. The shared data also allows other researchers to replicate the findings underpinning the manuscript, which is important in knowledge sharing. It also allows other researchers to integrate these datasets with other existing datasets, either already collected or which will be collected in the future.

There are several factors that can hamper research data sharing. These might include a lack of technical skill, inadequate funding, an absence of data sharing agreements, or ethical barriers. As Data Stewards, we support appropriate ways of collecting, standardising, using, sharing, and archiving research data. We are also responsible for advocating best practices and policies on data. One such best practice is the promotion and implementation of the FAIR data principles.

FAIR is an acronym for Findable, Accessible, Interoperable and Reusable [1]. FAIR is about making data discoverable to other researchers, but it does not translate exactly to Open Data. Some data can only be shared with others once security considerations have been addressed. For researchers to use the data, a concept note or protocol must be in place to help the gatekeepers of that data understand what each data request is for, how the data will be processed, and the expected outcomes of the study or sub-study. Findability and Accessibility are ensured through metadata and by enforcing the use of persistent identifiers for a given dataset. Interoperability relates to applying standards and encodings such as ICD-10 and ICD-O-3 [2], and, lastly, Reusability means making it possible for the data to be used by other researchers.
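
To make the Findable and Accessible side more tangible, below is a hypothetical, minimal metadata record of the kind that might accompany a shared dataset; the field names and values are illustrative placeholders, not a prescribed UCL or repository schema.

    # Illustrative, minimal metadata record supporting FAIR sharing: a persistent
    # identifier, access conditions, licence and the coding standards used.
    # All field names and values are placeholders.
    dataset_metadata = {
        "title": "Example clinical dataset (illustrative)",
        "identifier": "https://doi.org/10.0000/example",   # persistent identifier (placeholder DOI)
        "description": "De-identified trial data prepared for secondary analysis.",
        "access": "On request, subject to an approved protocol or concept note",
        "licence": "Data sharing agreement required",
        "standards": ["ICD-10", "ICD-O-3"],                 # coding standards aiding interoperability
        "contact": "data-steward@example.ac.uk",            # placeholder contact address
    }

    for field, value in dataset_metadata.items():
        print(f"{field}: {value}")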

What we are doing:

We are currently supporting a data reuse project at the Medical Research Council Clinical Trials Unit (MRC CTU). This project enables the secondary analysis of clinical trial data. We use pseudonymisation techniques and prepare metadata that goes along with each data set.

Pseudonymisation helps process personal data in such a way that the data cannot be attributed to specific data subjects without the use of additional information [3]. This reduces the risks of re-identification of personal data. When data is pseudonymised, direct identifiers are dropped while potentially identifiable information is coded. Data may also be aggregated; for example, age is transformed into age groups. In some instances, data is sampled from the original distribution so that only the sample data is shared. Pseudonymised data is still personal data and must be protected in line with the GDPR [4].
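
A minimal sketch of what such a transformation might look like on a simple tabular dataset is shown below; the column names, age bands and pseudonym scheme are illustrative assumptions, not the MRC CTU’s actual pipeline.

    # Illustrative pseudonymisation sketch: drop direct identifiers, assign a
    # pseudonym, and aggregate age into age groups. Columns are placeholders.
    # Any key linking pseudonyms back to identities must be held separately and securely.
    import uuid
    import pandas as pd

    raw = pd.DataFrame({
        "name":       ["A. Smith", "B. Jones"],    # direct identifier (to be dropped)
        "nhs_number": ["111", "222"],              # direct identifier (to be dropped)
        "age":        [47, 62],
        "outcome":    ["responder", "non-responder"],
    })

    pseudo = raw.drop(columns=["name", "nhs_number"])
    pseudo.insert(0, "pseudo_id", [uuid.uuid4().hex[:8] for _ in range(len(raw))])
    pseudo["age_group"] = pd.cut(raw["age"], bins=[0, 50, 65, 120],
                                 labels=["<50", "50-64", "65+"])
    pseudo = pseudo.drop(columns=["age"])          # keep only the aggregated value
    print(pseudo)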

The metadata makes it possible for other researchers to locate and request access to reuse clinical trials data at MRC CTU. With the extensive documentation that is attached, reanalysis and/or integration with other datasets becomes possible once access is approved. Pseudonymisation and metadata preparation help to promote FAIR data.

We have so far prepared one data pack, for the RT01 study (‘A randomized controlled trial of high dose versus standard dose conformal radiotherapy for localized prostate cancer’), which is currently in the review phase and almost ready to share with requestors. Over the next few years, we hope to repeat and standardise the process for past, current and future cancer, HIV, and other trials.

References:    

  1. 8 Pillars of Open Science.
  2. Digital N: National Clinical Coding Standards ICD-10 5th Edition (2022), 5 edn; 2022.
  3. Anonymisation and Pseudonymisation.
  4. Complete guide to GDPR compliance.

Get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, and join our mailing list to be part of the conversation!

Finding Data Management Tools for Your Research Discipline

By Rafael, on 14 February 2024

Guest post by Iona Preston, Research Data Support Officer, in celebration of International Love Data Week 2024.

Various gardening tools arranged on a dark wooden background

Photo by Todd Quackenbush on Unsplash.

While there are a lot of general resources to support good research data management practices – for example UCL’s Research Data Management webpages – you might sometimes be looking for something a bit more specific. It’s good practice to store your data in a research data repository that is subject specific, where other people in your research discipline are most likely to search for data. However, you might not know where to begin your search. You could be looking for discipline-specific metadata standards, so your data is more easily reusable by academic colleagues in your subject area. This is where subject-specific research data management resources become valuable. Here are some resources for specific subject areas and disciplines that you might find useful: 

  • The Research Data Management Toolkit for Life Sciences
    This resource guides you through the entire process of managing research data, explaining which tools to use at each stage of the research data lifecycle. It includes sections on specific life science research areas, from plant sciences to rare disease data. These sections also cover research community-specific repositories and examples of metadata standards. 
  • Visual arts data skills for researchers: Toolkits
    This consists of two different tutorials covering an introduction to research data management in the visual arts and how to create an appropriate data management plan. 
  • Consortium of European Social Science Data Archives
    CESSDA brings together data archives from across Europe in a searchable catalogue. Their website includes various resources for social scientists to learn more about data management and sharing, along with an extensive training section and a Data Management Expert Guide to lead you through the data management process. 
  • Research Data Alliance for Disciplines (various subject areas)
    The Research Data Alliance is an international initiative to promote data sharing. They have a webpage with special interest groups in various academic research areas, including agriculture, biomedical sciences, chemistry, digital humanities, social science, and librarianship, with useful resource lists for each discipline. 
  • RDA Metadata Standards Catalogue (all subject areas)
    This directory helps you find a suitable metadata scheme to describe your data, organized by subject area, featuring specific schemes across a wide range of academic disciplines. 
  • Re3Data (all subject areas)
    When it comes to sharing data, we always recommend you check if there’s a subject specific repository first, as that’s the best place to share. If you don’t know where to start finding one, this is a great place to look with a convenient browse feature to explore available options within your discipline.

These are only some of the different discipline specific tools that are available. You can find more for your discipline on the Research Data Management webpages. If you need any help and advice on finding data management resources, please get in touch with the Research Data Management team on lib-researchsupport@ucl.ac.uk 

Get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, and join our mailing list to be part of the conversation!

Join us for International Love Data Week!

By Rafael, on 7 February 2024

Guest post by Iona Preston, Research Data Support Officer.

Next week (February 12-16), we’re excited to be celebrating International Love Data Week. We’ll be looking at how data is shared and reused within our UCL and academic community, highlighting the support available across UCL for these initiatives. This year’s theme, “My Kind of Data,” focuses on data equity, inclusion, and disciplinary communities. We’ll be blogging and posting on X throughout the week, so please join us to learn more.

International Love Data Week 2024 poster

Here’s a sneak preview of what’s coming up:

  • Did you know the Research Data Management team can review your data management plan and support you in publishing your data in our Research Data Repository? Find out more about our last year in review with Christiana McMahon, Research Data Support Officer.
  • Have you met any members of our Data Stewards team? James Wilson, Head of Research Data Services, will be explaining how you can collaborate with them to streamline the process of managing and preserving your data, thereby supporting reproducibility and transparency in your research.
  • Are you seeking tools to support best practices in data management for your specific discipline? We have some suggestions from Iona Preston, Research Data Support Officer.
  • You may have heard of FAIR data – but what does that mean in practice? Join Research Data Steward Shipra Suman and Senior Research Data Steward Victor Olago as they discuss projects where they’ve supported making data FAIR.
  • And, finally, to round off the week, Senior Research Data Steward Michelle Harricharan will talk about a project the Data Stewards are carrying out to better support UCL researchers in accessing and managing external datasets.

We look forward to engaging with you throughout the week and hope you enjoy learning more about research data at UCL.

And get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, and join our mailing list to be part of the conversation!

Altmetrics at UCL: one year on!

By Harry, on 29 August 2023

Guest post by Andrew Gray, Bibliometrics Support Officer

Altmetrics are the concept of “alternative metrics” – measuring the impact of research beyond the scholarly literature. This covers a wide range of sources, from social media discussions (e.g. Twitter or Facebook) to mainstream news reporting and grey literature such as policy documents. Understanding how research is reported and discussed in these venues can give us a broader picture of the impact and reach of papers than we get from traditional scholarly citations alone.

UCL has a subscription to Altmetric, the primary commercial database for this information. It covers a broad range of materials. We also subscribe to a second source, Overton, which focuses purely on policy documents and can be a helpful complement.

There are several ways in which looking at altmetrics can give us information that wouldn’t otherwise be available. For example, we can see how different audiences outwith academia are responding to research, and we can look at what they’re saying to get an idea of the kind of response.

Some of the altmetric indicators (particularly Mendeley bookmarks) seem to have a close correlation with subsequent citations and can give us an early view of what citation figures may look like six months to a year in the future.
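
As a hedged illustration of how that relationship could be checked on exported data, the sketch below computes a simple correlation between bookmark counts and later citation counts; the column names and sample figures are invented, and a correlation alone would not establish any predictive claim.

    # Illustrative only: correlate early Mendeley reader counts with later
    # citation counts for a small, made-up set of papers.
    import pandas as pd

    papers = pd.DataFrame({
        "mendeley_readers":   [5, 40, 12, 80, 3],
        "citations_one_year": [1, 15, 4, 30, 0],
    })

    r = papers["mendeley_readers"].corr(papers["citations_one_year"])  # Pearson r
    print(f"Correlation between early bookmarks and later citations: {r:.2f}")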

Lastly, tracing policy citations through Altmetric or Overton can effectively demonstrate the wider research impact, for example, for use in a funding report or application.

Looking at activity

So what data can we see? Altmetric provides an aggregated “score” for each paper, indicating an overall activity level. While this isn’t a very exact measure, it lets us identify papers with high and low activity levels.

Looking over the past few years at UCL, the most obvious thing is that discussion of research is dominated by COVID-19. It accounts for thirteen out of the fifteen most heavily discussed UCL papers overall – by comparison, were we to look at pure citation counts, COVID papers account for none of UCL’s top fifteen overall, and only perhaps four out of the top fifteen from the past few years. This very striking difference highlights how altmetrics and citations can show different things.

The colour swatches on each Altmetric ‘donut’ show how the activity is broken down. In one example paper, most of the activity is from X/Twitter (light blue), with smaller contributions from Facebook (dark blue), news media (red) and blogs (yellow). Clicking through lets us drill down to see all the activity details.

Diving into data – day by day

One thing that surprised us about Altmetric is the sheer volume of data that they make available. Reports of 100,000+ papers can be downloaded, including DOIs and PubMed IDs, making it easy to link the data to other sources such as RPS and InCites. This lets us do some analyses that wouldn’t be possible with other sources – and they can tell us something unexpected.

For example, it gives us the exact date papers were published. Looking at around 50,000 UCL papers published in 2020-22, we find that the response differs depending on the day of the week – papers published on Wednesdays and Thursdays attract above-average attention, while papers published on Tuesdays attract below-average attention.
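
A rough sketch of that kind of day-of-week breakdown is below, assuming an export with a publication date and an attention measure per paper; the column names and values are placeholders rather than the actual Altmetric export format.

    # Illustrative day-of-week breakdown: average attention grouped by the
    # weekday each paper was published. Columns and values are placeholders.
    import pandas as pd

    papers = pd.DataFrame({
        "published": pd.to_datetime(["2021-03-01", "2021-03-03",
                                     "2021-03-04", "2021-03-06"]),
        "altmetric_score": [4.0, 25.0, 18.0, 2.0],
    })

    by_day = (
        papers.assign(weekday=papers["published"].dt.day_name())
              .groupby("weekday")["altmetric_score"]
              .mean()
              .sort_values(ascending=False)
    )
    print(by_day)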

In part, it is because some of the most prestigious publications have fixed publication days – most Nature papers are released on Wednesdays, for example. These journals have a large share of high-impact papers and an excellent publicising system.

The weekends are interesting. Not many papers come out at the weekend, but the ones that do have a noticeable citation/bookmarking penalty compared to weekday ones, suggesting they are less impactful on average. And they make much less of a stir in the news media – a weekend paper is less than half as likely to get news coverage as a weekday one.

But social media has a sharp difference – Sunday papers get significantly more Twitter activity than Saturday ones. An intriguing mystery!

Using Altmetric at UCL

Altmetric and Overton are both available to any user at UCL. You simply need to log in to Altmetric using a UCL email, which will set up your user account. For Overton, you can browse the data without an individual account or set up an account to save searches and other functionalities.

We have integrated Altmetric with RPS, the central UCL publications database. Every two to four weeks, every paper in RPS since 2013 is exported, tagged with the UCL author(s) and associated departments, and uploaded into Altmetric.

This means that we can use the Altmetric dashboard to dig down into UCL outputs in some detail – we can ask it questions like “news stories in the last month referring to a piece of research published by someone in Chemistry”. It is also possible to save and circulate reports from the dashboard – this report shows the top 20 papers from Chemistry in 2023 by Altmetric Activity.

Similar functionality is not yet available for Overton, but if you would like to search for papers from a specific department, we would recommend generating a list of DOIs from InCites (or even from Altmetric itself!) and importing those as an advanced search.

We will be running introductory training sessions for both Altmetric and Overton in the coming term – please contact bibliometrics@ucl.ac.uk if you would be interested in attending these or booking a 1:1 meeting to go through the services.

Have you seen our new UCL Citizen Science website pages?

By Harry, on 15 August 2023

Guest post by Sheetal Saujani, Citizen Science Coordinator

We are pleased to launch our new and improved Citizen Science web pages on UCL’s Office for Open Science and Scholarship website. You can now access the updated content and browse what UCL is doing in this fast-growing and exciting area!

Citizen science includes a wide range of activities, and it is gaining increasing recognition among the public and within the research community. UCL recognises citizen science as a diverse practice, encompassing various forms, depths and aims of collaboration between academic and community researchers across various disciplines.

Workshop meeting

Check out our new website pages:
Check out our new website pages:

  • Defining Citizen Science: whether you call it participatory research, community action, crowdsourcing, public engagement, or anything else, have a look at our word cloud showing various activities and practices falling under one umbrella. UCL teams are collaborating on different projects and working together under a joint mission to strengthen UCL’s activities. This fosters stronger connections and more collaborative solutions.
  • Citizen Science projects: discover the broad range of innovative projects at UCL (grouped by discipline) showcasing various ways to use a citizen science approach in research. If you have a citizen science project to feature or have any questions, please contact us.
  • History of Citizen Science: explore the exciting history of citizen science, early definitions, and three relevant periods in modern science. Learn about one of the longest-running citizen science projects!
  • Types and levels of Citizen Science: read about the growth of citizen science, which has led to the development of three broad categories: ‘long-running citizen science’, ‘citizen cyberscience’, and ‘community science’. Citizen science practices can be categorised into a continuum using the ‘Doing It Together Science’ escalator model. This model focuses on individual participation levels, allowing individuals to choose the best level for their needs, interests, and free time.
  • UCL Citizen Science Certificate: find out about this high-quality, non-academic certification awarded to individuals who complete a training programme as part of the UCL Citizen Science Academy. The Certificate recognises research abilities through participation in active projects, enabling citizen scientists to influence local decisions.

The Office for Open Science and Scholarship is working to raise awareness of citizen science approaches and activities to build a support service and a community around citizen science.  We are bringing together colleagues who have run or are currently running citizen science projects, to share experiences and encourage others to do the same.

If you are interested in citizen science, we would like to hear from you, so please get in touch by email openscience@ucl.ac.uk and tell us what you need.