Open@UCL Blog

Research Support Advent Calendar 2025

By Naomi, on 1 December 2025

It’s time. For the third year in a row, we have a wonderful Advent Calendar of Research Support for you to enjoy!

We will be sharing a link each day on our Bluesky account, as well as our LinkedIn account, but don’t worry if you’re not on Bluesky or LinkedIn – the interactive calendar is embedded below for you to access at your own pace, or you can access it directly in your browser. We will also update this blog post throughout the month with an accessible version of the content.

We hope you find something here that will interest, inform and inspire you during this month of advent.

The front cover of the book published by UCL Press. It is dark blue, and in yellow text is written 'The collected works of Jeremy Bentham' at the top of the cover, then 'Essays on logic, ethics and universal grammar' in the middle, and in small yellow text at the bottom 'edited by Philip Schofield', below which is the UCL Press logo, also in yellow.

Cover image from UCL Press website.

1 December: Unwrap timeless ideas this festive season with Bentham’s open access Essays on Logic, Ethics and Universal Grammar, which publishes today. These thought-provoking essays explore reasoning, morality and language – perfect for cosy winter reflections and sparking deep conversations by the fire!

A green bauble hanging from the branches of a Christmas tree which fills the entire image. Printed on the bauble is an image of the UCL portico as well as the UCL logo.

Image by Mary Hinkley on UCL imagestore.

2 December: Nothing says Season’s Greetings like writing and sharing your data management plan!

A cartoon of Father Christmas holding a scroll with the copyright symbol on it.

Image AI-generated using prompts from Christine Daoutis.

3 December: Father Christmas has been collecting data again this year… but is his list protected by copyright? Take our online copyright Christmas quiz.

A grey background covered with half a clock which has the large hand pointing just past 3 o'clock and the short hand just past 4 o'clock. In front of this is the title 'The Chronopolitics of Life' with the subheading 'Rethinking temporalities in health and biomedicine beyond the life course' below which is a list of the editors - Nolwenn Buhler, Nils Graber, Victoria Boydell and Cinzia Greco.

Cover image from UCL Press.

4 December: End the year with a powerful read.

Publishing today, The Chronopolitics of Life is the final book of the year from UCL Press. This open access work explores how time shapes life, politics and power, offering fresh insights for reflective winter reading and inspiring conversations as the year comes to a close.

A central view of the portico looking up at it from the ground. The pillars are lit up in different colours, from blue on the right, going through purple, pink, orange, gold, green and ending with turquoise on the left. In front of this colourful façade is a beautiful Christmas tree lit up in warm white lights. Everything in the foreground of the image is in darkness which gives a more impressive effect to the lights.

Image by Alejandro Salinas Lopez on UCL imagestore.

5 December: Read about the gift of rights retention, which is now included in UCL’s updated Publications Policy, and the actions for UCL authors.

Two people sit in front of computers in a room decorated with lots of plants. One of the people, a man wearing a navy t-shirt, is pointing at their screen, showing the other person, a woman wearing a light green jumper, something on the screen.

Image by Mary Hinkley on UCL imagestore

6 December: Retraction Watch is a searchable database of over 40,000 articles that have been retracted, corrected, or flagged with an expression of concern. Search by author, title, or affiliation to ensure your research is based on trustworthy sources.

Four people are standing in front of a large interactive digital screen, which displays some hand-drawn notes in the form of a flow chart. One person is standing close to the screen with a pen in his hand but is looking towards the rest of the group who appear to be giving him some ideas or opinions and it looks as if he will continue to write some more notes on the screen.

Image by Alejandro Salinas Lopez on UCL imagestore

7 December: Looking to start or grow your Citizen Science project? UCL’s Resources Hub offers training, tools & support to help you succeed. Explore what’s available today!

Cartoon of an anthropomorphic red copyright symbol with a white beard, legs and arms, smiling and wearing a Santa hat.

Image AI-generated using prompts from Christine Daoutis

8 December: Join UCL’s Copyright Literacy community channel for a virtual mince pie and the latest copyright news!

Two people are behind a desk which has an open notebook and what appears to be elements of an experiment, as well as a computer screen. One person, a woman wearing a green cardigan and earrings which appear to be in the shape of a raspberry, is sat looking at the screen whilst the other person, a woman wearing jeans and a brown jacket as well as blue latex gloves, is standing next to her with a hand on the mouse also looking at the screen.

Image by Mary Hinkley on UCL imagestore

9 December: Refresh your Research Integrity training with the recently updated course which now includes guidance on Research Security and updates from the revised Concordat to Support Research Integrity.

A wintry, evening view of the entrance to the main UCL campus. The portico with its ten pillars is in the background, lit up in rainbow colours; there is a Christmas tree with warm white lights in front of this, and two large trees adorned with colourful lights on the left and right sides of the portico. In the foreground, the two small security buildings on either side of the entrance are lit up from within, and groups of people under umbrellas are walking along the pavement. At the far edges of the image are illuminated street lamps, and the whole effect of the image is a wet, wintry, festive feeling.

Image by James Tye on UCL imagestore

10 December: Jingle all the way…to gaining ethical approval! The Research Ethics Team can help – book a drop-in session with one of the team.

A blue-grey mug sits on a plate, along with a mince pie dusted with icing sugar, and a sprig of holly with red berries.

Image by Lidia from Pixabay

11 December: Christmas is a time for relaxation, celebration…and careful study of official documents. There are 4,000 government documents in Overton from 80 different countries on the topic of Christmas.

A Christmas tree, decorated with warm white lights and colourful baubles is in the centre of the image, in front of the Andrew Huxley building in the centre of the main UCL campus. A dark blue sky is slightly visible above the buildings, many of the lights inside the buildings are on and there are a few people along the walkway on the left hand side of the image.

Image by Mary Hinkley on UCL imagestore

12 December: Keep your rights, and wave goodbye to embargoes – next year, UCL’s updated Publications Policy will help staff use and share their own articles as soon as they’re published.

A view along the centre of a large desk with students working on laptops on either side, some wearing earphones. There are water bottles, phones and a handbag in the centre of the desk. At the far end of the room is a door, and there are windows on the right-hand side.

Image by Alejandro Salinas Lopez on UCL imagestore

13 December: Grey literature, published by non-academic institutions, provides insights from real-world practitioners. It often addresses current, pressing issues & offers data or case studies not found in academic journals. Take a look at the UCL library guide all about grey literature.

A snowy scene of the quad and Wilkins building at UCL. The sky is completely white/grey, and the ground is completely white with snow, with a few people gathered or walking across it. A few leafless trees and two small round buildings are coated in snow, and it looks like the snow is still falling.

Image by Mary Hinkley on UCL imagestore

14 December: Dashing through the snow… to the new UCL data management plan template!

Three students stand smiling and facing the camera with hot drinks in their hands and coats on. A larger group of students are standing and socialising behind them, not looking at the camera. In the background are two illuminated street lamps, as well as some purple and pink lights adorning two trees, and some windows within a building lit up with warm light.

Image by Mary Hinkley on UCL imagestore

15 December: Join the UCL Citizen Science Community! Connect, share ideas, and grow your network with your peers at UCL. Staff & students welcome – let’s make research inclusive together!

A side view of Jeremy Bentham's auto-icon located in a glass box in UCL's student centre. Bentham is in the centre of the image, seated, holding his walking stick and wearing brown trousers, black jacket and a light brown hat. Some Christmas themed graphic elements have been added to the image, in the bottom left-hand corner is an image of a pile of presents, there are images of a star, Christmas tree, presents and bauble appearing on the wall behind Bentham, and a garland of holly, berries and a red bow above his head.

Image by Mary Hinkley on UCL imagestore, edited using Canva

16 December: When philosopher Jeremy Bentham died, he bequeathed over 100,000 manuscript pages to UCL. But what do these pages contain, and how does UCL’s Bentham Project make sense of them? In the final release from UCL Press Play this year, Professor Philip Schofield explains more.

A view from above of a selection of beautifully wrapped gifts in pale blue, orange, silver and grey, tied up with ribbon. Around the pile of presents are silver baubles, pinecones with the edges painted white, rose gold ribbons and a string of silver beads.

Image by Yevhen Buzuk from Pixabay

17 December: The gift that keeps on giving – but sometimes it doesn’t give quite what we want it to. Have a look at our libguide on using generative AI for searching.

A cartoon character with a Christmas hat and a long scarf with Creative Commons symbols on it, holding a present.

Image AI-generated using prompts from Christine Daoutis

18 December: Creative Commons licences reflect the giving spirit of the season. But are you as generous as a Creative Commons licence? Complete our fun personality quiz to find out!

A dark blue bauble hanging on the branch of a Christmas tree is in focus on this image, whilst a purple bauble, other branches of the Christmas tree and coloured lights are blurred in the background. The pillars of the portico lit up in green are reflected in the blue bauble which is also coated in raindrops.

Image by Mary Hinkley on UCL imagestore

19 December: Are you a parent or carer toilet training a child? We need your help! Join the Big Toilet Project – the world’s largest toilet training study. Participate in this UCL citizen science project & help reduce plastic pollution from nappy waste.

A person wearing a red santa hat is standing facing away from the camera, looking towards the pillars of the portico at UCL's main campus, which is dark but has an image of a large snowflake projected onto it in light. On the left-hand side of the image is the edge of a low building which is decorated with icicle lights and has a window which is lit up from the inside.

Image by Mary Hinkley on UCL imagestore

20 December: Take some time to reflect on Research Transparency with UCL’s online training course on transparency and reproducibility in research.

A logo with a deep pink background and a large white triangle in the centre, with two of its corners at the top and bottom of the logo, and the other pointing to the right, in order to appear as a 'play' button. 'UCL Press Play', the title of the podcast, is written across the white triangle.

Image from UCL Press website

21 December: Make this season brighter with UCL Press Play! Explore podcasts and documentaries where brilliant minds reveal bold ideas on queer histories, neurodiversity, climate justice and more. Listen now and celebrate knowledge!

A view facing the Cruciform building from outside the Wilkins building. The sky above is grey and the night is drawing in, so lights are on inside the Cruciform building, creating a golden glow from all the windows, complementing the vibrant red of the bricks and making it seem cosy and festive. In the foreground, there are several bare trees decorated with purple and pink lights. This colour contrasts with the colour of the Cruciform building, giving the whole image a magical, enchanting quality. The area is empty of people, apart from two small figures standing between two small buildings at either side of the entrance.

Image by Mary Hinkley on UCL imagestore

22 December: Great news for UCL staff publishing articles in subscription journals next year. Even if there’s no transformative agreement with your publisher, UCL can still make your manuscript open access immediately.

A view of the Wilkins building with the Portico looking quite iconic in the centre. With its ten pillars and a UCL flag flying from the roof, the Portico looks grand against a blue sky, and in front of it sits a decorated Christmas tree reaching up to the middle of the pillars. In the foreground are blurred images of several people walking across the quad, and there are a few small marquees on the left-hand and right-hand sides, under which seem to be different food and drink stalls.

Image by James Tye on UCL imagestore

23 December: Make an ethical start to the new year! Plan your ethics applications for 2026 and check out our high-risk application deadlines.

Celebrating Open Science & Scholarship at UCL: Highlights from the Third Annual Awards Ceremony!

By Naomi, on 29 October 2025

Two rows of four people stand facing the camera, in front of a red wall. They are smiling and holding framed certificates.

Photo by Kirsty Wallis

On the afternoon of 22nd October 2025, 40 people gathered in Bentham House to celebrate the winners and honourable mentions of this year’s UCL Open Science and Scholarship Awards.

Sandy Schumann and Jessie Baldwin, the UKRN Local Network Leads at UCL, hosted the ceremony and awards were presented by David Shanks, UCL’s UKRN Institutional Lead. Sandy began by congratulating this year’s cohort – 69 applications were submitted for consideration this year, so the competition was fierce! She also thanked the judges, as well as UCL Press for sponsoring the event.

There were five categories in total, and after the awards were presented, the overall winner of each category showcased their project.

A classroom with three rows of white desks and several people sitting at these desks looking towards the front of the room where someone is standing and giving a presentation. There is a large screen on which a PowerPoint presentation is displayed with a slide reading 'Open Research Training Programme and Practice Community'

Photo by Kirsty Wallis

The first category was ‘Activities Led By Non-Academic Staff’, won by Vassilis Sideropoulos (Senior Research Technical Professional, Department of Psychology and Human Development, IOE) for his work establishing an open research training programme and practice community within the IOE. Vassilis saw the need to make open research practical and relevant, and created a programme with modular training covering topics such as Data Management and Pre-Registration. Following feedback on the initial training programme delivered between 2019 and 2023, he spent 18 months considering how to improve it, which led to a revamped programme with more applicable guidance. Alongside this, he recognised that researchers were seeking a community – a place where they could reach out to someone who could train them and respond to their questions – which led him to establish an open research practice community.

To encourage engagement with the practice of open science, an understanding of what researchers need is vital. By listening and responding to feedback, Vassilis recognised this and has created a programme that has transformed the ways in which IOE researchers engage with and understand open science.

A person is standing at the front of a classroom giving a presentation. On a large screen, a PowerPoint slide is displayed with a screenshot of an interactive map of the UK, with different criteria along the left-hand side that can be adjusted to decide where best to plant which trees across the country.

Photo by Kirsty Wallis

The winner of the second category, ‘Activities by academic staff (including post-docs) or PhD students: Open-source software/analytical tools’, was Deyu Ming (Lecturer in Mathematics and Data Analytics, School of Management, Faculty of Engineering) for the development of the open-source package ‘DGPSI’, which allows for scalable surrogate modelling of expensive computer models and model networks. In his showcase, Deyu took us on the journey of this project: from the origins of the idea in 2019, to translating it into something that others could use and publishing it on GitHub in 2020, to its subsequent appearance on the Python Package Index and on conda in 2022. But it didn’t stop there. In 2023, the package started making considerable impact through the UKRI-funded projects Net Zero Plus and ADD-TREES, which support AI-enhanced tree-planting decision tools used by DEFRA, Forest Research, the National Trust, and other stakeholders to advance the UK’s Net Zero 2050 goals.

Since 2021, there have been 19 releases of the software, and it is now 60x faster than the original. As creator, lead developer, and sole maintainer of ‘DGPSI’, Deyu has worked incredibly hard on this open-source software, and with already over 100,000 downloads, it will no doubt continue to make a resounding and long-lasting impact.

Three people stand at the front of a classroom delivering a presentation. One appears to be speaking into a microphone whilst the other two stand listening. On the screen is a PowerPoint slide reading 'Open Peer Review System for Statistical Science Undergraduate Coding Assignments'

Photo by Kirsty Wallis

The award for ‘Activities led by undergraduate or postgraduate students’ went to Yinan Chen, Eric Chen and Adelina Xie (undergraduate students at the Department of Statistical Science, Faculty of Mathematical and Physical Sciences) for developing an open peer-review system for statistical science undergraduate coding assignments as part of a UCL ChangeMakers project. The problem they set out to address was the limitation in Moodle (the learning platform used at UCL) with regard to peer review, as students could only receive general feedback on coding assignments. Since Moodle only supports the review of PDF outputs and not raw R code, there was no option for line-by-line code reviews, and they felt that collaborative learning opportunities were being missed. Their solution: GitHub and Moodle integration. This innovative hybrid approach, with GitHub’s powerful code review system and Moodle’s familiar interface, has led to a practical, accessible and scalable tool designed for students, by students.

This is a recently concluded pilot project, but it is already having significant impact. A paper is being written on it for the Journal of Open-Source Education, and it has attracted interest for presentation at the Royal Statistical Society’s education conference, which shows its potential for nationwide statistical education – testament to Yinan, Eric and Adelina’s hard work and dedication. Alongside this, their commitment to the practice of open science at such an early stage in their academic careers was inspiring to see.

A man is giving a presentation at the front of a classroom. He is pointing to the large screen on which is a screenshot of the homepage of Programming Historian website.

Photo by Kirsty Wallis

For the category ‘Activities led by academic staff (including post-docs) or PhD students: Open publishing’, the award was presented to Adam Crymble (Lecturer of Digital Humanities, Department of Information Studies, Faculty of Arts and Humanities), for the open publishing initiative ‘Programming Historian’ which he co-founded. Programming Historian offers over 250 peer-reviewed tutorials for digital humanities in English, Spanish, French, and Portuguese. Adam explained how a gap in digital skills amongst humanities professionals was the motivation for the project, and from its humble beginnings as a blog, it has become a financially self-sustaining open publisher. By offering practical applications and case studies in each tutorial, as well as ensuring translations are culturally adapted, this project has had far-reaching influence and continues to do so.

From the outset, community and collaboration have been vital in the development of Programming Historian, and Adam has worked hard to expand the project’s global community and to ensure inclusivity. This approach, alongside the use of open peer review and the promotion of open data and open-source tools, epitomises the principles of open science and was fantastic to hear about.

A man is presenting at the front of a classroom, behind him is a large screen on which is written '3DForEcoTech' in large letters, under which is an image of a forest.

Photo by Kirsty Wallis

The final category was ‘Activities by academic staff (including post-docs) or PhD students: Enhancing open science and reproducibility capacity in the academic community’, won by Martin Mokros (Lecturer in Earth Observation, Department of Geography, Faculty of Social and Historical Sciences) for his COST Action 3DForEcoTech project. Four years ago, Martin noticed the issue of scientists undertaking similar forest ecosystem research but not talking to each other about it. He wanted to standardise laser scanning technologies for forest ecology and inventory to allow for collaboration, and so launched COST Action 3DForEcoTech – the first global open-science network focused on ground-based 3D forest monitoring. With over 600 members from 50+ countries, the reach is impressive, and it is an innovative approach to scientific practice. Open science was a key motivation for the project, and it incorporates fully accessible datasets, algorithms and benchmark results, as well as open-source software and an algorithm library.

Alongside the provision of open data and tools, this project has engaged with open science by creating equitable access to knowledge and opportunities through supporting ECRs, enforcing gender balance and ensuring participation from underrepresented regions. The idea of equitable access underpins the entire concept of open science, and by making it a central tenet to the COST Action 3DForEcoTech project, Martin has provided an excellent example of how this can be done.

Each of these award winners has advocated for, harnessed and showcased open science in various fields of research and study, and we are delighted that they have received recognition with a UCL Open Science & Scholarship Award.

We are looking forward to hearing about these projects’ ongoing impact and wonder what new initiatives they might inspire!

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities.

Follow us on Bluesky, LinkedIn, and join our mailing list to be part of the conversation!


‘Who Owns Our Knowledge?’ Reflections from UCL Citizen Science and Research Data Management

By Naomi, on 23 October 2025

Guest post by Sheetal Saujani, Citizen Science Coordinator, and Christiana McMahon, Research Data Support Officer

A graphic divided into two halves, on the left is a starry night sky with the silhouette of a person looking up at it in wonder, and against the backdrop of the sky is a large version of the International Open Access Week logo which looks like an open padlock. On the right is a dark purple background with the text 'International Open Access Week' at the top with the logo, and 'Open Access Week 2025' near the bottom, below which is written 'October 20-26, 2025, #OAWeek'

Graphic from openaccessweek.org, photo by Greg Rakozy

This year’s theme for International Open Access Week 2025, “Who Owns Our Knowledge?”, asks us to reflect on how knowledge is created, shared, and controlled, and whose voices are included in that process. It’s a question that aligns closely with UCL’s approach to citizen science, which promotes openness, collaboration and equity in research.

Citizen science provides a powerful lens to examine how knowledge is co-produced with communities. It recognises that valuable knowledge comes not only from academic institutions but also from lived experience, community knowledge, and shared exploration.

Five people are sitting around a long table, and seem to be listening to one person speak. There are lots of resources laid out on the table, including sheets of paper, pens, post-it notes and posters. There is also a badge making machine, as well as a few mugs.

Photo by Sheetal Saujani, at a Citizen Science and Public Engagement workshop

Through initiatives like the UCL Citizen Science Academy and UCL Citizen Science Certificate, we support researchers and project leads to work in partnership with the public, enabling people from all backgrounds to take part in research that matters to them. These programmes are designed to be inclusive and hands-on, helping to build confidence, skills and shared responsibility.

For those of us working in academia, this theme reminds us that open access isn’t just about making papers free to read – it’s about changing how research is produced. Involving citizen scientists in forming research questions, collecting data, and interpreting findings opens up the research process itself, not just access to its outputs.

The Principles for Citizen Science at UCL emphasise respectful partnerships, transparency, and fair recognition. They reflect our belief that citizen scientists are co-creators whose insights – rooted in everyday experience and local knowledge – bring depth and relevance to academic work.

A graphic which has the acronyms 'Fair' and 'Care' in large letters, with what they stand for written under each letter: F - Findable, A - Accessible, I - Interoperable, R - Reusable and C - Collective Benefit, A - Authority to Control, R - Responsibility, E - Ethics

Graphic from gida-global.org/care

In particular, the fifth principle for Citizen Science at UCL states that CARE Principles for Indigenous Data Governance should be considered when working with marginalised communities and Indigenous groups. These principles are: Collective Benefit, Authority to Control, Responsibility, and Ethics, which remind researchers that creating knowledge from Indigenous data must be to the benefit of Indigenous Peoples, nations and communities. These Principles support Indigenous Peoples in establishing more control over their data and its use in research. The Research Data Management Team encourage staff and students to engage with the CARE Principles in addition to the FAIR principles.

So, who owns our knowledge? At UCL, we believe the answer should be: everyone. Through citizen science and its principles, we’re building a future where knowledge is created collectively, shared responsibly and made openly accessible – because it belongs to the communities that help shape it.


Open Science & Scholarship Award Winners 2025!

By Kirsty, on 6 October 2025

The UCL Open Science and Scholarship Awards are a joint programme between the UCL Office for Open Science and Scholarship (OOSS) and the local network chapter of UKRN, the UK Reproducibility Network. Together we are delighted to be running these awards for the third year and are proud to say that the quality and volume of applications has only continued to grow year on year.

We would like to invite you to join us in celebrating our award winners during the Open Access Week festivities. At the awards presentation on Wednesday 22 October we will be presenting all of the awards as well as hearing from a selection of winners and honourable mentions about their research. There will also be the opportunity to network with our winners at a reception sponsored by UCL Press.

And without further ado – our award winners! Each category has an overall winner and two honourable mentions.

Category – Activities led by non-academic staff

  • Winner: Vassilis Sideropoulos, Senior Research Technical Professional, Department of Psychology and Human Development, IOE — Leading the IOE Open Research Practice Community and an open research training programme
  • Honourable Mention: Nikoloz Sirmpilatze, Research Software Engineer, Sainsbury Wellcome Centre, Faculty of Life Sciences — Technical lead of ‘movement’, an open-source Python package for analysing animal body movements
  • Honourable Mention: Samarth Pimparkar, Research Technician, Clinical and Movement Neuroscience, Faculty of Brain Science — Contributions to building open-source resources that preserve and share valuable patient-derived material

Category – Activities led by undergraduate or postgraduate students

  • Winner: Yinan Chen, Eric Chen and Adelina Xie, undergraduate students at the Department of Statistical Science, Faculty of Mathematical and Physical Sciences — Developers of an open peer-review system for statistical science undergraduate coding assignments
  • Honourable Mention: Chaeyeon Lim, MSc student at the UCL Interaction Centre, Faculty of Life Sciences — Lead developer of NatureNest
  • Honourable Mention: Ka Ying Ivy Chan, MSc student at the Faculty of Brain Sciences — Introducing the OSF to fellow master’s students

Category – Activities by academic staff (including post-docs) or PhD students: Open-source software/analytical tools

  • Winner: Deyu Ming, Lecturer in Mathematics and Data Analytics, School of Management, Faculty of Engineering — Lead developer of ‘DGPSI’
  • Honourable Mention: Michal Ovadek, Lecturer in European Institutions, Politics and Policy, Department of Political Science, Faculty of Social and Historical Sciences — Lead developer of ‘eurlex’
  • Honourable Mention: Pietro Lubello, Research Fellow, Energy Institute, Bartlett Faculty of the Built Environment — Lead developer of the Kenya Power System Model and the Kenya Whole Energy System Model

Category – Activities led by academic staff (including post-docs) or PhD students: Open publishing

  • Winner: Adam Crymble, Lecturer of Digital Humanities, Department of Information Studies, Faculty of Arts and Humanities — Co-founder and first chair of Programming Historian
  • Honourable Mention: Anastasia Kokori, PhD student in the Astrophysics Group, Department of Physics and Astronomy, Faculty of Mathematical & Physical Sciences — Founder and coordinator of ExoClock
  • Honourable Mention: Annabelle South, Principal Research Fellow in Research Impact and Communication, MRC Clinical Trials Unit, Faculty of Population Health Sciences — Innovating how results of clinical trials are shared with participants

Category – Activities by academic staff (including post-docs) or PhD students: Enhancing open science and reproducibility capacity in the academic community

  • Winner: Martin Mokros, Lecturer in Earth Observation, Department of Geography, Faculty of Social and Historical Sciences — Chair of the COST Action 3DForEcoTech
  • Honourable Mention: Dongyi Ma, PhD student in the Connected Environments Lab, Bartlett Faculty of the Built Environment — Founder of UrbanHeatSense IoT initiative
  • Honourable Mention: Lewis Jones, NERC Independent Research Fellow, Department of Earth Sciences, Faculty of Mathematical & Physical Sciences — Founder of the Palaeoverse

 


The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities.

Follow us on Bluesky, LinkedIn, and join our mailing list to be part of the conversation!


Authorship in the Era of AI – Panel Discussion

By Naomi, on 9 July 2025

Guest post by Andrew Gray, Bibliometrics Support Officer

This panel discussion at the 2025 Open Science and Scholarship Festival was made up of three professionals with expertise in different aspects of publishing and scholarly writing, across different sectors – Ayanna Prevatt-Goldstein, from the UCL Academic Communication Centre focusing on student writing; Rachel Safer, the executive publisher for ethics and integrity at Oxford University Press, and also an officer of the Committee on Publication Ethics, with a background in journal publishing; and Dhara Snowden, from UCL Press, with a background in monograph and textbook publishing.

We are very grateful to everyone who attended and brought questions or comments to the session.

This is a summary of the discussion from all three panel members, and use of any content from this summary should be attributed to the panel members. If you wish to cite this, please do so as A. Prevatt-Goldstein, R. Safer & D. Snowden (2025). Authorship in the Era of AI. [https://blogs.ucl.ac.uk/open-access/2025/07/09/authorship-in-the-era-of-ai/]

Where audience members contributed, this has been indicated. We have reorganised some sections of the discussion for better flow.

The term ‘artificial intelligence’ can mean many things, and often a wide range of different tools are grouped under the same general heading. This discussion focused on ‘generative AI’ (large language models), and on their role in publishing and authorship rather than their potential uses elsewhere in the academic process.

Due to the length of this write-up, you can directly access each question using the following links:
1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?


1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

Among researchers, a number of surveys by publishers have suggested that 70-80% of researchers are using some form of AI, broadly defined, and a recent Nature survey suggested this is fairly consistent across different locations and fields. However, there was a difference by career stage, with younger researchers feeling it was more acceptable to use it to edit papers, and by first language, where non-English speakers were more likely to use it for this as well.

There is a sense that publishers in STEM fields are more likely to have guidance and policy for the use of AI tools; in the humanities and social sciences, this is less well developed, and publishers are still in the process of fact-finding and gathering community responses. There may still be more of a stigma around the use of AI in the humanities.

In student writing, a recent survey from HEPI found that from 2024 to 2025, the share of UK undergraduates who used generative AI for generating text had gone from a third of students to two thirds, and only around 8% said they did not use generative AI at all. Heavier users included men, students from more advantaged backgrounds, and students with English as a second or additional language.

There are some signs of variation by discipline in other research. Students in fields where writing is seen as an integral part are more concerned with developing their voice and a sense of authorship, and are less likely to use it for generating text – or at least are less likely to acknowledge it – and where they do, they are more likely to personalise the output. By comparison, students in STEM subjects are more likely to feel that they were being assessed on the content – the language they use to communicate it might be seen as less important.

[For more on this, see A. Prevatt-Goldstein & J. Chandler (forthcoming). In my own words? Rethinking academic integrity in the context of linguistic diversity and generative AI. In D. Angelov and C.E. Déri (Eds.), Academic Writing and Integrity in the Age of Diversity: Perspectives from European and North American Higher Education. Palgrave.]


2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

Students in particular may be more willing to use it as they often prioritise the ideas being expressed over the mode of expressing them, and the idea of authorship can be less prominent in this context. But at a higher level, for example among doctoral students, we find that students are concerned about their contribution and whether perceptions of their authorship may be lessened by using these tools.

A study among publishers found that the main way AI tools were being used was not to replace people at specific tasks, but to make small efficiency savings in the way people were doing them. This ties into the long-standing use of software to assist copyediting and typesetting.

Students and academics are also likely to see it from an efficiency perspective, especially those who are becoming used to working with generative AI tools in their daily lives and so are more likely to feel comfortable using them in academic and professional contexts. Academics may feel pressure to use tools like this to keep up a high rate of publication. But spending less time on a particular piece of work may trade efficiency against quality; we might also see trade-offs in the individuality and nuance of the language, and in fewer novel and outlier ideas being developed, as generative AI involvement becomes more common.

Ultimately, though, publishers struggle to monitor researchers’ use of generative AI in their original research – they are dependent on institutions training students and researchers, and on the research community developing clearer norms, and perhaps there is also a role for funders to support educating authors about best practices.

Among all users, a significant – and potentially less controversial – role for generative AI is to help non-native English speakers with language and grammar, and, to a more limited degree, with translation – though quality here varies, and publishers would generally recommend that any AI translation be checked by a human specialist. However, this has its own costs.

With English as a de facto academic lingua franca, students (and academics) who did not have it as a first language were inevitably always at a disadvantage. Support for this could be found – perhaps paying for help, perhaps friends or family or colleagues who could support language learning – but this was very much support that was available more to some students than others, due to costs or connections, and generative AI tools have the potential to democratise this support to some degree. However, this causes a corresponding worry among many students that the bar has been raised – they feel they are now expected to use these tools or else they are disadvantaged compared to their peers.


3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

In some ways, this is not a new debate. As we develop new technologies which change the way we write – the printing press, the word processor, the spell checker, the automatic translator – people have discussed how it changes ‘authorship’. But all these tools have been ways to change or develop the words that someone has already written; generative AI can go far beyond that, producing vastly more material without direct involvement beyond a short prompt.

A lot of people might treat a dialogue with generative AI, and the way they work with those outputs, in the same way as a discussion with a colleague, as a way to thrash out ideas and pull them together. We have found that students are seeing themselves shifting from ‘author’ to ‘editor’, claiming ownership of their work through developing prompts and personalising the output, rather than through having written the text themselves. There is still a concept of ownership, a way of taking responsibility for the outcome, and for the ideas being expressed, but that concept is changing, and it might not be what we currently think of as ‘authorship’.

Sarah Eaton’s work has discussed the concept of ‘Post-plagiarism’ as a way to think about writing in a generative AI world, identifying six tenets of post-plagiarism. One of those is that humans can concede control, but not responsibility; another is that attribution will remain important. This may give us a useful way to consider authorship.

In publishing, ‘authorship’ can be quite firmly defined by the criteria set by a specific journal or publisher. There are different standards in different fields, but one of the most common is the ICMJE definition, which sets out four requirements to be considered an author – substantial contribution to the research; drafting or editing the text; having final approval; and agreeing to be accountable for it. In the early discussions around generative AI tools in 2022, there was a general agreement that these could never meet the fourth criterion, and so could never become ‘authors’; they could be used, and their use could be declared, but this did not conceptually rise to the level of authorship, as the tool could not take ownership of the work.

The policy that UCL Press adopted, drawing on those from other institutions, looked at ways to identify potential responsible uses, rather than a blanket ban – which it was felt would lead to people simply not being transparent when they had used it. It prohibited ‘authorship’ by generative AI tools, as is now generally agreed; it required that authors be accountable, and take responsibility for the integrity and validity of their work; and it asked for disclosure of generative AI.

Monitoring and enforcing that is hard – there are a lot of systems claiming to test for generative AI use, but they may not work for all disciplines, or all kinds of content – so it does rely heavily on authors being transparent about how they have used these tools. They are also reliant on peer reviewers flagging things that might indicate a problem. (This also raises the potential of peer reviewers using generative AI to support their assessments – which in turn indicates the need for guidance about how they could use it responsibly, and clear indications on where it is or is not felt to be appropriate.)

Generative AI potentially has an interesting role to play in publishing textbooks, which tend to be more of a survey of a field than original thinking, but do still involve a dialogue with different kinds of resources and different aspects of scholarship. A lot of the major textbook platforms are now considering ways in which they can use generative AI to create additional resources on top of existing textbooks – test quizzes or flash-cards or self-study resources.


4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

There is a general consensus emerging among publishers that authors should be disclosing use of AI tools at the point of submission, or revisions, though where the line is drawn there varies. For example, Sage requires authors to disclose the use of generative AI, but not ‘assistive’ AI such as spell-checkers or grammar checkers. The STM Association recently published a draft set of recommendations for using AI, with nine classifications of use. (A commenter in the discussion also noted a recent proposed AI Disclosure Framework, identifying fourteen classes.)

However, we know that some people, especially undergraduates, spend a lot of time interacting with generative AI tools in a whole range of capacities, around different aspects of the study and writing process, which can be very difficult to define and describe – there may not be any lack of desire to be transparent, but it simply might not fit into the ways we ask them to disclose the use of generative AI.

There is an issue about how readers will interpret a disclosure. Some authors may worry that there is a stigma attached to using generative AI tools, and be reluctant to disclose if they worry their work will be penalised, or taken less seriously, as a result. This is particularly an issue in a student writing context, where it might not be clear what will be done with that disclosure – will the work be rejected? Will it be penalised, for example a student essay losing some marks for generative AI use? Will it be judged more sceptically than if there had been no disclosure? Will different markers, or editors, or peer-reviewers make different subjective judgements, or have different thresholds?

These concerns can cause people to hesitate before disclosing, or to avoid disclosing fully. But academics and publishers are dependent on honest disclosure to identify inappropriate use of generative AI, so may need to be careful in how they frame this need to avoid triggering these worries about more minor use of generative AI. Without honest disclosure, we also have no clear idea of what writers are using AI for – which makes it all the harder to develop clear and appropriate policies.

For student writing, the key ‘reader’ is the marker, who will also be the person to whom generative AI use is disclosed. But for published writing, once a publisher has a disclosure of AI use, they may need to decide what to pass along to the reader. Should readers be sent the full disclosure, or is that overkill? It may include things like idea generation, assistance with structure, or checking for more up-to-date references – these might be useful for the publisher to know, but might not need to be disclosed anywhere in the text itself. Conversely, something like images produced by generative AI might need to be explicitly and clearly disclosed in context.

The recent Nature survey mentioned earlier showed that there is no clear agreement among academics as to what is and isn’t acceptable use, and it would be difficult for publishers to draw a clear line in that situation. They need to be guided by the research community – or communities, as it will differ in different disciplines and contexts.

We can also go back to the pre-GenAI assumptions about what used to be expected in scholarly writing, and consider what has changed. In 2003, Diane Pecorari identified the three assumptions for transparency in authorship:

1. that language which is not signaled as quotation is original to the writer;
2. that if no citation is present, both the content and the form are original to the writer;
3. that the writer consulted the source which is cited.

There is a – perhaps implicit – assumption among readers that all three of these are true unless otherwise disclosed. But do those assumptions still hold among a community of people – current students – who are used to the ubiquitous use of generative AI? On the face of it, generative AI would clearly break all three.

If we are setting requirements for transparency, there should also be consequences for breaching them – from a publisher’s perspective, if an author has put out a generative-AI-produced paper with hallucinated details or references, the journal editor or publisher should be able to investigate and correct or retract it, exactly as would be the case with plagiarism or other significant issues.

But there is a murky grey area here – if a paper is otherwise acceptable and of sufficient quality, but does not have appropriate disclosure of generative AI use, would that in and of itself be a reason for retraction? At the moment, this is not on the COPE list of reasons for retraction – it might potentially justify a correction or an editorial note, but not outright retraction.

Conversely, in the student context, things are simpler – if it is determined that work does not belong to the student, whether that be through use of generative AI or straightforward plagiarism, then there are academic misconduct processes and potentially very clear consequences which follow from that. These do not necessarily reflect on the quality of the output – what is seen as critical is the authorship.


5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

A working group at UCL put together a first set of guidance on using generative AI in early 2023, and focused on ethics in the context of learning outcomes – what is it that students are aiming to achieve in their degree, and will generative AI help or not in that process? But ethical questions also emerged in terms of whose labour had contributed to these tools, what the environmental impacts were, and, importantly, whether students were able to opt out of using generative AI. There are no easy answers to any of these, but they are very much ongoing questions.

Recent work from MLA looking at AI literacies for students is also informative here in terms of what it expects students using AI to be aware of.


6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

There was no easy answer here and a general agreement that this is challenging for everyone – it can be very difficult to prove a negative. Increasing the level of transparency around disclosing AI use – and how much AI has been used – will help overall, but maybe not in individual cases.

Style-based detection tools are unreliable and can be triggered by normal academic or second-language writing styles. A lot of individuals have their own assumptions as to what is a ‘clear marker’ of AI use, and these are often misleading, leading to false positives and potentially false accusations. Many of the plagiarism detection services have scaled back or turned off their AI checking tools.

In publishing, a lot of processes have historically been run on a basis of trust – publishers, editors, and reviewers have not fact-checked every detail. If you are asked to disclose AI use and you do not, the system has to trust you did not use it, in the same way that it trusts you obtained the right ethical approvals or that you actually produced the results you claim. Many publishers are struggling with this, and feeling that they are still running to catch up with recent developments.

In academia, we can encourage and support students to develop their own voice in their writing. This is a hard skill to develop, and it takes time and effort, but it can be developed, and it is a valuable thing to have – it makes their writing more clearly their own. The growth of generative AI tools can be a very tempting shortcut for many people to try and get around this work, but there are really no shortcuts here to the investment of time that is needed.

There was a discussion of the possibility of authors being more transparent about their writing process to help demonstrate research integrity – for example, documenting how they select their references, in the way that systematic reviews do, or using open notebooks. This could potentially be declared in the manuscript, as a section alongside acknowledgements and funding. Students could be encouraged to keep logs of any generative AI prompts they have used and how they have handled the outputs, to be able to disclose this in case of concerns.


7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

There are definitely some journals that have adopted more restrictive policies than the general guidance from their publisher, mostly in the STEM fields. We know that many authors may not read the specific author guidelines for a journal before submitting. Potentially we could see journals highlighting these restrictions in the submission process, and requiring the authors to acknowledge they are aware of the specific policies for that journal.


8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

A substantial issue for many publishers, particularly smaller non-commercial ones, is that so much scholarly material is now released under an open-access license that makes it easily available for training generative AI; even if the licenses forbid this, it can be difficult in practice to stop it, as seen in trade publishing. It is making authors very concerned, as they do not know how or where their material will be used, and feel powerless to prevent it.

One potential way forward is reaching agreements between publishers and AI companies on licensing material and ensuring that there is some kind of remuneration. This is more practical for larger commercial publishers with more resources. There is also the possibility of sector-wide collective bargaining agreements, as has been seen with the Writers Guild of America, where writers were able to implement broader guardrails on how their work would be used.

It is clear that the current system is not weighted in favour of the original creators, and some form of compensation would be ideal, but we also need to be careful that any new arrangement doesn’t continue to only benefit a small group.

The issue of Creative Commons licensing regulating the use of material for AI training purposes was discussed – Creative Commons take the position that this use may be allowed under existing copyright law, but they are investigating the possibility of adding a way to signal the author’s position. AI training would be allowed by most of the Creative Commons licenses, but might impose specific conditions on the final model (e.g. displaying attribution, or non-commercial restrictions).

A commenter in the discussion also mentioned a more direct approach, where some sites are using tools to obfuscate artwork or building “tarpits” to combat scraping – but these can shade into being malware, so not a solution for many publishers!


9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?

Generative AI use is going to become even more prevalent and ubiquitous, and will be very much more integrated into daily life for most people. As part of that integration, ideally we would see better awareness and understanding of what it can do, and better education on appropriate use in the way that we now teach about plagiarism and citation. That education will hopefully begin at an early stage, and develop alongside new uses of the technology.

Some of our ideas around what to be concerned about will change, as well. Wikipedia was suggested as an analogy – twenty years ago we collectively panicked about the use of it by students, feeling it might overthrow accepted forms of scholarship, but then – it didn’t. Some aspects of GenAI use may simply become a part of what we do, rather than an issue to be concerned with.

There will be positive aspects of this, but also negative ones; we will have to consider how we keep a space for people who want to minimise their use of these tools, and choose not to engage with them, for practical reasons or for ethical ones, particularly in educational contexts.

There are also discussions around the standardisation of language with generative AI – as we lose a diversity of language and of expression, will we also lose the corresponding diversity of thought? Standardised, averaged language can itself be a kind of loss.

The panel concluded by noting that this is very much an evolving space, and encouraged greater feedback and collaboration between publishers and the academic community, funders, and institutions, to try and navigate where to draw the line. The only way forward will be by having these discussions and trying to agree common ground – not just on questions of generative AI, but on all sorts of issues surrounding research integrity and publication ethics.

 

Creativity in Research and Engagement: Making, Sharing and Storytelling

By Naomi, on 3 July 2025

Guest post by Sheetal Saujani, Citizen Science Coordinator in the Office for Open Science & Scholarship

A small room with desks pulled together to create a large table around which several people sit looking at someone who is stood to the right hand side of the room, clearly leading a session to which they are listening. There are double glass doors at the back of the room and on the right-hand side, behind the person standing in front of the group, there is a large screen mounted on the wall.

At the Creativity in Research and Engagement session during the 2025 Open Science and Scholarship Festival, we invited participants to ask a simple question: what if we looked at research and engagement through the lens of creativity?

Together, we explored how creative approaches can unlock new possibilities across research, public engagement, and community participation. Through talks, discussions, and hands-on activities, we discussed visual thinking, storytelling, and participatory methods – tools that help us rethink how we work and connect with others.

Why creativity?

Whether it’s communicating complex science through visual storytelling, turning data into art, or reimagining who gets to ask the research questions in the first place, creative approaches help break down barriers and make research more inclusive and impactful.

Sketchnoting

We began by learning a new skill – sketchnoting – a quick, visual way of capturing ideas with shapes, symbols, diagrams, and keywords rather than full sentences. It’s not about being artistic; it’s about clarity and connection. As we reminded participants, “Anyone can draw!”

Throughout the session, it became clear that creativity isn’t about perfection – it’s about connection, experimentation, and finding new ways to involve and inspire others in our work.

Three UCL speakers then shared how they’ve used creative methods in their research and engagement work.

Angharad Green – Turning genomic data into art

Angharad Green, Senior Research Data Steward at UCL’s Advanced Research Computing Centre, shared her work on the evolution of Streptococcus pneumoniae (the bacteria behind pneumonia and meningitis) using genomic data and experimental evolution.

What made her talk stand out was the way she visualised complex data. Using vibrant Muller plots to track changes in bacterial populations over time, she transformed dense genomic information into something accessible and visually compelling. She also ensured the visuals were accessible to people with colour blindness.

The images were so impactful that they earned a place on the cover of Infection Control & Hospital Epidemiology. Angharad’s work is a powerful example of how creative design can not only improve research communication and uncover patterns that might otherwise go unnoticed, but also prove that data can double as art and that science can be both rigorous and imaginative.

“As I looked at the Muller plots,” she said, “I started to see other changes I hadn’t noticed – how one mutation would trigger another.”

Katharine Round – Ghost Town and the art of the undirected lens

Katharine Round, a filmmaker and Lecturer in Ethnographic and Documentary Film in UCL’s Department of Anthropology, presented Ghost Town, set in the tsunami-struck city of Kamaishi, Japan. Local taxi drivers reported picking up passengers who then vanished – ghosts, perhaps, or expressions of unresolved grief.

A small room in which lots of desks are joined together to create a large table around which several people are sitting. They are facing a screen at the far end of the room, next to which someone is standing and appears to be speaking. On the table are various pieces of paper, pens, pencils, and mugs.

Katharine explored memory, myth, and trauma using a unique method: fixed cameras installed inside taxis, with no filmmaker present. This “abandoned camera” approach created a space that felt intimate and undirected, like a moving confessional booth, allowing deeply personal stories to surface.

By simply asking, “Has anything happened to you since the tsunami that you’ve never spoken about?” the project uncovered raw, unstructured truths, stories that traditional interviews might never reach.

Katharine’s work reminds us that storytelling can be an evocative form of research. By using creative, non-linear methods, she uncovered stories that traditional data collection approaches might have missed. Sometimes, the most powerful insights come when the researcher steps back, listens, and lets the story unfold on its own.

Joseph Cook – Co-creation and creativity in Citizen Science

Joseph Cook leads the UCL Citizen Science Academy at the UCL Institute for Global Prosperity.

He shared how the Academy trains and supports community members to become co-researchers in community projects that matter to them, often co-designed with local councils on topics like health, prosperity, and wellbeing.

Joseph shared a range of inspiring creative work:

  • Zines made by young citizen scientists in Tower Hamlets, including a research rap and reflections on life in the care system.
  • A silk scarf by Aysha Ahmed, filled with symbols of home and belonging drawn from displaced communities in Camden.
  • A tea towel capturing community recipes and food memories from Regent’s Park Estate, part of a project on culture and cohesion.
  • Creative exhibitions such as The Architecture of Pharmacies, exploring healthcare spaces through the lens of lived experience.

Instead of asking communities to answer predefined questions, the Academy invites people to ask their own, reframing participants as experts in their own lives.

Joseph was joined by Mohammed Rahman, a citizen scientist and care leaver, awarded a UCL Citizen Science Certificate through the Academy’s ActEarly ‘Citizen Science with Care Leavers’ programme. Through his zine and audio documentary, Mohammed shared personal insights on wellbeing, support and independence, showing how storytelling deepens understanding and drives change.

Laid out on a desk, there is a silk scarf on which are depicted small images and words. There are three people behind the desk, two are standing and one is sitting, all looking at the scarf. One of the people standing is pointing to something on the scarf and appears to be describing this to others who do not appear in the photo.

From thinking to making

After the talks, participants reflected and got creative. They explored evaluation methods like the “4Ls” (Liked, Learned, Lacked, Longed For) and discussed embedding co-design throughout projects, including evaluation, and why it’s vital to involve communities from the start.

Participants made badges, sketchnoted their reflections, and took on a “Zine in 15 Minutes” challenge, contributing to a collective zine on creativity and community.

Final reflections

Creativity isn’t an add-on – it’s essential. It helps us ask better questions, involve more people, and communicate in ways that resonate. Methods like sketchnoting, visual metaphors, zine-making, and creative media open research and engagement to a wider range of voices and experiences.

Creative outputs don’t need to be academic papers – they can be a rap, a tea towel, or a short film. Creativity sparks insight, supports co-creation, and builds meaningful connection.

Whether through drawing, storytelling, or simply asking different questions, we must continue making space for creativity – in our projects and institutions.

Find out more

Get involved!

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on Bluesky, and join our mailing list to be part of the conversation!

What might a Citizen Science approach in your research project look like?

By Harry, on 27 March 2023

Guest post by Sheetal Saujani, Citizen Science Coordinator

Have you thought about including members of the public in your research? Would you like to connect and collaborate with the community around you? Alternatively, would you like to work with project leaders to answer real-world questions and gather data?

Broadly defined, citizen science is research undertaken by members of the public, often in collaboration with academic or research institutions. It is a diverse practice, spanning many forms and aims of collaboration between academic and community researchers and a broad range of disciplines.

What are the great things about Citizen Science?

Working together as part of a community with professionals, citizen scientists can play an important part in genuine discovery, experiments, data collection and analysis. Through citizen science, any one of us can take part in extraordinary research!

We can improve our community whilst at the same time helping to provide answers to some of the big questions about the world we live in. Whether we participate in projects that measure air quality, monitor damage from storms, or track where our rubbish is going, we can help solve problems and influence a better future for our society.

The Office for Open Science and Scholarship advocates a broad approach to citizen science, so whether you call it citizen science, participatory research, community action, co-production, public engagement, or anything else, we’re all working together to strengthen UCL activities in this area!

What do Citizen Science projects look like?

Take a look at some of the exciting citizen science projects run by various research groups and departments at UCL. Some of these projects have now been completed.

And below are a few newer ones (this list is not exhaustive):

Also, if you’re interested, there are many platforms and projects happening outside of UCL (below are just a few):

  • Thousands of people across the country take part in the Natural History Museum’s crowdsourced science projects.
  • On the SciStarter website you can join and contribute to science through thousands of amazing research projects and events.
  • With more than one million volunteers, Zooniverse is one of the biggest citizen science platforms in the UK.
  • If you’re interested in Biology, Ecology or Earth Science, check out the citizen science projects run by the National Geographic Society.
  • The InSPIRES Open Platform is an online collaborative and crowdsourced database featuring many citizen-led participatory research and innovation projects.
  • Patientslikeme is an online platform where patients can share and learn from real-time, outcome-based health data and contribute to the scientific conversation surrounding thousands of diseases.
  • The Globe at Night project aims to raise awareness about light pollution and its impacts on communities. You can report your night sky brightness observations daily.

What is UCL doing around Citizen Science?

Our Office is working to raise awareness of citizen science approaches and activities, with the aim of building a support service and a community around citizen science. The plan is to bring together colleagues who’ve run or are currently running citizen science or participatory research projects to share good practices and experiences with each other, and to support and encourage others to do the same!

If you are interested in citizen science, we would really like to hear from you, so please get in touch with us via email at openscience@ucl.ac.uk and tell us what you need.