Open@UCL Blog


Authorship in the Era of AI – Panel Discussion

By Naomi, on 9 July 2025

Guest post by Andrew Gray, Bibliometrics Support Officer

This panel discussion at the 2025 Open Science and Scholarship Festival brought together three professionals with expertise in different aspects of publishing and scholarly writing, across different sectors – Ayanna Prevatt-Goldstein, from the UCL Academic Communication Centre, focusing on student writing; Rachel Safer, executive publisher for ethics and integrity at Oxford University Press and an officer of the Committee on Publication Ethics, with a background in journal publishing; and Dhara Snowden, from UCL Press, with a background in monograph and textbook publishing.

We are very grateful to everyone who attended and brought questions or comments to the session.

This is a summary of the discussion from all three panel members, and use of any content from this summary should be attributed to the panel members. If you wish to cite this, please do so as A. Prevatt-Goldstein, R. Safer & D. Snowden (2025). Authorship in the Era of AI. [https://blogs.ucl.ac.uk/open-access/2025/07/09/authorship-in-the-era-of-ai/]

Where audience members contributed, this has been indicated. We have reorganised some sections of the discussion for better flow.

The term ‘artificial intelligence’ can mean many things, and often a wide range of different tools are grouped under the same general heading. This discussion focused on ‘generative AI’ (large language models), and on their role in publishing and authorship rather than their potential uses elsewhere in the academic process.

Due to the length of this write-up, you can directly access each question using the following links:
1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

3. At one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?


1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

Among researchers, a number of surveys by publishers have suggested that 70–80% of researchers are using some form of AI, broadly defined, and a recent Nature survey suggested this is fairly consistent across locations and fields. However, there were differences by career stage, with younger researchers feeling it was more acceptable to use AI to edit papers, and by first language, with researchers whose first language is not English more likely to use it for this purpose as well.

There is a sense that publishers in STEM fields are more likely to have guidance and policy for the use of AI tools; in the humanities and social sciences, this is less well developed, and publishers are still in the process of fact-finding and gathering community responses. There may still be more of a stigma around the use of AI in the humanities.

In student writing, a recent survey from HEPI found that from 2024 to 2025, the share of UK undergraduates who used generative AI for generating text had gone from a third of students to two thirds, and only around 8% said they did not use generative AI at all. Heavier users included men, students from more advantaged backgrounds, and students with English as a second or additional language.

There are some signs of variation by discipline in other research. Students in fields where writing is seen as an integral part of the discipline are more concerned with developing their voice and a sense of authorship, and are less likely to use generative AI for producing text – or at least are less likely to acknowledge it – and where they do, they are more likely to personalise the output. By comparison, students in STEM subjects are more likely to feel that they are being assessed on the content – the language they use to communicate it might be seen as less important.

[For more on this, see A. Prevatt-Goldstein & J. Chandler (forthcoming). In my own words? Rethinking academic integrity in the context of linguistic diversity and generative AI. In D. Angelov and C.E. Déri (Eds.), Academic Writing and Integrity in the Age of Diversity: Perspectives from European and North American Higher Education. Palgrave.]


2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

Students in particular may be more willing to use generative AI as they often prioritise the ideas being expressed over the mode of expressing them, and the idea of authorship can be less prominent in this context. But at a higher level, for example among doctoral students, there is more concern about their contribution and whether perceptions of their authorship may be lessened by using these tools.

A study among publishers found that the main way AI tools were being used was not to replace people at specific tasks, but to make small efficiency savings in the way people were doing them. This ties into the long-standing use of software to assist copyediting and typesetting.

Students and academics are also likely to see it from an efficiency perspective, especially those who are becoming used to working with generative AI tools in their daily lives and so are more likely to feel comfortable using them in academic and professional contexts. Academics may feel pressure to use tools like this to keep up a high rate of publication. But spending less time on a particular piece of work may trade time saved against quality; we might also see trade-offs in the individuality and nuance of the language, and in fewer novel and outlier ideas being developed, as generative AI involvement becomes more common.

Ultimately, though, publishers struggle to monitor researchers’ use of generative AI in their original research – they are dependent on institutions training students and researchers, and on the research community developing clearer norms, and perhaps there is also a role for funders to support educating authors about best practices.

Among all users, a significant – and potentially less controversial – role for generative AI is to help non-native English speakers with language and grammar, and, to a more limited degree, with translation – though quality here varies, and publishers would generally recommend that any AI translation be checked by a human specialist. However, this has its own costs.

With English as a de facto academic lingua franca, students (and academics) who did not have it as a first language were inevitably always at a disadvantage. Support could be found – perhaps paying for help, perhaps friends, family, or colleagues who could support language learning – but such support was available to some students far more than others, due to costs or connections, and generative AI tools have the potential to democratise it to some degree. However, this causes a corresponding worry among many students that the bar has been raised – they feel they are now expected to use these tools or else be disadvantaged compared to their peers.


3. At one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

In some ways, this is not a new debate. As we develop new technologies which change the way we write – the printing press, the word processor, the spell checker, the automatic translator – people have discussed how it changes ‘authorship’. But all these tools have been ways to change or develop the words that someone has already written; generative AI can go far beyond that, producing vastly more material without direct involvement beyond a short prompt.

A lot of people might treat a dialogue with generative AI, and the way they work with those outputs, in the same way as a discussion with a colleague, as a way to thrash out ideas and pull them together. We have found that students are seeing themselves shifting from ‘author’ to ‘editor’, claiming ownership of their work through developing prompts and personalising the output, rather than through having written the text themselves. There is still a concept of ownership, a way of taking responsibility for the outcome, and for the ideas being expressed, but that concept is changing, and it might not be what we currently think of as ‘authorship’.

Sarah Eaton’s work has discussed the concept of ‘Post-plagiarism’ as a way to think about writing in a generative AI world, identifying six tenets of post-plagiarism. One of those is that humans can concede control, but not responsibility; another is that attribution will remain important. This may give us a useful way to consider authorship.

In publishing, ‘authorship’ can be quite firmly defined by the criteria set by a specific journal or publisher. There are different standards in different fields, but one of the most common is the ICMJE definition, which sets out four requirements to be considered an author – substantial contribution to the research; drafting or editing the text; having final approval; and agreeing to be accountable for it. In the early discussions around generative AI tools in 2022, there was a general agreement that these could never meet the fourth criterion, and so could never become ‘authors’; they could be used, and their use could be declared, but this did not conceptually rise to the level of authorship, as the tool could not take ownership of the work.

The policy that UCL Press adopted, drawing on those from other institutions, looked at ways to identify potentially responsible uses, rather than imposing a blanket ban – which, it was felt, would lead to people simply not being transparent when they had used these tools. It prohibited ‘authorship’ by generative AI tools, as is now generally agreed; it required that authors be accountable and take responsibility for the integrity and validity of their work; and it asked for disclosure of generative AI use.

Monitoring and enforcing that is hard – there are a lot of systems claiming to test for generative AI use, but they may not work for all disciplines, or all kinds of content – so it does rely heavily on authors being transparent about how they have used these tools. They are also reliant on peer reviewers flagging things that might indicate a problem. (This also raises the potential of peer reviewers using generative AI to support their assessments – which in turn indicates the need for guidance about how they could use it responsibly, and clear indications on where it is or is not felt to be appropriate.)

Generative AI potentially has an interesting role to play in publishing textbooks, which tend to be more of a survey of a field than original thinking, but do still involve a dialogue with different kinds of resources and different aspects of scholarship. A lot of the major textbook platforms are now considering ways in which they can use generative AI to create additional resources on top of existing textbooks – test quizzes or flash-cards or self-study resources.


4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

There is a general consensus emerging among publishers that authors should disclose use of AI tools at the point of submission or revision, though where the line is drawn varies. For example, Sage requires authors to disclose the use of generative AI, but not ‘assistive’ AI such as spell-checkers or grammar checkers. The STM Association recently published a draft set of recommendations for using AI, with nine classifications of use. (A commenter in the discussion also noted a recent proposed AI Disclosure Framework, identifying fourteen classes.)

However, we know that some people, especially undergraduates, spend a lot of time interacting with generative AI tools in a whole range of capacities, around different aspects of the study and writing process, which can be very difficult to define and describe – they may well want to be transparent, but their use simply might not fit into the ways we ask them to disclose it.

There is an issue about how readers will interpret a disclosure. Some authors may worry that there is a stigma attached to using generative AI tools, and be reluctant to disclose if they worry their work will be penalised, or taken less seriously, as a result. This is particularly an issue in a student writing context, where it might not be clear what will be done with that disclosure – will the work be rejected? Will it be penalised, for example a student essay losing some marks for generative AI use? Will it be judged more sceptically than if there had been no disclosure? Will different markers, or editors, or peer-reviewers make different subjective judgements, or have different thresholds?

These concerns can cause people to hesitate before disclosing, or to avoid disclosing fully. But academics and publishers are dependent on honest disclosure to identify inappropriate use of generative AI, so may need to be careful in how they frame this need to avoid triggering these worries about more minor use of generative AI. Without honest disclosure, we also have no clear idea of what writers are using AI for – which makes it all the harder to develop clear and appropriate policies.

For student writing, the key ‘reader’ is the marker, who will also be the person to whom generative AI use is disclosed. But for published writing, once a publisher has a disclosure of AI use, they may need to decide what to pass along to the reader. Should readers be sent the full disclosure, or is that overkill? It may include things like idea generation, assistance with structure, or checking for more up-to-date references – these might be useful for the publisher to know, but might not need to be disclosed anywhere in the text itself. Conversely, something like images produced by generative AI might need to be explicitly and clearly disclosed in context.

The recent Nature survey mentioned earlier showed that there is no clear agreement among academics as to what is and isn’t acceptable use, and it would be difficult for publishers to draw a clear line in that situation. They need to be guided by the research community – or communities, as it will differ in different disciplines and contexts.

We can also go back to pre-GenAI assumptions about what used to be expected in scholarly writing, and consider what has changed. In 2003, Diane Pecorari identified three assumptions underlying transparency in authorship:

1. that language which is not signaled as quotation is original to the writer;
2. that if no citation is present, both the content and the form are original to the writer;
3. that the writer consulted the source which is cited.

There is a – perhaps implicit – assumption among readers that all three of these are true unless otherwise disclosed. But do those assumptions still hold among a community of people – current students – who are used to the ubiquitous use of generative AI? On the face of it, generative AI would clearly break all three.

If we are setting requirements for transparency, there should also be consequences for breaching them – from a publisher’s perspective, if an author has put out a paper produced by generative AI with hallucinated details or references, the journal editor or publisher should be able to investigate and correct or retract it, exactly as would be the case with plagiarism or other significant issues.

But there is a murky grey area here – if a paper is otherwise acceptable and of sufficient quality, but does not have appropriate disclosure of generative AI use, would that in and of itself be a reason for retraction? At the moment, this is not on the COPE list of reasons for retraction – it might potentially justify a correction or an editorial note, but not outright retraction.

Conversely, in the student context, things are simpler – if it is determined that work does not belong to the student, whether that be through use of generative AI or straightforward plagiarism, then there are academic misconduct processes and potentially very clear consequences which follow from that. These do not necessarily reflect on the quality of the output – what is seen as critical is the authorship.


5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

A working group at UCL put together a first set of guidance on using generative AI in early 2023, focusing on ethics in the context of learning outcomes – what is it that students are aiming to achieve in their degree, and will generative AI help or not in that process? But ethical questions also emerged in terms of whose labour had contributed to these tools, what the environmental impacts were, and, importantly, whether students were able to opt out of using generative AI. There are no easy answers to any of these, but they are very much ongoing questions.

Recent work from the MLA on AI literacies for students is also informative here, in terms of what it expects students using AI to be aware of.


6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

There was no easy answer here and a general agreement that this is challenging for everyone – it can be very difficult to prove a negative. Increasing the level of transparency around disclosing AI use – and how much AI has been used – will help overall, but maybe not in individual cases.

Style-based detection tools are unreliable and can be triggered by normal academic or second-language writing styles. A lot of individuals have their own assumptions as to what is a ‘clear marker’ of AI use, and these are often misleading, leading to false positives and potentially false accusations. Many of the plagiarism detection services have scaled back or turned off their AI checking tools.

In publishing, a lot of processes have historically been run on a basis of trust – publishers, editors, and reviewers have not fact-checked every detail. If you are asked to disclose AI use and you do not, the system has to trust you did not use it, in the same way that it trusts you obtained the right ethical approvals or that you actually produced the results you claim. Many publishers are struggling with this, and feeling that they are still running to catch up with recent developments.

In academia, we can encourage and support students to develop their own voice in their writing. This is a hard skill to develop, and it takes time and effort, but it can be developed, and it is a valuable thing to have – it makes their writing more clearly their own. The growth of generative AI tools can be a very tempting shortcut for many people to try and get around this work, but there are really no shortcuts here to the investment of time that is needed.

There was a discussion of the possibility of authors being more transparent about their writing process to help demonstrate research integrity – for example, documenting how they select their references, in the way that a systematic review does, or using open notebooks. This could potentially be declared in the manuscript, as a section alongside acknowledgements and funding. Students could be encouraged to keep logs of any generative AI prompts they have used and how they handled the outputs, so that they can disclose this in case of concerns.


7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

There are definitely some journals that have adopted more restrictive policies than the general guidance from their publisher, mostly in the STEM fields. We know that many authors may not read the specific author guidelines for a journal before submitting. Potentially we could see journals highlighting these restrictions in the submission process, and requiring the authors to acknowledge they are aware of the specific policies for that journal.


8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

A substantial issue for many publishers, particularly smaller non-commercial ones, is that so much scholarly material is now released under an open-access license that makes it easily available for training generative AI; even if the licenses forbid this, it can be difficult in practice to stop it, as seen in trade publishing. It is making authors very concerned, as they do not know how or where their material will be used, and feel powerless to prevent it.

One potential way forward is agreements between publishers and AI companies, licensing material and ensuring that there is some kind of remuneration. This is more practical for larger commercial publishers with more resources. There is also the possibility of sector-wide collective bargaining agreements, as has been seen with the Writers Guild of America, where writers were able to implement broader guardrails on how their work would be used.

It is clear that the current system is not weighted in favour of the original creators, and some form of compensation would be ideal, but we also need to be careful that any new arrangement doesn’t continue to only benefit a small group.

The issue of Creative Commons licensing regulating the use of material for AI training purposes was discussed – Creative Commons take the position that such use may potentially be allowed under existing copyright law, but they are investigating the possibility of adding a way to signal the author’s position. AI training would be allowed by most of the Creative Commons licenses, but might impose specific conditions on the final model (e.g. displaying attribution or non-commercial restrictions).

A commenter in the discussion also mentioned a more direct approach, where some sites are using tools to obfuscate artwork or building “tarpits” to combat scraping – but these can shade into being malware, so they are not a solution for many publishers!


9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?

Generative AI use is going to become even more prevalent and ubiquitous, and will be very much more integrated into daily life for most people. As part of that integration, ideally we would see better awareness and understanding of what it can do, and better education on appropriate use in the way that we now teach about plagiarism and citation. That education will hopefully begin at an early stage, and develop alongside new uses of the technology.

Some of our ideas around what to be concerned about will change, as well. Wikipedia was suggested as an analogy – twenty years ago we collectively panicked about the use of it by students, feeling it might overthrow accepted forms of scholarship, but then – it didn’t. Some aspects of GenAI use may simply become a part of what we do, rather than an issue to be concerned with.

There will be positive aspects of this, but also negative ones; we will have to consider how we keep a space for people who want to minimise their use of these tools, and choose not to engage with them, for practical reasons or for ethical ones, particularly in educational contexts.

There are also discussions around the standardisation of language with generative AI – as we lose a diversity of language and of expression, will we also lose the corresponding diversity of thought? Standardised, averaged language can itself be a kind of loss.

The panel concluded by noting that this is very much an evolving space, and encouraged greater feedback and collaboration between publishers and the academic community, funders, and institutions, to try and navigate where to draw the line. The only way forward will be by having these discussions and trying to agree common ground – not just on questions of generative AI, but on all sorts of issues surrounding research integrity and publication ethics.

 

Creativity in Research and Engagement: Making, Sharing and Storytelling

By Naomi, on 3 July 2025

Guest post by Sheetal Saujani, Citizen Science Coordinator in the Office for Open Science & Scholarship

[Image: a workshop session – participants seated around joined desks, listening to a facilitator standing at the front of the room.]

At the Creativity in Research and Engagement session during the 2025 Open Science and Scholarship Festival, we invited participants to ask a simple question: what if we looked at research and engagement through the lens of creativity?

Together, we explored how creative approaches can unlock new possibilities across research, public engagement, and community participation. Through talks, discussions, and hands-on activities, we discussed visual thinking, storytelling, and participatory methods – tools that help us rethink how we work and connect with others.

Why creativity?

Whether it’s communicating complex science through visual storytelling, turning data into art, or reimagining who gets to ask the research questions in the first place, creative approaches help break down barriers and make research more inclusive and impactful.

Sketchnoting

We began by learning a new skill – sketchnoting – a quick, visual way of capturing ideas with shapes, symbols, diagrams, and keywords rather than full sentences. It’s not about being artistic; it’s about clarity and connection. As we reminded participants, “Anyone can draw!”

Throughout the session, it became clear that creativity isn’t about perfection – it’s about connection, experimentation, and finding new ways to involve and inspire others in our work.

Three UCL speakers then shared how they’ve used creative methods in their research and engagement work.

Angharad Green – Turning genomic data into art

Angharad Green, Senior Research Data Steward at UCL’s Advanced Research Computing Centre, shared her work on the evolution of Streptococcus pneumoniae (the bacteria behind pneumonia and meningitis) using genomic data and experimental evolution.

What made her talk stand out was the way she visualised complex data. Using vibrant Muller plots to track changes in bacterial populations over time, she transformed dense genomic information into something accessible and visually compelling. She also ensured the visuals were accessible to people with colour blindness.
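As a hedged illustration of the technique (not Angharad’s actual code or data), a Muller plot is at its core a stacked area chart of genotype frequencies over time. A minimal Python sketch, with invented frequencies and a colour-blind-safe palette:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative genotype frequencies over ten sampling points (invented data,
# not from the study). Each row is a lineage; each column sums to 1.
time = np.arange(10)
freqs = np.array([
    [0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.05],  # ancestor
    [0.1, 0.15, 0.3, 0.4, 0.4, 0.3, 0.25, 0.2, 0.15, 0.1],  # mutation A
    [0.0, 0.05, 0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.85],   # mutation B
])

fig, ax = plt.subplots()
# A stacked area chart approximates the Muller plot; the colours are from
# the Okabe-Ito colour-blind-safe palette.
ax.stackplot(time, freqs, labels=["ancestor", "mutation A", "mutation B"],
             colors=["#999999", "#E69F00", "#56B4E9"])
ax.set_xlabel("Transfer (time)")
ax.set_ylabel("Genotype frequency")
ax.set_ylim(0, 1)
ax.legend(loc="lower left")
plt.show()
```

(A full Muller plot additionally nests each descendant genotype inside its parent lineage; dedicated plotting tools handle that nesting step.)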

The images were so impactful that they earned a place on the cover of Infection Control & Hospital Epidemiology. Angharad’s work is a powerful example of how creative design can not only improve research communication and uncover patterns that might otherwise go unnoticed, but also prove that data can double as art and that science can be both rigorous and imaginative.

“As I looked at the Muller plots,” she said, “I started to see other changes I hadn’t noticed – how one mutation would trigger another.”

Katharine Round – Ghost Town and the art of the undirected lens

Katharine Round, a filmmaker and Lecturer in Ethnographic and Documentary Film in UCL’s Department of Anthropology, presented Ghost Town, set in the tsunami-struck city of Kamaishi, Japan. Local taxi drivers reported picking up passengers who then vanished – ghosts, perhaps, or expressions of unresolved grief.

[Image: participants seated around joined desks, facing a speaker and screen at the far end of the room.]

Katharine explored memory, myth, and trauma using a unique method: fixed cameras installed inside taxis, with no filmmaker present. This “abandoned camera” approach created a space that felt intimate and undirected, like a moving confessional booth, allowing deeply personal stories to surface.

By simply asking, “Has anything happened to you since the tsunami that you’ve never spoken about?” the project uncovered raw, unstructured truths, stories that traditional interviews might never reach.

Katharine’s work reminds us that storytelling can be an evocative form of research. By using creative, non-linear methods, she uncovered stories that traditional data collection approaches might have missed. Sometimes, the most powerful insights come when the researcher steps back, listens, and lets the story unfold on its own.

Joseph Cook – Co-creation and creativity in Citizen Science

Joseph Cook leads the UCL Citizen Science Academy at the UCL Institute for Global Prosperity.

He shared how the Academy trains and supports community members to become co-researchers in community projects that matter to them, often co-designed with local councils on topics like health, prosperity, and wellbeing.

Joseph shared a range of inspiring creative work:

  • Zines made by young citizen scientists in Tower Hamlets, including a research rap and reflections on life in the care system.
  • A silk scarf by Aysha Ahmed, filled with symbols of home and belonging drawn from displaced communities in Camden.
  • A tea towel capturing community recipes and food memories from Regent’s Park Estate, part of a project on culture and cohesion.
  • Creative exhibitions such as The Architecture of Pharmacies, exploring healthcare spaces through the lens of lived experience.

Instead of asking communities to answer predefined questions, the Academy invites people to ask their own, reframing participants as experts in their own lives.

Joseph was joined by Mohammed Rahman, a citizen scientist and care leaver, awarded a UCL Citizen Science Certificate through the Academy’s ActEarly ‘Citizen Science with Care Leavers’ programme. Through his zine and audio documentary, Mohammed shared personal insights on wellbeing, support, and independence, showing how storytelling deepens understanding and drives change.

[Image: a silk scarf depicting small images and words, laid out on a desk, with one person pointing out details to two others.]

From thinking to making

After the talks, participants reflected and got creative. They explored evaluation methods like the “4Ls” (Liked, Learned, Lacked, Longed For) and discussed embedding co-design throughout projects, including evaluation, and why it’s vital to involve communities from the start.

Participants made badges, sketchnoted their reflections, and took on a “Zine in 15 Minutes” challenge, contributing to a collective zine on creativity and community.

Final reflections

Creativity isn’t an add-on – it’s essential. It helps us ask better questions, involve more people, and communicate in ways that resonate. Methods like sketchnoting, visual metaphors, zine-making, and creative media open research and engagement to a wider range of voices and experiences.

Creative work doesn’t need to take the form of academic papers – it can be a rap, a tea towel, or a short film. Creativity sparks insight, supports co-creation, and builds meaningful connection.

Whether through drawing, storytelling, or simply asking different questions, we must continue making space for creativity – in our projects and institutions.


Get involved!

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on Bluesky, and join our mailing list to be part of the conversation!

Open Science & Scholarship Festival 2025: next steps, links and recordings!

By Kirsty, on 25 June 2025

It has been a couple of weeks since our debut collaboration with our friends at LSE and the Francis Crick Institute, and I can safely say that the festival was a roaring success. We would all like to extend a huge thank you to everyone who came to any of the events, in person or online – it was great to see so many people engaging with Open Science!

In case you missed it, the festival ran from 2 to 6 June and included an exciting array of sessions: creative workshops, informal networking, case studies, online and in-person panel discussions, and technology demonstrations. The full programme is still available online, or keep scrolling for links, recordings and upcoming content!

[Image: montage of institution logos]

Monday 2 June

Open Methods with Protocols.io

This workshop introduced the benefits of publishing your methods and protocols as a separate open access output. As this was an in-person event, there is no recording available, but you can access Protocols.io and their excellent free help and support guidance online.

Creativity in research and engagement

This session of making, sharing and storytelling has its own blog post – read it now!

Tuesday 3 June

Co-producing research with Special Collections: Prejudice and Power case study

UCL Special Collections presented their experiences of using co-creation to engage people with rare book and archive collections, especially as applied to the recent Prejudice and Power project, which consisted of a range of co-creation, community and academic initiatives focussed on our holdings, responding to the university’s historic role in promoting eugenics. The session also briefly discussed wider co-creation activity in UCL Special Collections, the lessons learned, and how these are being embedded in practice.


Scaling up Diamond Open Access Journals

Diamond open access (OA) is championed as a more open, equitable, inclusive, and community-driven journal publishing model, especially when compared with commercially owned, author-pays, and subscription models. Demand is rapidly growing, but there is a lack of capacity and funding for journals to sustainably meet it. There are many barriers to solving these complex challenges, but one new initiative, the Open Journals Collective, aims to disrupt the current landscape by offering a more equitable and sustainable alternative to the traditional and established payment structures.

During this interactive session we heard from the conveners of the collective about why and how it came about, what it offers, and why it is needed. We also heard about experiences with various OA journal models, as well as perspectives from a journal editor who resigned from a subscription journal and successfully launched a new, competing diamond open access journal.

We are currently working on getting material from this session ready to share, so if you want to catch up, watch this space!

Professionalising data, software, and infrastructure support to transform open science

This workshop focused on the needs of both researchers and technical support staff, seeking to answer some fundamental questions: If you are a researcher – what do you need in terms of technical support and services? If you are a research technology professional – what skills and training do you need to be able to offer this support?

The team in ARC behind this fascinating session have shared a write-up about it which you can read on their blog page.

Wednesday 4 June

Should reproducibility be the aim for open qualitative research? Researchers’ perspectives

Reproducibility is often touted among quantitative researchers as a necessary step to make studies rigorous. To determine reproducibility – whether the same analyses of the same data produce the same results – the raw data and code must be accessible to other researchers. Qualitative researchers have also begun to consider making their data open. However, for researchers in fields where cultural knowledge plays a key role in the analysis of qualitative data, openness may invite misrepresentation through re-use by researchers unfamiliar with the cultural and social context in which the data was produced.

This event asked whether reproducibility should be the aim for open qualitative data, and if not, why should researchers make their qualitative data open and what are the other methods used to establish rigour and integrity in research?

Access the recording on the LSE Library YouTube Channel.

How open is possible, how closed is necessary? Navigating data sharing whilst working with personal data

In the interests of transparency and research integrity, researchers are encouraged to open up more of their research process, including sharing data. However, for researchers working with personal data, including interview and medical data, there are important considerations for sharing. This event brought together researchers from a range of disciplines to share their experiences and strategies for open research when working with personal data.

Access the recording on the LSE Library YouTube Channel.

Thursday 5 June

Open Research in the Age of Populism

Political shifts around the world, from the Trump administration in the US to Orban’s government in Hungary, are making it more important than ever to have reliable research freely available. However, these governments are also making it more risky to openly share the results of research in many countries and disciplines. Alongside the political censorship of research in some countries there are also changes to research funding, research being misrepresented and used to spread misinformation online, and concerns about the stability of open research infrastructure which is funded by the state. In this session the panellists considered the value of open knowledge, the responsibilities of individual researchers and institutions to be open and how you can protect yourself when making your research openly available.

Access the recording on the LSE Library YouTube Channel.

Friday 6 June

Authorship in the era of AI

With the rapid growth of AI tools over the past three years, there has been a corresponding rise in the number of academics and students using them in their own writing. While it is generally agreed that we still expect people to be the “authors” of their work, deciding how to interpret that is often a nuanced and subjective decision by the writer. This in-depth panel discussed how we think about “authorship” for AI-assisted writing.

This session was so in-depth that the panel and the chair have worked together to create a summary of the discussion, complete with the resources and themes shared, which you can read on a separate blog post.

UCL Research Data Repository: Celebrating over 1 million views!

By Naomi, on 10 June 2025

Guest post by Dr Christiana McMahon, Research Data Support Officer

Since launching in June 2019, the UCL Research Data Repository has now received over 1 million views from over 190 countries and territories across the world! Plus, we have published over 1,000 items and facilitated over 800,000 downloads!

This is a huge milestone and demonstrates how far-reaching the Research Data Repository has become.


To date, the:

  • most viewed record is:

Heenan, Thomas; Jnawali, Anmol; Kok, Matt; Tranter, Thomas; Tan, Chun; Dimitrijevic, Alexander; et al. (2020). Lithium-ion Battery INR18650 MJ1 Data: 400 Electrochemical Cycles (EIL-015). University College London. Dataset. https://doi.org/10.5522/04/12159462.v1

  • most downloaded record is:

Steinmetz, Nicholas A; Zatka-Haas, Peter; Carandini, Matteo; Harris, Kenneth (2019). Distributed coding of choice, action, and engagement across the mouse brain. University College London. Dataset. https://doi.org/10.5522/04/9970907.v1

  • most cited record is:

Pérez-García, Fernando; Rodionov, Roman; Alim-Marvasti, Ali; Sparks, Rachel; Duncan, John; Ourselin, Sebastien (2020). EPISURG: a dataset of postoperative magnetic resonance images (MRI) for quantitative analysis of resection neurosurgery for refractory epilepsy. University College London. Dataset. https://doi.org/10.5522/04/9996158.v1

What is the UCL Research Data Repository?

From the Research Publications Service for published manuscripts and theses, to MediaCentral for all things media, UCL staff and students can access different places to store their research outputs – and the UCL Research Data Repository is a perfect place for research data, posters, presentations, software, workflows, data management plans, figures and models.

Key features:

  • Available to all current staff and research students
  • Supports almost all file types
  • All published items can have a full data citation including a DOI (unique persistent identifier) – see the example after this list
  • Items can be embargoed where necessary
  • Supports data access and sharing
  • Preserves and curates outputs for 10+ years
  • Facilitates discovery of research outputs
  • Helps researchers to meet UCL / funders’ requirements for FAIR data
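As an example of what the DOI behind each data citation enables – a minimal sketch, assuming only the public DOI content negotiation service, and not part of the repository’s own tooling – a formatted citation can be retrieved programmatically for any repository DOI:

```python
import requests

def fetch_citation(doi: str, style: str = "apa") -> str:
    """Resolve a DOI to a formatted citation via DOI content negotiation."""
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text.strip()

# The repository's most viewed record (cited above).
print(fetch_citation("10.5522/04/12159462.v1"))
```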

More information about the service can be found on our website.

Access our user guide.

Why use the Research Data Repository?

With communities across UCL being actively encouraged to engage with the FAIR principles, it was important to give staff and research students even greater means to do so. The FAIR principles – Findable, Accessible, Interoperable and Reusable – refer to a set of attributes research outputs should have to enable secondary researchers to find, understand, repurpose and reuse them without major technical barriers. Consequently, there are many advantages to having FAIR research outputs, including:

  • Greater accessibility of research outputs
  • Enhanced transparency of the research process
  • Greater potential to replicate studies and verify findings
  • Enhanced potential for greater citation and collaboration
  • Encourages members of the public to become involved in research projects and become citizen scientists
  • Maximises research potential of existing research resources by reusing and repurposing them

Hence, we developed and launched the Research Data Repository to support staff and research students wanting to further engage with the FAIR principles here at UCL.

Collaboration is key

The Research Data Management team in Library Services and the Research Data Stewardship team from the Centre for Advanced Research Computing collaborate to provide both administrative and technical support – helping users to upload, publish and archive their research outputs.

You can reach us at researchdatarepository@ucl.ac.uk or join us at one of our online or in-person drop-in sessions.

What does the future hold?

Over the past year, the Research Data Repository team participated in a series of workshops as part of the FAIR-IMPACT Coordination and Support Action funded by the European Union. This work was led by Dr Socrates Varakliotis and supported by Dr Christiana McMahon, Kirsty Wallis, Dr James Wilson and Daniel Delargy.

The aims of these workshops were to:

  • firstly, to enhance the trustworthiness of the repository; and
  • secondly, to enhance the semantic metadata (documentation) made publicly available online

During the first project, we conducted a thorough self-assessment of the information we provide about the repository service, with a view to highlighting how we demonstrate trustworthiness. Consequently, we made a series of improvements to our documentation, including publishing a new, more accessible website.

Over the course of the second project, we focused on improving the standardised metadata we make available to search engines indexing repository information globally. In this project, we were able to demonstrate how having validated metadata is important to supporting the trustworthiness of repository services.
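To illustrate the kind of standardised metadata this work involves – a minimal sketch with invented field values, not the repository’s actual records – dataset search engines typically harvest schema.org ‘Dataset’ descriptions embedded in landing pages as JSON-LD, and even a simple required-field check catches many validation problems:

```python
import json

# A minimal schema.org "Dataset" record of the kind harvested by dataset
# search engines. The field values below are illustrative only.
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example dataset title",
    "description": "A short, human-readable description of the dataset.",
    "identifier": "https://doi.org/10.5522/04/12159462.v1",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": [{"@type": "Person", "name": "A. Researcher"}],
    "publisher": {"@type": "Organization", "name": "University College London"},
}

REQUIRED_FIELDS = ("name", "description", "identifier", "license")

def missing_fields(rec: dict) -> list:
    """Return the required fields that are absent or empty."""
    return [field for field in REQUIRED_FIELDS if not rec.get(field)]

problems = missing_fields(record)
print("valid" if not problems else f"missing: {problems}")
print(json.dumps(record, indent=2))
```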

The next step is to explore how the repository’s trustworthiness may be enhanced even further, to formally meet international standards and expectations.

Final thoughts

Having over 1 million views truly is a fantastic achievement and a testament to the hard work and dedication of those working behind the scenes to provide this brilliant service, and to the wonderful users across UCL who have published with us.

Next stop, 2 million views – and until then…

Get involved!

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on Bluesky, and join our mailing list to be part of the conversation!

Open Science & Scholarship Festival 2025 now open for booking!

By Kirsty, on 24 April 2025

We are delighted to be able to finally launch the full programme for the Open Science & Scholarship Festival 2025 in collaboration with LSE and the Francis Crick Institute.

[Image: montage of institution logos]

The festival will run from 2 to 6 June and includes an exciting array of sessions: creative workshops, informal networking, case studies, online and in-person panel discussions, and technology demonstrations.

Introducing the Principles for Citizen Science at UCL

By Kirsty, on 14 April 2025

Guest post by Sheetal Saujani, Citizen Science Coordinator in the Office for Open Science & Scholarship

Citizen science is a powerful and evolving way to conduct research, bringing together researchers and the public to advance knowledge and create real-world impact. At UCL, we’re committed to supporting ethical, inclusive, and high-quality citizen science.

To support this growing area of research, we are pleased to introduce the Principles for Citizen Science at UCL, a framework designed to guide best practices and meaningful collaboration across UCL and beyond.

Where did the Principles come from?

Our journey began with a simple question: What does citizen science look like across UCL?

We mapped existing projects across UCL and found many departments already involved in citizen science, even if they didn’t call it that. Conversations with project leads helped us to identify great practices, what support is needed and how to help more people get involved in citizen science. These conversations, in conjunction with UCL’s Citizen Science Working Group, helped shape UCL’s broad definition of citizen science (encompassing a diverse range of activities and practices) and informed the development of the Principles, a shared foundation for project leads, researchers, and citizen scientists working together.

Rooted in UCL’s inclusive approach to citizen science, the Principles are also informed by the ECSA (European Citizen Science Association) Ten Principles of Citizen Science, adapted to reflect UCL’s research culture and values.

What do the Principles cover?

The Principles for Citizen Science at UCL provide practical guidance for anyone designing or participating in a citizen science project. They focus on key areas such as:

  • Citizen scientists – Ensuring meaningful participation and recognition of contributions.
  • Communication – Promoting open, clear and respectful dialogue among everyone involved.
  • Data quality and ethics – Ensuring robust, responsible approaches to data collection, analysis, and sharing.
  • Inclusivity and accessibility – Creating opportunities for everyone to get involved, regardless of background or experience.

Why do they matter?

Citizen science at UCL is more than a research method; it’s a way to connect knowledge with communities and expand the impact of our work.

The Principles aim to:

  • Help project leads and citizen scientists work more effectively together.
  • Support ethical and responsible research practices.
  • Encourage wider participation and access.
  • Increase the visibility and influence of citizen science across different disciplines.

By embedding these principles into projects, we can ensure that citizen-led research contributes to both academic excellence and societal benefit at UCL and beyond.

Use the Principles as a living framework

UCL’s Principles for Citizen Science aren’t just a checklist – they’re a flexible guide, rooted in co-creation, quality and inclusivity, to use throughout your project journey. Use them to shape a project from idea to delivery, and return to them often as your work evolves.

Explore and reach out to us!

We encourage all UCL researchers, project leads, staff, students, and citizen scientists to explore and adopt the Principles for Citizen Science at UCL in their work. Whether you’re starting a new project or refining an existing one, the Principles are here to support you.

If you’d like to learn more or discuss how these Principles can support your work, reach out to us as we would love to hear from you!

Ethics of Open Science: Science as Activism

By Kirsty, on 2 April 2025

Guest post by Ilan Kelman, Professor of Disasters and Health, building on his captivating presentation in Session 2 of the UCL Open Science Conference 2024.

Many scientists accept a duty of ensuring that their science is used to help society. When we are publicly funded, we feel that we owe it to the public to offer Open Science for contributing to policy and action.

Some scientists take it a step further. Rather than merely making their science available for others to use, they interpret it for themselves to seek specific policies and actions. Open Science becomes a conduit for the scientist to become an activist. Positives and negatives emerge, as shown by the science of urban exploration and of climate change.

Urban exploration

‘Urban exploration’ (urbex), ‘place-hacking’, and ‘recreational trespass’ refer to people accessing infrastructure which is off-limits to the public, such as closed train stations, incomplete buildings, and utility systems. As per the third name, it sometimes involves trespassing and it is frequently dangerous, since sites are typically closed off for safety and security reasons.

Urbex research does not need to involve the infrastructure directly – it can proceed through reviewing existing material or interviewing off-site. It can, though, involve participating in accessing off-limits sites to document experiences through autoethnography or participant observation. As such, the urbex researcher could be breaking the law. In 2014, one researcher was granted a conditional discharge, 20 months after being arrested for involvement in urbex while researching it.

Open Science for urbex research has its supporters and detractors. Those stating the importance of the work and publicising it point to the excitement of learning about and documenting a city’s undercurrents, creative viewing and interacting with urban environments, the act of bringing sequestered spaces to the public while challenging authoritarianism, the need to identify security lapses, and making friends. Many insist on full safety measures, even while trespassing.

Detractors explain that private property is private and that significant dangers exist. People have died. Rescues and body recoveries put others at risk. Urbex science might be legitimate, particularly to promote academic freedom, but it should neither be glorified nor encourage foolhardiness.

This situation is not two mutually exclusive sides. Rather, different people prefer different balances. Urbex Open Science as activism can be safe, legal, and fun—also as a social or solo hobby. Thrill-seekers for social media influence and income would be among the most troublesome and the least scientific.

Figure 1: Unfinished and abandoned buildings are subjects of ‘urbex’ research (photo by Ilan Kelman).

Climate everything?

Humanity is changing the Earth’s climate rapidly and substantively with major, deleterious impacts on society. Open Science on climate change has been instrumental in popularising why human-caused climate change is happening, its implications, how we could avert it, and actions to tackle its negative impacts.

Less clear is the penchant of some scientists for using Open Science to try to become self-appointed influencers and activists beyond their expertise. They can make grandiose public pronouncements on climate change science well outside their own work, even contradicting their colleagues’ published research. An example is an ocean physicist lamenting the UK missing its commitments under the Paris Agreement on climate change, despite the agreement being unable to meet its own targets, and then expressing concerns about “climate refugees”, a category which legally cannot exist.

A meme distributed by some scientists states that cats kill more birds than wind turbines, yet no one tries to restrict cats! Aside from petitions and studies about restricting cats, the meme never explains how cats killing birds justifies wind turbines killing birds, particularly when kill-avoiding strategies exist. When a scientist’s social media postings are easily countered, it undermines efforts to suggest that scientists ought to be listened to regarding climate change.

Meanwhile, many scientists believe they can galvanise action by referring to “climate crisis” or “climate emergency” rather than to “climate change”. From the beginnings of this crisis/emergency framing, political concerns were raised about the phrasing. Now, evidence is available of the crisis/emergency wording leading to negative impacts for action.

In fact, scientist activism aiming to “climat-ify” everything leads to nonsensical phrasing. From “global weirding” to “climate chaos”, activist terminology can reveal a lack of understanding of the basics of climate science – such as climate, by definition, being mathematically chaotic. A more recent coinage is “climate obstruction”. When I asked how we could obstruct the climate, since the climate always exists, I never received an answer.

Figure 2: James Hansen, climate scientist and activist (photo by Ilan Kelman).

Duty for accuracy and ethics

Scientists have a duty of accuracy and ethics, and Open Science should serve that duty. Fulfilling it contributes to credibility and clarity. Using Open Science to promote subversive or populist material simply for the sake of activism, without first checking the underlying science and the implications of publicising it, does not. Applied appropriately, Open Science can and should support accurate and ethical activism.

Save the Date! Open Science & Scholarship Festival 2025

By Kirsty, on 20 March 2025

The library teams at LSE and the Francis Crick Institute, together with the UCL Office for Open Science & Scholarship, are proud to announce the first collaborative Open Science & Scholarship Festival in London.

The festival will take place from 2 to 6 June and will include a mixture of in-person and hybrid events across all three institutions, as well as a range of sessions held purely online. We have an exciting programme in development for you, including:

  • Open Research in the Age of Populism
    Political shifts around the world, from the Trump administration in the US to Meloni’s government in Italy, are making it more important than ever for reliable research to be freely available. However, these governments are also making it riskier for researchers to openly share their results in many countries and disciplines. Alongside the political censorship of research in some countries, there are also changes to research funding, research being misrepresented and used to spread misinformation online, and concerns about the stability of state-funded open research infrastructure. In these circumstances, we will consider the value of open knowledge, the responsibilities of individual researchers and institutions to be open, and how you can protect yourself when making your research openly available.
  • How open is possible, how closed is necessary? Navigating data sharing whilst working with personal data
    In the interests of transparency and research integrity, researchers are encouraged to open up more of their research process, including sharing data. However, for researchers working with personal data, including interview and medical data, there are important considerations for sharing. This event will bring together researchers from a range of disciplines to share their experiences and strategies for open research when working with personal data.
    The panel will discuss whether and how this type of data can be made openly available, weigh the work involved in anonymising data against the benefits to research and society of making it available, and consider the legal frameworks researchers work within in the UK.
  • Authorship in the era of AI 
    With the rapid growth of AI tools over the past three years, there has been a corresponding rise in the number of academics and students using them in their own writing. While it is generally agreed that we still expect people to be the “authors” of their work, interpreting that expectation is often a nuanced and subjective decision for the writer. This panel discussion will look at how we think about “authorship” for AI-assisted writing – what are these tools used for in different contexts? Where might readers and publishers draw their own lines as to what is still someone’s own work? And how might we see this develop over time?
  • Creativity in research and engagement
    A session of making, sharing and storytelling. Speakers from across UCL share how they use creative methods to enrich their research, engage with people, and share their learning. Join us to discuss these methods and the benefits of creativity, and to try creating a visual output based on your own work.
  • Professionalising data, software, and infrastructure support to transform open science
    A workshop in development where researchers and research technology professionals can come together to discuss challenges and opportunities in supporting research. The session will focus on the skills and training needed to create a culture of Open Science.
  • Open Methods with Protocols.io
    Join the Francis Crick Institute and Protocols.io to talk about making your lab protocols and article methods sections open access. Improve replicability and re-use, and gain credit for all those hours you spent at the bench. The session is open to all and will involve discussion of the value of open protocols alongside hands-on training on how to use the protocols.io platform.
  • Should reproducibility be the aim for open qualitative research? Researchers’ perspectives
    Reproducibility has been touted among quantitative researchers as a necessary step to make studies rigorous. To determine reproducibility (whether the same analyses of the same data produce the same results), the raw data and code must be accessible to other researchers. Qualitative researchers have begun to consider making their data open too. However, where the analysis of these data does not involve quantification and statistics, it is difficult to see how such analysis processes could be reproducible. Furthermore, for researchers in fields where cultural knowledge plays a key role in analysing qualitative data, openness may invite misrepresentation through re-use by researchers unfamiliar with the cultural and social context in which the data were produced. This event asks whether reproducibility should be the aim for open qualitative data and, if not, why researchers should make their qualitative data open and what other methods can establish rigour and integrity in research.

We are also developing sessions about:

  • The Big Deal for Diamond Journals
  • A networking coffee morning
  • Openness and Engagement with Special Collections and Archives

More information will be shared and booking will open as soon as possible, so watch this space and follow us on BlueSky and LinkedIn for updates!

Ethics of Open Science: Navigating Scientific Disagreements

By Kirsty, on 6 March 2025

Guest post by Ilan Kelman, Professor of Disasters and Health, building on his captivating presentation in Session 2 of the UCL Open Science Conference 2024.

Open Science reveals scientific disagreements to the public, with both advantages and disadvantages. Opportunities emerge to demonstrate the scientific process and the techniques for sifting through diverging ideas and evidence. Conversely, disagreements can become personal, with personality clashes obscuring the science, scientific methods, and the understandable disagreements that arise from unknowns and uncertainties. Volcanology and climate change illustrate these points.

Volcanology

In 1976, a volcano rumbled on the Caribbean island of Guadeloupe, which is part of France. Volcanologists travelled there to assess the situation, leading to public spats between those convinced that a catastrophic eruption was likely and those who were unconcerned, arguing that plenty of time would be available to evacuate people if the dangers worsened. The authorities decided to evacuate more than 73,000 people, permitting them to return home more than three months later when the volcano quieted down without having had a major eruption.

Aside from the evacuation’s cost, and the possible cost of a major eruption without an evacuation, volcanologists debated for years afterwards how everyone could have dealt better with the science, the disagreements, and the publicity. Open Science could make all scientific viewpoints publicly available, along with how this science could be and is used for decision-making, including for navigating disagreements. It might also mean that those who shout loudest are heard most, and that media outlets sell their wares by amplifying the most melodramatic and doomerist voices, a pattern also seen with climate change.

Insults and personality clashes can mask legitimate scientific disagreements. For Guadeloupe, in one commentary responding to intertwined scientific differences and personal attacks, a volcanologist unhelpfully suggests their colleagues’ lack of ‘emotional stability’ alongside numerous well-evidenced scientific points. In a warning prescient for the next example, this scientist notes the difficulties that arise if Open Science confers credibility on ‘scientists who have specialized in another field that has little or no bearing on [the topic under discussion], and would-be scientists with no qualification in any scientific field whatever’.

Figure 1: Chile’s Osorno volcano (photo by Ilan Kelman).

Climate change, tropical cyclones, and anthropologists

‘Tropical cyclone’ is the collective term for hurricanes, typhoons, and cyclones. The current scientific consensus (which can change) is that, due to human-caused climate change, tropical cyclone frequency is decreasing while intensity is increasing. On occasion, anthropologists have stated categorically that tropical cyclone numbers are going up due to human-caused climate change.

I responded to a few of these statements with the current scientific consensus, citing foundational papers. This response annoyed the anthropologists, even though they have never conducted research on this topic. I offered to discuss the papers I had mentioned, an offer that was not accepted.

There is a clear scientific disagreement between climate change scientists and some anthropologists regarding projected tropical cyclone trends under human-caused climate change. If these anthropologists publish their unevidenced viewpoint as Open Science, it offers fodder to the industries undermining climate change science and preventing action on human-caused climate change. They can point to scientists disputing the consensus of climate change science and then foment further uncertainty and scepticism about climate change projections.

One challenge is avoiding censorship of, or shutting down scientific discussions with, the anthropologists who do not accept climate change science’s conclusions. It is a tricky balance between permitting Open Science across disciplines, including as a way to connect disciplines, and not fostering or promoting scientific misinformation.

Figure 2: Presenting tropical cyclone observations (photo by Ilan Kelman).

Caution, care, and balance

A balance is needed between holding scientific discussions in the open and preventing scientists from levelling personal attacks at each other or spreading incorrect science, both of which harm all science. Some journals use an open peer review process in which the submitted article, the reviews, the responses to the reviews, all subsequent reviews and responses, and the editorial decision are freely available online. A drawback is that submitted manuscripts may be cited as credible, including those declined for publication. Some journals identify authors and reviewers to each other, which can reduce snide remarks while increasing the possibility of retribution for negative reviews.

Even publicly calling out bullying does not necessarily diminish it. Last year, after I privately raised concerns about personal attacks against me on an anthropology email list, prompted by a climate change posting I had made, I was called “unwell” and “unhinged” in private emails that were forwarded to me. When I examined the anthropology organisation’s policies on bullying and silencing, I found them lacking, and I publicised my findings. The leaders not only removed me from the email list, against the list’s own policies, but also refused to communicate with me. That is, these anthropologists (who are meant to be experts in inter-cultural communication) bullied and silenced me because I called out bullying and silencing.

Awareness of the opportunities and perils of Open Science for navigating scientific disagreements can indicate balanced pathways for focusing on science rather than on personalities. Irrespective, caution and care can struggle to overcome entirely the fact that scientists are human beings with personalities, some of whom are ardently opposed to caution, care, and disagreeing well.

Announcing: UCL’s first Replication Games

By Kirsty, on 17 February 2025

Registrations are now open for UCL’s first Replication Games, organised by the Office for Open Science & Scholarship and UCL’s UKRN local network chapter. The event will be run by the Institute for Replication (I4R), and it is supported by a Research Culture Seed Grant.

The Replication Games is a one-day event that brings together researchers to collaborate on reproducing and replicating papers published in highly regarded journals. Participants will join a small team of 3-5 members with similar research interests. Teams verify the reproducibility of a paper using its replication package. They may conduct sensitivity analyses, employing different procedures from those of the original investigators. Teams may also recode the study using the raw or intermediate data, or implement novel analyses with new data. More information can be found on the I4R website.

Teams will be guided in all activities by Derek Mikola, an experienced facilitator from the I4R. After the event, teams are encouraged to document their work in a report that will be published on the I4R website. Participants are also eligible for co-authorship of a meta-paper that combines a large number of replications.

This event takes place in person. Lunch and afternoon snacks are provided.

Who are we inviting to register?

Registration is on a ‘first come, first served’ basis. We invite MRes students, doctoral students and researchers, post-docs, and faculty members at UCL to apply. Although students and scholars from all disciplines can apply, we especially hope to attract those working in the social sciences and humanities.

Participants must be confident using at least one of the following: R, Python, Stata, or Matlab.

Papers available for replication are listed on the I4R website. Prospective participants are asked to review this list to ensure that at least one paper aligns with their research interests.

How to apply?

Please complete this short form: https://forms.office.com/e/WEUUKH2BvA

Timeline and Procedure

  • 15 March 2025 – registrations close
  • 31 March 2025 – notification of outcomes and teams
  • 7 April 2025, 1pm – mandatory Teams call with the I4R (online)
  • 25 April 2025, 9am-5pm – Replication Games (at UCL’s Bloomsbury Campus)

Please note that participants are expected to attend the full day.

Contact

If you have any questions, please contact Sandy Schumann (s.schumann@ucl.ac.uk).