Open@UCL Blog

Archive for the 'UCL Open Science Conference' Category

Authorship in the Era of AI – Panel Discussion

By Naomi, on 9 July 2025

Guest post by Andrew Gray, Bibliometrics Support Officer

This panel discussion at the 2025 Open Science and Scholarship Festival was made up of three professionals with expertise in different aspects of publishing and scholarly writing, across different sectors – Ayanna Prevatt-Goldstein, from the UCL Academic Communication Centre focusing on student writing; Rachel Safer, the executive publisher for ethics and integrity at Oxford University Press, and also an officer of the Committee on Publication Ethics, with a background in journal publishing; and Dhara Snowden, from UCL Press, with a background in monograph and textbook publishing.

We are very grateful to everyone who attended and brought questions or comments to the session.

This is a summary of the discussion from all three panel members, and use of any content from this summary should be attributed to the panel members. If you wish to cite this, please do so as A. Prevatt-Goldstein, R. Safer & D. Snowden (2025). Authorship in the Era of AI. [https://blogs.ucl.ac.uk/open-access/2025/07/09/authorship-in-the-era-of-ai/]

Where audience members contributed, this has been indicated. We have reorganised some sections of the discussion for better flow.

The term ‘artificial intelligence’ can mean many things, and often a wide range of different tools are grouped under the same general heading. This discussion focused on ‘generative AI’ (large language models), and on their role in publishing and authorship rather than their potential uses elsewhere in the academic process.

Due to the length of this write-up, you can directly access each question using the following links:
1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?


1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

Among researchers, a number of surveys by publishers have suggested that 70-80% of researchers are using some form of AI, broadly defined, and a recent Nature survey suggested this is fairly consistent across different locations and fields. However, there was a difference by career stage, with younger researchers feeling it was more acceptable to use it to edit papers, and by first language, where non-English speakers were more likely to use it for this as well.

There is a sense that publishers in STEM fields are more likely to have guidance and policy for the use of AI tools; in the humanities and social sciences, this is less well developed, and publishers are still in the process of fact-finding and gathering community responses. There may still be more of a stigma around the use of AI in the humanities.

In student writing, a recent survey from HEPI found that from 2024 to 2025, the share of UK undergraduates who used generative AI for generating text had gone from a third of students to two thirds, and only around 8% said they did not use generative AI at all. Heavier users included men, students from more advantaged backgrounds, and students with English as a second or additional language.

There are some signs of variation by discipline in other research. Students in fields where writing is seen as an integral part are more concerned with developing their voice and a sense of authorship, and are less likely to use it for generating text – or at least are less likely to acknowledge it – and where they do, they are more likely to personalise the output. By comparison, students in STEM subjects are more likely to feel that they were being assessed on the content – the language they use to communicate it might be seen as less important.

[For more on this, see A. Prevatt-Goldstein & J. Chandler (forthcoming). In my own words? Rethinking academic integrity in the context of linguistic diversity and generative AI. In D. Angelov and C.E. Déri (Eds.), Academic Writing and Integrity in the Age of Diversity: Perspectives from European and North American Higher Education. Palgrave.]


2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

Students in particular may be more willing to use it as they often prioritise the ideas being expressed over the mode of expressing them, and the idea of authorship can be less prominent in this context. But at a higher level, for example among doctoral students, we find that students are concerned about their contribution and whether perceptions of their authorship may be lessened by using these tools.

A study among publishers found that the main way AI tools were being used was not to replace people at specific tasks, but to make small efficiency savings in the way people were doing them. This ties into the long-standing use of software to assist copyediting and typesetting.

Students and academics are also likely to see it from an efficiency perspective, especially those who have become used to working with generative AI tools in their daily lives and so feel more comfortable using them in academic and professional contexts. Academics may feel pressure to use tools like this to keep up a high rate of publication. But spending less time on a particular piece of work may trade quality for speed; we might also see trade-offs in the individuality and nuance of the language, and fewer novel and outlier ideas being developed, as generative AI involvement becomes more common.

Ultimately, though, publishers struggle to monitor researchers’ use of generative AI in their original research – they are dependent on institutions training students and researchers, and on the research community developing clearer norms, and perhaps there is also a role for funders to support educating authors about best practices.

Among all users, a significant – and potentially less controversial – role for generative AI is to help non-native English speakers with language and grammar, and, to a more limited degree, with translation – though quality here varies, and publishers would generally recommend that any AI translation be checked by a human specialist. However, this has its own costs.

With English as a de facto academic lingua franca, students (and academics) who do not have it as a first language have inevitably been at a disadvantage. Support could be found – perhaps paying for help, perhaps friends, family, or colleagues who could support language learning – but, due to costs or connections, this support was available to some students far more than others, and generative AI tools have the potential to democratise it to some degree. However, this causes a corresponding worry among many students that the bar has been raised – they feel they are now expected to use these tools or else be disadvantaged compared to their peers.


3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

In some ways, this is not a new debate. As we develop new technologies which change the way we write – the printing press, the word processor, the spell checker, the automatic translator – people have discussed how it changes ‘authorship’. But all these tools have been ways to change or develop the words that someone has already written; generative AI can go far beyond that, producing vastly more material without direct involvement beyond a short prompt.

A lot of people might treat a dialogue with generative AI, and the way they work with those outputs, in the same way as a discussion with a colleague, as a way to thrash out ideas and pull them together. We have found that students are seeing themselves shifting from ‘author’ to ‘editor’, claiming ownership of their work through developing prompts and personalising the output, rather than through having written the text themselves. There is still a concept of ownership, a way of taking responsibility for the outcome, and for the ideas being expressed, but that concept is changing, and it might not be what we currently think of as ‘authorship’.

Sarah Eaton’s work has discussed the concept of ‘Post-plagiarism’ as a way to think about writing in a generative AI world, identifying six tenets of post-plagiarism. One of those is that humans can concede control, but not responsibility; another is that attribution will remain important. This may give us a useful way to consider authorship.

In publishing, ‘authorship’ can be quite firmly defined by the criteria set by a specific journal or publisher. There are different standards in different fields, but one of the most common is the ICMJE definition, which sets out four requirements to be considered an author – substantial contribution to the research; drafting or editing the text; having final approval; and agreeing to be accountable for it. In the early discussions around generative AI tools in 2022, there was a general agreement that these tools could never meet the fourth criterion, and so could never become ‘authors’; they could be used, and their use could be declared, but that did not conceptually rise to the level of authorship, as they could not take ownership of the work.

The policy that UCL Press adopted, drawing on those from other institutions, looked at ways to identify potential responsible uses, rather than a blanket ban – which it was felt would lead to people simply not being transparent when they had used it. It prohibited ‘authorship’ by generative AI tools, as is now generally agreed; it required that authors be accountable, and take responsibility for the integrity and validity of their work; and it asked for disclosure of generative AI.

Monitoring and enforcing that is hard – there are a lot of systems claiming to test for generative AI use, but they may not work for all disciplines, or all kinds of content – so it does rely heavily on authors being transparent about how they have used these tools. They are also reliant on peer reviewers flagging things that might indicate a problem. (This also raises the potential of peer reviewers using generative AI to support their assessments – which in turn indicates the need for guidance about how they could use it responsibly, and clear indications on where it is or is not felt to be appropriate.)

Generative AI potentially has an interesting role to play in publishing textbooks, which tend to be more of a survey of a field than original thinking, but do still involve a dialogue with different kinds of resources and different aspects of scholarship. A lot of the major textbook platforms are now considering ways in which they can use generative AI to create additional resources on top of existing textbooks – test quizzes or flash-cards or self-study resources.


4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

There is a general consensus emerging among publishers that authors should disclose use of AI tools at the point of submission or revision, though exactly where the line is drawn varies. For example, Sage requires authors to disclose the use of generative AI, but not ‘assistive’ AI such as spell-checkers or grammar checkers. The STM Association recently published a draft set of recommendations for using AI, with nine classifications of use. (A commenter in the discussion also noted a recently proposed AI Disclosure Framework, identifying fourteen classes.)

However, we know that some people, especially undergraduates, spend a lot of time interacting with generative AI tools in a whole range of capacities, around different aspects of the study and writing process, which can be very difficult to define and describe – there may not be any lack of desire to be transparent, but it simply might not fit into the ways we ask them to disclose the use of generative AI.

There is an issue about how readers will interpret a disclosure. Some authors may worry that there is a stigma attached to using generative AI tools, and be reluctant to disclose if they worry their work will be penalised, or taken less seriously, as a result. This is particularly an issue in a student writing context, where it might not be clear what will be done with that disclosure – will the work be rejected? Will it be penalised, for example a student essay losing some marks for generative AI use? Will it be judged more sceptically than if there had been no disclosure? Will different markers, or editors, or peer-reviewers make different subjective judgements, or have different thresholds?

These concerns can cause people to hesitate before disclosing, or to avoid disclosing fully. But academics and publishers are dependent on honest disclosure to identify inappropriate use of generative AI, so may need to be careful in how they frame this need to avoid triggering these worries about more minor use of generative AI. Without honest disclosure, we also have no clear idea of what writers are using AI for – which makes it all the harder to develop clear and appropriate policies.

For student writing, the key ‘reader’ is the marker, who will also be the person to whom generative AI use is disclosed. But for published writing, once a publisher has a disclosure of AI use, they may need to decide what to pass along to the reader. Should readers be sent the full disclosure, or is that overkill? It may include things like idea generation, assistance with structure, or checking for more up-to-date references – these might be useful for the publisher to know, but might not need to be disclosed anywhere in the text itself. Conversely, something like images produced by generative AI might need to be explicitly and clearly disclosed in context.

The recent Nature survey mentioned earlier showed that there is no clear agreement among academics as to what is and isn’t acceptable use, and it would be difficult for publishers to draw a clear line in that situation. They need to be guided by the research community – or communities, as it will differ in different disciplines and contexts.

We can also go back to the pre-GenAI assumptions about what used to be expected in scholarly writing, and consider what has changed. In 2003, Diane Pecorari identified three assumptions underlying transparency in authorship:

1. that language which is not signaled as quotation is original to the writer;
2. that if no citation is present, both the content and the form are original to the writer;
3. that the writer consulted the source which is cited.

There is a – perhaps implicit – assumption among readers that all three of these are true unless otherwise disclosed. But do those assumptions still hold among a community of people – current students – who are used to the ubiquitous use of generative AI? On the face of it, generative AI would clearly break all three.

If we are setting requirements for transparency, there should also be consequences for breach of transparency – from a publisher’s perspective, if an author has put out a generative AI produced paper with hallucinated details or references, the journal editor or publisher should be able to investigate and correct or retract it, exactly as would be the case with plagiarism or other significant issues.

But there is a murky grey area here – if a paper is otherwise acceptable and of sufficient quality, but does not have appropriate disclosure of generative AI use, would that in and of itself be a reason for retraction? At the moment, this is not on the COPE list of reasons for retraction – it might potentially justify a correction or an editorial note, but not outright retraction.

Conversely, in the student context, things are simpler – if it is determined that work does not belong to the student, whether that be through use of generative AI or straightforward plagiarism, then there are academic misconduct processes and potentially very clear consequences which follow from that. These do not necessarily reflect on the quality of the output – what is seen as critical is the authorship.


5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

A working group at UCL put together a first set of guidance on using generative AI in early 2023, and focused on ethics in the context of learning outcomes – what is it that students are aiming to achieve in their degree, and will generative AI help or not in that process? But ethical questions also emerged in terms of whose labour had contributed to these tools, what the environmental impacts were, and importantly whether students were able to opt out of using generative AI. There are no easy answers to any of these, but they remain very much ongoing questions.

Recent work from MLA looking at AI literacies for students is also informative here in terms of what it expects students using AI to be aware of.


6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

There was no easy answer here and a general agreement that this is challenging for everyone – it can be very difficult to prove a negative. Increasing the level of transparency around disclosing AI use – and how much AI has been used – will help overall, but maybe not in individual cases.

Style-based detection tools are unreliable and can be triggered by normal academic or second-language writing styles. A lot of individuals have their own assumptions as to what is a ‘clear marker’ of AI use, and these are often misleading, leading to false positives and potentially false accusations. Many of the plagiarism detection services have scaled back or turned off their AI checking tools.

In publishing, a lot of processes have historically been run on a basis of trust – publishers, editors, and reviewers have not fact-checked every detail. If you are asked to disclose AI use and you do not, the system has to trust you did not use it, in the same way that it trusts you obtained the right ethical approvals or that you actually produced the results you claim. Many publishers are struggling with this, and feeling that they are still running to catch up with recent developments.

In academia, we can encourage and support students to develop their own voice in their writing. This is a hard skill to develop, and it takes time and effort, but it can be developed, and it is a valuable thing to have – it makes their writing more clearly their own. The growth of generative AI tools can be a very tempting shortcut for many people to try and get around this work, but there are really no shortcuts here to the investment of time that is needed.

There was a discussion of the possibility of authors being more transparent about their writing process to help demonstrate research integrity – for example, documenting how they select their references, as systematic reviews do, or keeping open notebooks. This could potentially be declared in the manuscript, as a section alongside acknowledgements and funding. Students could be encouraged to keep logs of any generative AI prompts they have used and how they handled the output, so that they can disclose this in case of concerns.


7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

There are definitely some journals that have adopted more restrictive policies than the general guidance from their publisher, mostly in the STEM fields. We know that many authors may not read the specific author guidelines for a journal before submitting. Potentially we could see journals highlighting these restrictions in the submission process, and requiring the authors to acknowledge they are aware of the specific policies for that journal.


8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

A substantial issue for many publishers, particularly smaller non-commercial ones, is that so much scholarly material is now released under an open-access license that makes it easily available for training generative AI; even if the licenses forbid this, it can be difficult in practice to stop it, as seen in trade publishing. It is making authors very concerned, as they do not know how or where their material will be used, and feel powerless to prevent it.

One potential way forward is for publishers and AI companies to reach licensing agreements that ensure some kind of remuneration. This is more practical for larger commercial publishers with more resources. There is also the possibility of sector-wide collective bargaining agreements, as has been seen with the Writers Guild of America, where writers were able to implement broader guardrails on how their work would be used.

It is clear that the current system is not weighted in favour of the original creators, and some form of compensation would be ideal, but we also need to be careful that any new arrangement doesn’t continue to only benefit a small group.

The question of whether Creative Commons licensing can regulate the use of material for AI training was discussed – Creative Commons take the position that such use may be permitted under existing copyright law, but they are investigating the possibility of adding a way to signal the author’s position. AI training would be allowed by most of the Creative Commons licenses, but might carry specific conditions on the final model (e.g. displaying attribution, or non-commercial restrictions).

A commenter in the discussion also mentioned a more direct approach, where some sites are using tools to obfuscate artwork or building “tarpits” to combat scraping – but these can shade into being malware, so not a solution for many publishers!


9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?

Generative AI use is going to become even more prevalent and ubiquitous, and will be very much more integrated into daily life for most people. As part of that integration, ideally we would see better awareness and understanding of what it can do, and better education on appropriate use in the way that we now teach about plagiarism and citation. That education will hopefully begin at an early stage, and develop alongside new uses of the technology.

Some of our ideas around what to be concerned about will change, as well. Wikipedia was suggested as an analogy – twenty years ago we collectively panicked about the use of it by students, feeling it might overthrow accepted forms of scholarship, but then – it didn’t. Some aspects of GenAI use may simply become a part of what we do, rather than an issue to be concerned with.

There will be positive aspects of this, but also negative ones; we will have to consider how we keep a space for people who want to minimise their use of these tools, and choose not to engage with them, for practical reasons or for ethical ones, particularly in educational contexts.

There are also discussions around the standardisation of language with generative AI – as we lose a diversity of language and of expression, will we also lose the corresponding diversity of thought? Standardised, averaged language can itself be a kind of loss.

The panel concluded by noting that this is very much an evolving space, and encouraged greater feedback and collaboration between publishers and the academic community, funders, and institutions, to try and navigate where to draw the line. The only way forward will be by having these discussions and trying to agree common ground – not just on questions of generative AI, but on all sorts of issues surrounding research integrity and publication ethics.

 

Creativity in Research and Engagement: Making, Sharing and Storytelling

By Naomi, on 3 July 2025

Guest post by Sheetal Saujani, Citizen Science Coordinator in the Office for Open Science & Scholarship

[Image: A small room with desks pulled together to create a large table, around which several people sit facing a session leader standing at the right-hand side of the room; a large screen is mounted on the wall behind them.]

At the Creativity in Research and Engagement session during the 2025 Open Science and Scholarship Festival, we invited participants to ask a simple question: what if we looked at research and engagement through the lens of creativity?

Together, we explored how creative approaches can unlock new possibilities across research, public engagement, and community participation. Through talks, discussions, and hands-on activities, we discussed visual thinking, storytelling, and participatory methods – tools that help us rethink how we work and connect with others.

Why creativity?

Whether it’s communicating complex science through visual storytelling, turning data into art, or reimagining who gets to ask the research questions in the first place, creative approaches help break down barriers and make research more inclusive and impactful.

Sketchnoting

We began by learning a new skill – sketchnoting – a quick, visual way of capturing ideas with shapes, symbols, diagrams, and keywords rather than full sentences. It’s not about being artistic; it’s about clarity and connection. As we reminded participants, “Anyone can draw!”

Throughout the session, it became clear that creativity isn’t about perfection – it’s about connection, experimentation, and finding new ways to involve and inspire others in our work.

Three UCL speakers then shared how they’ve used creative methods in their research and engagement work.

Angharad Green – Turning genomic data into art

Angharad Green, Senior Research Data Steward at UCL’s Advanced Research Computing Centre, shared her work on the evolution of Streptococcus pneumoniae (the bacteria behind pneumonia and meningitis) using genomic data and experimental evolution.

What made her talk stand out was the way she visualised complex data. Using vibrant Muller plots to track changes in bacterial populations over time, she transformed dense genomic information into something accessible and visually compelling. She also ensured the visuals were accessible to people with colour blindness.

The images were so impactful that they earned a place on the cover of Infection Control & Hospital Epidemiology. Angharad’s work is a powerful example of how creative design can improve research communication and uncover patterns that might otherwise go unnoticed; it also shows that data can double as art, and that science can be both rigorous and imaginative.

“As I looked at the Muller plots,” she said, “I started to see other changes I hadn’t noticed – how one mutation would trigger another.”

Katharine Round – Ghost Town and the art of the undirected lens

Katharine Round, a filmmaker and Lecturer in Ethnographic and Documentary Film in UCL’s Department of Anthropology, presented Ghost Town, set in the tsunami-struck city of Kamaishi, Japan. Local taxi drivers reported picking up passengers who then vanished – ghosts, perhaps, or expressions of unresolved grief.

[Image: A small room in which desks are joined together to create a large table, around which several people sit facing a speaker standing next to a screen at the far end of the room; on the table are pieces of paper, pens, pencils, and mugs.]

Katharine explored memory, myth, and trauma using a unique method: fixed cameras installed inside taxis, with no filmmaker present. This “abandoned camera” approach created a space that felt intimate and undirected, like a moving confessional booth, allowing deeply personal stories to surface.

By simply asking, “Has anything happened to you since the tsunami that you’ve never spoken about?” the project uncovered raw, unstructured truths, stories that traditional interviews might never reach.

Katharine’s work reminds us that storytelling can be an evocative form of research. By using creative, non-linear methods, she uncovered stories that traditional data collection approaches might have missed. Sometimes, the most powerful insights come when the researcher steps back, listens, and lets the story unfold on its own.

Joseph Cook – Co-creation and creativity in Citizen Science

Joseph Cook leads the UCL Citizen Science Academy at the UCL Institute for Global Prosperity.

He shared how the Academy trains and supports community members to become co-researchers in community projects that matter to them, often co-designed with local councils on topics like health, prosperity, and wellbeing.

Joseph shared a range of inspiring creative work:

  • Zines made by young citizen scientists in Tower Hamlets, including a research rap and reflections on life in the care system.
  • A silk scarf by Aysha Ahmed, filled with symbols of home and belonging drawn from displaced communities in Camden.
  • A tea towel capturing community recipes and food memories from Regent’s Park Estate, part of a project on culture and cohesion.
  • Creative exhibitions such as The Architecture of Pharmacies, exploring healthcare spaces through the lens of lived experience.

Instead of asking communities to answer predefined questions, the Academy invites people to ask their own, reframing participants as experts in their own lives.

Joseph was joined by Mohammed Rahman, a citizen scientist and care leaver, awarded a UCL Citizen Science Certificate through the Academy’s ActEarly ‘Citizen Science with Care Leavers’ programme. Through his zine and audio documentary, Mohammed shared personal insights on wellbeing, support and independence, showing how storytelling deepens understanding and drives change.

Laid out on a desk, there is a silk scarf on which are depicted small images and words. There are three people behind the desk, two are standing and one is sitting, all looking at the scarf. One of the people standing is pointing to something on the scarf and appears to be describing this to others who do not appear in the photo.

From thinking to making

After the talks, participants reflected and got creative. They explored evaluation methods like the “4Ls” (Liked, Learned, Lacked, Longed For) and discussed embedding co-design throughout projects, including evaluation, and why it’s vital to involve communities from the start.

Participants made badges, sketchnoted their reflections, and took on a “Zine in 15 Minutes” challenge, contributing to a collective zine on creativity and community.

Final reflections

Creativity isn’t an add-on – it’s essential. It helps us ask better questions, involve more people, and communicate in ways that resonate. Methods like sketchnoting, visual metaphors, zine-making, and creative media open research and engagement to a wider range of voices and experiences.

Creative work doesn’t need to be academic papers – it can be a rap, a tea towel, or a short film. Creativity sparks insight, supports co-creation, and builds meaningful connection.

Whether through drawing, storytelling, or simply asking different questions, we must continue making space for creativity – in our projects and institutions.

Find out more

Get involved!

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on Bluesky, and join our mailing list to be part of the conversation!

Open Science & Scholarship Festival 2025: next steps, links and recordings!

By Kirsty, on 25 June 2025

It has been a couple of weeks since our debut collaboration with our friends at LSE and the Francis Crick Institute, and I can safely say that the festival was a roaring success. We would all like to extend a huge thank you to everyone who came to any of the events, in person or online – it was great to see so many people engaging with Open Science!

In case you missed it, the festival ran from 2 – 6 June and offered an exciting array of sessions: creative workshops, informal networking, case studies, online and in-person panel discussions, and technology demonstrations. The full programme is still available online, or keep scrolling for links, recordings and upcoming content!


Monday 2 June

Open Methods with Protocols.io

This workshop introduced the benefits of publishing your methods and protocols as a separate open access output. As this was an in-person event, there is no recording available, but you can access Protocols.io and their excellent free help and support guidance online.

Creativity in research and engagement

This session of making, sharing and storytelling has its own blog post – read it now!

Tuesday 3 June

Co-producing research with Special Collections: Prejudice and Power case study

UCL Special Collections presented their experiences of using co-creation to engage with rare book and archive collections, especially as applied to the recent Prejudice and Power project, which consisted of a range of co-creation, community and academic initiatives focussed on our holdings in response to the university’s historic role in promoting eugenics. The session also briefly discussed wider co-creation activity in UCL Special Collections, the lessons learned and how these are being embedded in practice.

Resources:

Scaling up Diamond Open Access Journals

Diamond open access (OA) is championed as a more open, equitable, inclusive and community-driven journal publishing model, especially when compared with commercially owned, author-pays and subscription models. Demand is growing rapidly, but journals lack the capacity and funding to meet it sustainably. There are many barriers to solving these complex challenges, but one new initiative, the Open Journals Collective, aims to disrupt the current landscape by offering a more equitable and sustainable alternative to the traditional and established payment structures.

During this interactive session we heard from the conveners of the collective about why and how it came about, what it offers and why it is needed. We also heard about experiences with various OA journal models, as well as the perspective of a journal editor who resigned from a subscription journal and successfully launched a new, competing diamond open access journal.

Access the recording below or on the UCL Press website.

Professionalising data, software, and infrastructure support to transform open science

This workshop focused on the needs of both researchers and technical support, seeking to understand the answers to some fundamental questions: If you are a researcher – what do you need in terms of technical support and services? If you are a research technology professional – what skill and training do you need to be able to offer this support?

The team in ARC behind this fascinating session have shared a write-up about it which you can read on their blog page.

Wednesday 4 June:

Should reproducibility be the aim for open qualitative research? Researchers’ perspectives

Reproducibility is often touted among quantitative researchers as a necessary step to make studies rigorous. To determine reproducibility – whether the same analyses of the same data produce the same results – the raw data and code must be accessible to other researchers. Qualitative researchers have also begun to consider making their data open. However, for researchers in fields where cultural knowledge plays a key role in the analysis of qualitative data, openness may invite misrepresentation through re-use of the data by researchers unfamiliar with the cultural and social context in which it was produced.

This event asked whether reproducibility should be the aim for open qualitative data, and if not, why should researchers make their qualitative data open and what are the other methods used to establish rigour and integrity in research?

Access the recording below or on the LSE Library YouTube Channel.

How open is possible, how closed is necessary? Navigating data sharing whilst working with personal data

In the interests of transparency and research integrity, researchers are encouraged to open up more of their research process, including sharing data. However, for researchers working with personal data, including interview and medical data, there are important considerations for sharing. This event brought together researchers from a range of disciplines to share their experiences and strategies for open research when working with personal data.

Access the recording below or on the LSE Library YouTube Channel.

Thursday 5 June: Open Research in the Age of Populism

Political shifts around the world, from the Trump administration in the US to Orban’s government in Hungary, are making it more important than ever to have reliable research freely available. However, these governments are also making it more risky to openly share the results of research in many countries and disciplines. Alongside the political censorship of research in some countries there are also changes to research funding, research being misrepresented and used to spread misinformation online, and concerns about the stability of open research infrastructure which is funded by the state. In this session the panellists considered the value of open knowledge, the responsibilities of individual researchers and institutions to be open and how you can protect yourself when making your research openly available.

Access the recording below or on the LSE Library YouTube Channel.

Friday 6 June

Authorship in the era of AI

With the rapid growth of AI tools over the past three years, there has been a corresponding rise in the number of academics and students using them in their own writing. While it is generally agreed that we still expect people to be the “authors” of their work, interpreting that expectation is often a nuanced and subjective judgement for the writer. This in-depth panel discussed how we think about “authorship” for AI-assisted writing.

This session was so in-depth that the panel and the chair have worked together to create a summary of the discussion, complete with the resources and themes shared, which you can read on a separate blog post.

Open Science & Scholarship Festival 2025 now open for booking!

By Kirsty, on 24 April 2025

We are delighted to be able to finally launch the full programme for the Open Science & Scholarship Festival 2025 in collaboration with LSE and the Francis Crick Institute.


The festival will run from 2 – 6 June and includes an exciting array of sessions: creative workshops, informal networking, case studies, online and in-person panel discussions, and technology demonstrations. (more…)

Announcing: UCL Statement on Principles of Authorship

By Kirsty, on 25 October 2024

As we conclude International Open Access Week, we have been inspired by a wealth of discussions and events across UCL! This week, we have explored balancing collaboration and commercialisation, highlighted the work of Citizen Science initiatives, discussed the role of open access textbooks in education, and addressed key copyright challenges in the age of AI to ensure free and open access to knowledge.

Today, we are excited to introduce the UCL Statement of Principles of Authorship. This new document, shaped through a co-creation workshop and community consultation, provides guidance on equitable authorship practices and aims to foster more inclusive and transparent research collaboration across UCL.


The team at the UCL Office for Open Science & Scholarship is pleased to launch this document. The principles grew out of a co-creation workshop, were developed in consultation with our academic community, and are now available for wider use, linked from our website.

A diverse group of participants at the 'Challenges of Equity in Authorship' workshop in 2023 are engaged in discussion around tables in a large room with high ceilings and arched windows. A presentation screen displays their reflections, and the open space is filled with bright lighting.

Participants during ‘Challenges of Equity in Authorship’ workshop in 2023

In August 2023, the OOSS Team posted a discussion about the challenges of equity in authorship and the co-production workshop held during that year’s Open Science & Scholarship Conference. We outlined some preliminary considerations that led to the workshop, summarised the discussion and emerging themes, including the need to more widely acknowledge contributions to research outputs, the power dynamics involved in authorship decisions, and ways to make academic language and terminology accessible for contributors outside the academic ‘bubble’.

The outcomes of the workshop were then used as the basis for developing the new Statement of Principles of Authorship. This document provides general advice, recommendations and requirements for authors, designed to complement the UCL Code of Conduct for Research and align with existing published frameworks, such as the Technician Commitment or CRediT. The document outlines four core principles and a variety of applications for their use across the broad range of subject areas and output types that are produced across the institution. It also proposes standards for affiliations and equitable representation of contributors.

While academic publishing is a complex and changing environment, these principles are intended as a touchstone for discussions around authorship rather than explicit expectations or policy. They can guide decision-making, help authors present affiliations consistently and traceably over the long term, and empower people to request inclusion or to plan for including citizen scientists and other collaborators in their work.

We look forward to hearing the many ways that these principles can be used by the community!

For a full overview of our #OAWeek 2024 posts, visit our blog series page. To learn more about the Principles of Authorship and stay updated on open science initiatives across UCL, sign up for our mailing list.

 

From Policy to Practice: UCL Open Science Conference 2024

By Kirsty, on 11 July 2024

Last month, we hosted our 4th UCL Open Science Conference! This year, we focused inward to showcase the innovative and collaborative work of our UCL researchers in our first UCL community-centered conference. We were excited to present a strong lineup of speakers, projects, and posters dedicated to advancing open science and scholarship. The conference was a great success, with nearly 80 registrants and an engaged online audience.

If you missed any sessions or want to revisit the presentations, you can find highlights, recordings, and posters from the event below.

Session 1 – Celebrating Our Open Researchers

The conference began with a celebration of the inaugural winners of the Open Science & Scholarship Awards, recognizing researchers who have significantly contributed to open science. This session also opened nominations for next year’s awards.

Access the full recording of Session 1 on MediaCentral.

Session 2: Policies and Practice

Katherine Welch introduced an innovative approach to policy development through collaborative mosaic-making. Ilan Kelman discussed the ethical limits of open science. He reminded us of the challenges and considerations when opening up research and data to the public. David Perez Suarez introduced the concept of an Open Source Programme Office (OSPO) at UCL and, with Sam Ahern, showcased the Centre of Advanced Research Computing’s unique approach to creating and sharing open educational resources.

Access the full recording of Session 2 on MediaCentral.

Session 3: Enabling Open Science and Scholarship at UCL

This session introduced new and updated services and systems at UCL designed to support open science and scholarship. Highlights included UCL Profiles, Open Science Case Studies, the UCL Press Open Textbooks Project, UCL Citizen Science Academy, and the Open@UCL Newsletter.

Access the full recording of Session 3 on MediaCentral.

Session 4: Research Projects and Collaborations

This session featured presentations on cutting-edge research projects and collaborations transforming scholarly communication and advancing scientific integrity. Klaus Abels discussed the journey of flipping a subscription journal to diamond open-access. Banaz Jalil and Michael Heinrich presented the ConPhyMP guidelines for chemical analysis of medicinal plant extracts, improving healthcare research. Francisco Duran explored social and cultural barriers to data sharing and the role of identity and epistemic virtues in creating transparent and equitable research environments.

Access the full recording of Session 4 on MediaCentral.

Posters and Networking:

We also hosted a Poster Session and Networking event where attendees explored a variety of posters showcasing ongoing research across UCL’s disciplines, accompanied by drinks and snacks. This interactive session provided a platform for researchers to present their work, exchange ideas, and foster collaborations within and beyond the UCL community.

Participants engaged directly with presenters, learning about research findings and discussing potential synergies for future projects. Themes covered by the posters included innovative approaches to public engagement by UCL’s Institute of Global Prosperity and Citizen Science Academy, as well as discussions on the balance between open access and data security in the digital age.

Explore all the posters presented at the UCL Open Science Conference 2024 on the UCL Research Data Repository. This collection is under construction and will continue to grow.

Reminder for Attendees – Feedback

For those who attended, please take a minute to complete our feedback form. Your input is very important to improve future conferences. We would appreciate your thoughts and suggestions.

A Huge Thank You!

Thank you to everyone who joined us for the UCL Open Science Conference 2024. Your participation and enthusiasm made this event a great success. We appreciate your commitment to advancing open science and scholarship across UCL and beyond, and we look forward to seeing the impact of your work in the years to come.

Please watch the sessions and share your feedback with us. Your insights are invaluable in shaping future events and supporting the open science community.

We look forward to seeing you at next year’s conference!

UCL Open Science & Scholarship Conference 2024: Programme Now Available!

By Rafael, on 13 June 2024

With less than a week until this year’s UCL Open Science Conference, anticipation is building! We are thrilled to announce that the programme for the UCL Open Science & Scholarship Conference 2024 is now ready. Scheduled for Thursday, June 20, 2024, from 1:00 pm to 5:00 pm BST, both onsite at UCL and online, this year’s conference promises to be an exciting opportunity to explore how the UCL community is leading Open Science and Scholarship initiatives across the university and beyond.

Programme Outline:

1:00-1:05 pm
Welcome and Introductions
Join us as we kick off the conference with a warm welcome and set the stage for the afternoon.

1:05-1:45 pm
Session 1: Celebrating our Open Researchers
Learn about the outstanding contributions of our Open Science champions and their work recognised at the UCL Open Science & Scholarship Awards last year.

1:45-2:45 pm
Session 2: Policies and Practice
Explore discussions on policy development and ethical considerations in Open Science, including talks on collaborative policy-making and the role of Open Source Programme Offices (OSPOs).

2:45-3:15 pm
Coffee Break
Network and engage with fellow attendees over coffee, tea, and biscuits.

3:15-4:00 pm
Session 3: Enabling Open Science and Scholarship at UCL
Check out services and initiatives that empower UCL researchers to embrace Open Science, including updates on UCL Profiles, UCL Citizen Science Academy, and Open Science Case Studies.

4:00-4:45 pm
Session 4: Research Projects and Collaborations
Discover cutting-edge research projects and collaborations across UCL, including case studies involving the transition to Open Access publishing, reproducible research using medicinal plants, and social and cultural barriers to data sharing.

" "4:45-5:00 pm
Summary and Close of Main Conference
Reflect on key insights from the day’s discussions and wrap up the main conference.

5:00-6:30 pm
Evening Session: Poster Viewing and Networking Event
Engage with our presenters and attendees over drinks and nibbles, while exploring posters showcasing research and discussions in Open Science and Scholarship through diverse perspectives.

For the complete programme details, please access the full document uploaded to the UCL Research Data Repository, or scan the QR code.

Join us – Tickets are still available!
Whether you’re attending in person or joining us virtually, we invite you to participate in discussions that shape the future of Open Science and Scholarship at UCL. Sales will close on Monday. Secure your spot now! Register here.

Thank you!
Thank you once more to everyone who submitted their ideas to the Call for Papers and Posters. We received brilliant contributions and are grateful for our packed programme of insightful discussions and projects from our community.

We look forward to welcoming you to the UCL Open Science & Scholarship Conference 2024!

Get involved!

alt=""The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities. Follow us on X, formerly Twitter, LinkedIn, and join our mailing list to be part of the conversation!

 

(Update: Deadline Extended!) Call for Papers & Posters – UCL Open Science Conference 2024 

By Rafael, on 21 March 2024

Theme: Open Science & Scholarship in Practice 
Date: Thursday, June 20th, 2024 1-5pm, followed by Poster display and networking
Location: UCL Institute of Advanced Studies, IAS Common Ground room (G11), South Wing, Wilkins Building 

We are delighted to announce the forthcoming UCL Open Science Conference 2024, scheduled for June 20, 2024. We are inviting submissions for papers and posters showcasing innovative practices, research, and initiatives at UCL that exemplify the application of Open Science and Scholarship principles. This internally focused event aims to showcase the dynamic landscape of Open Science at UCL and explore its practical applications in scholarship and research, including Open Access Publishing, Open Data and Software, Transparency, Reproducibility, Open Educational Resources, Citizen Science, Co-Production, Public Engagement, and other open practices and methodologies. Early career researchers and PhD students from all disciplines are particularly encouraged to participate.

A group of attendees gathered around four rectangular tables engaging in discussions. In the middle of the room, a screen displays the text: "What are the challenges and opportunities that need to be addressed to create equitable conditions in relation to authorship?"

Attendees of the UCL Open Science Conference 2023 participating in a workshop

Conference Format: 

Our conference will adopt a hybrid format, offering both in-person and online participation options, with a preference for in-person attendance. The afternoon will feature approximately four thematic sessions, followed by a poster session and networking opportunities. Session recordings will be available for viewing after the conference. 

Call for papers

Submission Guidelines: 

We invite all colleagues at UCL to submit paper proposals related to Open Science and Scholarship in Practice; some example themes are listed below. Papers could include original research, case studies, practical implementations, and reflections on Open Science initiatives. Submissions should adhere to the following guidelines: 

  • Abstracts: Maximum 300 words
  • Presentation Length: 15 minutes (including time for questions)
  • Deadline for Abstract Submission: F̶r̶i̶d̶a̶y̶, A̶p̶r̶i̶l̶ 2̶6̶  Friday, May 3. (Deadline Extended!) 

Please submit your abstract proposals using this form.  

Potential Subthemes: 

  • Case Studies and Best Practices in Open Science and Scholarship
  • Open Methodologies, Transparency, and Reproducibility in Research Practices
  • Open Science Supporting Career Development and Progression
  • Innovative Open Data and Software Initiatives
  • Promoting and Advancing Open Access Publishing within UCL
  • Citizen Science, Co-Production, and Public Engagement Case Studies
  • Open Educational Resources to Support Teaching and Learning Experiences

Call for Posters

Session Format: 

The poster session will take place in person during the evening following the afternoon conference. Posters will be displayed for networking and engagement opportunities. Additionally, posters will be published online after the conference, potentially through the Research Data Repository. All attendees are encouraged to participate in the poster session, offering a platform to present their work and engage in interdisciplinary discussions. 

Submission Guidelines: 

All attendees are invited to propose posters showcasing their work related to Open Science and Scholarship in Practice. Posters may include research findings, project summaries, methodological approaches, and initiatives pertaining to Open Science and Scholarship. 

Deadline: Friday, May 24 

Please submit your poster proposals using this form.

Next Steps

A neon colourful sign that says 'Watch this Space'

Photo by Samuel Regan-Asante on Unsplash

Notifications of acceptance will be sent in the week ending May 10th for Papers and June 7th for Posters. 

Recordings of the UCL Open Science Conference 2023 are available on this blog post from May 2023.

For additional information about the conference or the calls, feel free to reach out to us at openscience@ucl.ac.uk. 

Watch this space for more news and information about the upcoming UCL Open Science Conference 2024!