Open@UCL Blog

Open Access Week Webinar: Who Owns Our Knowledge?

By Naomi, on 3 November 2025


Graphic from openaccessweek.org, photo by Greg Rakozy

To mark this year’s Open Access Week (20-26 October), the UCL Office for Open Science and Scholarship hosted a webinar exploring this year’s theme: Who Owns Our Knowledge?

Facilitated by Bibliometrics Support Officer Andrew Gray, the session brought together a panel of four speakers from different areas of UCL, who offered their time and expertise to consider this complex question.

  • Lauren Cantos is the Research Integrity and Assurance Officer in the Compliance and Assurance team. She previously worked in the Research Ethics team at UCL, and her background is in Humanities and English research.
  • Christine Daoutis is the UCL Copyright Support Officer, based in the library. Her background is in open access, open science and copyright, particularly the ways copyright interacts with open practices.
  • Catherine Sharp is Head of Open Access Services in Library, Culture, Collections and Open Science (LCCOS). She manages the Open Access Team, which delivers Gold open access, including transformative agreements, and Green open access through UCL’s repository, UCL Discovery, for UCL staff and students.
  • Muki Haklay is a Professor of Geographic Information Science in the UCL Department of Geography. He founded and co-directs the UCL Extreme Citizen Science group. He is an expert in citizen science and has contributed to the US Association for Advancing Participatory Science (formerly the Citizen Science Association) and the European Citizen Science Association (ECSA).

The webinar began with a short reflection on the theme from each of the panellists, followed by a discussion structured around these questions:

  1. What does “ownership” mean for research – for outputs and for data? And when we define what “ownership” means, how do we decide who the owners are – or who they should be?
  2. We often think of ownership as linked to “authorship”. A wide range of people contribute to research – including many outside academia – but not all become named as authors. How do we recognise them?
  3. What happens when copyright or other IP rights conflict with academic expectations around ownership and authorship?
  4. How is the production and the dissemination of research influenced by commercial considerations around ownership and access?

It was a thought-provoking discussion in which the panellists touched on a wide range of subjects, including the value of considering attribution from the outset of a project, recognising contributions from individuals outside academic structures, understanding copyright concerns when publishing, and how UCL’s updated Publications Policy can help with this. As well as answering questions, the session raised new ones, and, as is often the case, their complexity didn’t allow for straightforward answers. As Andrew aptly put it towards the end of the webinar, ‘sometimes saying the question is complicated is an answer in itself’. This particularly resonated with regard to AI tools failing to attribute authors, and to the matter of widening participation in the production of knowledge.

If this has piqued your interest, or you attended the webinar and would like a recap, you can watch the full recording now:

 

Access the full recording on MediaCentral


We are very grateful to the speakers, who contributed a lot of insight and provided much to reflect on in this webinar. We hope the conversation around these questions will continue and that answers will develop as we navigate the complexities.

alt=""

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities.

Follow us on Bluesky, LinkedIn, and join our mailing list to be part of the conversation!


‘Who Owns Our Knowledge?’ Understanding How Copyright Can Shape the Discourse Around Open Scholarship

By Naomi, on 21 October 2025

Guest post by Christine Daoutis, Copyright Support Officer at UCL


Graphic from openaccessweek.org, photo by Greg Rakozy

The theme of this year’s International Open Access Week is a question – and a call for collaboration. By addressing ‘who owns our knowledge’, it invites diverse communities to recognise and challenge existing assumptions about how scholarship is created, disseminated and built upon; to recognise power dynamics that shape these assumptions; and to make decisions that best serve the interests of the public and the academic community.

Understanding how copyright frames these assumptions, power dynamics and decisions is essential. In the strictest sense, who ‘owns’ scholarship (perceived as the IP rights in the outcomes of research – publications, research data and any other outputs created in the life of a research project) is, after all, defined by legislation and by the terms of publishing agreements and other contracts. In a broader sense, ‘owning’ can determine the ‘what’, the ‘how’ and the ‘who’ of scholarship in the first place: what is selected to be funded and published? How will the outcomes be disseminated? And crucially, who is able (or not able) to access, understand, benefit from and possibly build on the outcomes of a work? While many of these questions depend on IP rights, other factors (including criteria of research quality and impact, academic freedom, linguistic and cultural barriers to access) also influence how we address them.


Image from www.freepik.com

Keeping close to this year’s theme, this post focuses on three key copyright-related approaches that can help researchers adopt practices supporting open scholarship.

 

  1. Understanding authorship and copyright ownership
    To make a work as open as possible, it is first necessary to establish who the rights owner is, as it is the rights owner who has control over reproducing and disseminating the work. It is natural to assume that the author(s) of a work should be its owner(s). However, this is determined by copyright laws and by contract. In the UK, the first owner of a work is its author. However, if the work was created in the course of employment, the employer is the owner unless there is an agreement that says otherwise (CDPA 11). Understanding – and where necessary, negotiating – ownership empowers authors to make their research widely available and reusable. This involves reading and understanding institutional IP policies and the terms of grant agreements, publisher agreements and collaboration/co-production agreements. In terms of publishing, rights retention policies (covered in another post this week) ensure that authors and their institutions keep key rights enabling them to make their research articles immediately available under the terms of an open licence.
  2. Addressing authorship and ownership in collaborations
    Moral rights – which include the right to be attributed as the author of a work – are just as important as economic rights when addressing copyright. Deciding who is a co-author of a work, and in what order co-authors should be credited, is essential. Further, contributions to a research project that may or may not also involve direct authorship of a publication should be established and acknowledged. This includes acknowledging contributions by research participants, citizen science participants and anyone who has played an advisory or supporting role in the research, by applying standards such as the Contributor Roles Taxonomy (CRediT).
  3. Understanding and using open licences
    Open licences, including Creative Commons licences and open source licences, support the dissemination and reuse of a wide range of works. While research funders have requirements around the use of licences (for example, the CC BY licence for research publications), researchers can also apply licences to a broader range of materials (educational resources, images, preprints, datasets). Particularly in the age of AI, understanding how licences such as Creative Commons work is important, both for authors and for users of scholarly works. Creative Commons is also introducing ‘preference signals’ to support transparency and reciprocity in how scholarly works are used by AI.

Further Support

The UCL copyright service helps you navigate these issues through training, discussion and opportunities to follow and participate in current debates.

 

alt=""

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities.

Follow us on Bluesky, LinkedIn, and join our mailing list to be part of the conversation!


Authorship in the Era of AI – Panel Discussion

By Naomi, on 9 July 2025

Guest post by Andrew Gray, Bibliometrics Support Officer

This panel discussion at the 2025 Open Science and Scholarship Festival was made up of three professionals with expertise in different aspects of publishing and scholarly writing, across different sectors – Ayanna Prevatt-Goldstein, from the UCL Academic Communication Centre focusing on student writing; Rachel Safer, the executive publisher for ethics and integrity at Oxford University Press, and also an officer of the Committee on Publication Ethics, with a background in journal publishing; and Dhara Snowden, from UCL Press, with a background in monograph and textbook publishing.

We are very grateful to everyone who attended and brought questions or comments to the session.

This is a summary of the discussion from all three panel members, and use of any content from this summary should be attributed to the panel members. If you wish to cite this, please do so as A. Prevatt-Goldstein, R. Safer & D. Snowden (2025). Authorship in the Era of AI. [https://blogs.ucl.ac.uk/open-access/2025/07/09/authorship-in-the-era-of-ai/]

Where audience members contributed, this has been indicated. We have reorganised some sections of the discussion for better flow.

The term ‘artificial intelligence’ can mean many things, and often a wide range of different tools are grouped under the same general heading. This discussion focused on ‘generative AI’ (large language models), and on their role in publishing and authorship rather than their potential uses elsewhere in the academic process.

Due to the length of this write-up, you can directly access each question using the following links:
1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?


1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

Among researchers, a number of surveys by publishers have suggested that 70-80% of researchers are using some form of AI, broadly defined, and a recent Nature survey suggested this is fairly consistent across different locations and fields. However, there was a difference by career stage, with younger researchers feeling it was more acceptable to use it to edit papers, and by first language, where non-English speakers were more likely to use it for this as well.

There is a sense that publishers in STEM fields are more likely to have guidance and policy for the use of AI tools; in the humanities and social sciences, this is less well developed, and publishers are still in the process of fact-finding and gathering community responses. There may still be more of a stigma around the use of AI in the humanities.

In student writing, a recent survey from HEPI found that from 2024 to 2025, the share of UK undergraduates who used generative AI for generating text had gone from a third of students to two thirds, and only around 8% said they did not use generative AI at all. Heavier users included men, students from more advantaged backgrounds, and students with English as a second or additional language.

There are some signs of variation by discipline in other research. Students in fields where writing is seen as an integral part of the discipline are more concerned with developing their voice and a sense of authorship, and are less likely to use it for generating text – or at least are less likely to acknowledge it – and where they do, they are more likely to personalise the output. By comparison, students in STEM subjects are more likely to feel that they are being assessed on the content – the language they use to communicate it might be seen as less important.

[For more on this, see A. Prevatt-Goldstein & J. Chandler (forthcoming). In my own words? Rethinking academic integrity in the context of linguistic diversity and generative AI. In D. Angelov and C.E. Déri (Eds.), Academic Writing and Integrity in the Age of Diversity: Perspectives from European and North American Higher Education. Palgrave.]


2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

Students in particular may be more willing to use it as they often prioritise the ideas being expressed over the mode of expressing them, and the idea of authorship can be less prominent in this context. But at a higher level, for example among doctoral students, we find that students are concerned about their contribution and whether perceptions of their authorship may be lessened by using these tools.

A study among publishers found that the main way AI tools were being used was not to replace people at specific tasks, but to make small efficiency savings in the way people were doing them. This ties into the long-standing use of software to assist copyediting and typesetting.

Students and academics are also likely to see it from an efficiency perspective, especially those who are becoming used to working with generative AI tools in their daily lives and so are more likely to feel comfortable using them in academic and professional contexts. Academics may feel pressure to use tools like this to keep up a high rate of publication. But spending less time on a particular piece of work may trade quality against speed; we might also see trade-offs in the individuality and nuance of the language, and fewer novel and outlier ideas being developed, as generative AI involvement becomes more common.

Ultimately, though, publishers struggle to monitor researchers’ use of generative AI in their original research – they are dependent on institutions training students and researchers, and on the research community developing clearer norms, and perhaps there is also a role for funders to support educating authors about best practices.

Among all users, a significant – and potentially less controversial – role for generative AI is to help non-native English speakers with language and grammar, and, to a more limited degree, with translation – though quality here varies, and publishers would generally recommend that any AI translation be checked by a human specialist. However, this has its own costs.

With English as a de facto academic lingua franca, students (and academics) who did not have it as a first language were inevitably always at a disadvantage. Support could be found – perhaps by paying for help, perhaps through friends, family or colleagues who could support language learning – but this was very much support that was available more to some students than to others, due to costs or connections, and generative AI tools have the potential to democratise this support to some degree. However, this causes a corresponding worry among many students that the bar has been raised – they feel they are now expected to use these tools or else be disadvantaged compared to their peers.


3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

In some ways, this is not a new debate. As we develop new technologies which change the way we write – the printing press, the word processor, the spell checker, the automatic translator – people have discussed how it changes ‘authorship’. But all these tools have been ways to change or develop the words that someone has already written; generative AI can go far beyond that, producing vastly more material without direct involvement beyond a short prompt.

A lot of people might treat a dialogue with generative AI, and the way they work with those outputs, in the same way as a discussion with a colleague, as a way to thrash out ideas and pull them together. We have found that students are seeing themselves shifting from ‘author’ to ‘editor’, claiming ownership of their work through developing prompts and personalising the output, rather than through having written the text themselves. There is still a concept of ownership, a way of taking responsibility for the outcome, and for the ideas being expressed, but that concept is changing, and it might not be what we currently think of as ‘authorship’.

Sarah Eaton’s work has discussed the concept of ‘Post-plagiarism’ as a way to think about writing in a generative AI world, identifying six tenets of post-plagiarism. One of those is that humans can concede control, but not responsibility; another is that attribution will remain important. This may give us a useful way to consider authorship.

In publishing, ‘authorship’ can be quite firmly defined by the criteria set by a specific journal or publisher. There are different standards in different fields, but one of the most common is the ICMJE definition, which sets out four requirements to be considered an author – substantial contribution to the research; drafting or editing the text; having final approval; and agreeing to be accountable for it. In the early discussions around generative AI tools in 2022, there was a general agreement that these could never meet the fourth criterion, and so could never become ‘authors’; they could be used, and their use could be declared, but it did not conceptually rise to the level of authorship as they could not take ownership of the work.

The policy that UCL Press adopted, drawing on those from other institutions, looked at ways to identify potential responsible uses, rather than imposing a blanket ban – which it was felt would simply lead to people not being transparent when they had used these tools. It prohibited ‘authorship’ by generative AI tools, as is now generally agreed; it required that authors be accountable and take responsibility for the integrity and validity of their work; and it asked for disclosure of generative AI use.

Monitoring and enforcing that is hard – there are a lot of systems claiming to test for generative AI use, but they may not work for all disciplines, or all kinds of content – so it does rely heavily on authors being transparent about how they have used these tools. They are also reliant on peer reviewers flagging things that might indicate a problem. (This also raises the potential of peer reviewers using generative AI to support their assessments – which in turn indicates the need for guidance about how they could use it responsibly, and clear indications on where it is or is not felt to be appropriate.)

Generative AI potentially has an interesting role to play in publishing textbooks, which tend to be more of a survey of a field than original thinking, but do still involve a dialogue with different kinds of resources and different aspects of scholarship. A lot of the major textbook platforms are now considering ways in which they can use generative AI to create additional resources on top of existing textbooks – test quizzes or flash-cards or self-study resources.


4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

There is a general consensus emerging among publishers that authors should disclose the use of AI tools at the point of submission or revision, though where the line is drawn varies. For example, Sage requires authors to disclose the use of generative AI, but not ‘assistive’ AI such as spell-checkers or grammar checkers. The STM Association recently published a draft set of recommendations for using AI, with nine classifications of use. (A commenter in the discussion also noted a recent proposed AI Disclosure Framework, identifying fourteen classes.)

However, we know that some people, especially undergraduates, spend a lot of time interacting with generative AI tools in a whole range of capacities, around different aspects of the study and writing process, which can be very difficult to define and describe – there may not be any lack of desire to be transparent, but it simply might not fit into the ways we ask them to disclose the use of generative AI.

There is an issue about how readers will interpret a disclosure. Some authors may worry that there is a stigma attached to using generative AI tools, and be reluctant to disclose if they worry their work will be penalised, or taken less seriously, as a result. This is particularly an issue in a student writing context, where it might not be clear what will be done with that disclosure – will the work be rejected? Will it be penalised, for example a student essay losing some marks for generative AI use? Will it be judged more sceptically than if there had been no disclosure? Will different markers, or editors, or peer-reviewers make different subjective judgements, or have different thresholds?

These concerns can cause people to hesitate before disclosing, or to avoid disclosing fully. But academics and publishers are dependent on honest disclosure to identify inappropriate use of generative AI, so may need to be careful in how they frame this need to avoid triggering these worries about more minor use of generative AI. Without honest disclosure, we also have no clear idea of what writers are using AI for – which makes it all the harder to develop clear and appropriate policies.

For student writing, the key ‘reader’ is the marker, who will also be the person to whom generative AI use is disclosed. But for published writing, once a publisher has a disclosure of AI use, they may need to decide what to pass along to the reader. Should readers be sent the full disclosure, or is that overkill? It may include things like idea generation, assistance with structure, or checking for more up-to-date references – these might be useful for the publisher to know, but might not need to be disclosed anywhere in the text itself. Conversely, something like images produced by generative AI might need to be explicitly and clearly disclosed in context.

The recent Nature survey mentioned earlier showed that there is no clear agreement among academics as to what is and isn’t acceptable use, and it would be difficult for publishers to draw a clear line in that situation. They need to be guided by the research community – or communities, as it will differ in different disciplines and contexts.

We can also go back to the pre-GenAI assumptions about what used to be expected in scholarly writing, and consider what has changed. In 2003, Diane Pecorari identified three assumptions underlying transparency in authorship:

1. that language which is not signaled as quotation is original to the writer;
2. that if no citation is present, both the content and the form are original to the writer;
3. that the writer consulted the source which is cited.

There is a – perhaps implicit – assumption among readers that all three of these are true unless otherwise disclosed. But do those assumptions still hold among a community of people – current students – who are used to the ubiquitous use of generative AI? On the face of it, generative AI would clearly break all three.

If we are setting requirements for transparency, there should also be consequences for breach of transparency – from a publisher’s perspective, if an author has put out a generative AI produced paper with hallucinated details or references, the journal editor or publisher should be able to investigate and correct or retract it, exactly as would be the case with plagiarism or other significant issues.

But there is a murky grey area here – if a paper is otherwise acceptable and of sufficient quality, but does not have appropriate disclosure of generative AI use, would that in and of itself be a reason for retraction? At the moment, this is not on the COPE list of reasons for retraction – it might potentially justify a correction or an editorial note, but not outright retraction.

Conversely, in the student context, things are simpler – if it is determined that work does not belong to the student, whether that be through use of generative AI or straightforward plagiarism, then there are academic misconduct processes and potentially very clear consequences which follow from that. These do not necessarily reflect on the quality of the output – what is seen as critical is the authorship.


5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

A working group at UCL put together a first set of guidance on using generative AI in early 2023, focusing on ethics in the context of learning outcomes – what is it that students are aiming to achieve in their degree, and will generative AI help or not in that process? But ethical questions also emerged in terms of whose labour had contributed to these tools, what the environmental impacts were, and, importantly, whether students were able to opt out of using generative AI. There are no easy answers to any of these, but they remain very much ongoing questions.

Recent work from the MLA on AI literacies for students is also informative here, in terms of what it expects students using AI to be aware of.


6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

There was no easy answer here and a general agreement that this is challenging for everyone – it can be very difficult to prove a negative. Increasing the level of transparency around disclosing AI use – and how much AI has been used – will help overall, but maybe not in individual cases.

Style-based detection tools are unreliable and can be triggered by normal academic or second-language writing styles. A lot of individuals have their own assumptions as to what is a ‘clear marker’ of AI use, and these are often misleading, leading to false positives and potentially false accusations. Many of the plagiarism detection services have scaled back or turned off their AI checking tools.

In publishing, a lot of processes have historically been run on a basis of trust – publishers, editors, and reviewers have not fact-checked every detail. If you are asked to disclose AI use and you do not, the system has to trust you did not use it, in the same way that it trusts you obtained the right ethical approvals or that you actually produced the results you claim. Many publishers are struggling with this, and feeling that they are still running to catch up with recent developments.

In academia, we can encourage and support students to develop their own voice in their writing. This is a hard skill to develop, and it takes time and effort, but it can be developed, and it is a valuable thing to have – it makes their writing more clearly their own. The growth of generative AI tools can be a very tempting shortcut for many people to try and get around this work, but there are really no shortcuts here to the investment of time that is needed.

There was a discussion of the possibility of authors being more transparent about their writing process to help demonstrate research integrity – for example, documenting how they select their references, in the way that a systematic review does, or using open notebooks. This could potentially be declared in the manuscript, as a section alongside acknowledgements and funding. Students could be encouraged to keep logs of any generative AI prompts they have used and how they have handled the outputs, to be able to disclose this in case of concerns.


7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

There are definitely some journals that have adopted more restrictive policies than the general guidance from their publisher, mostly in the STEM fields. We know that many authors may not read the specific author guidelines for a journal before submitting. Potentially we could see journals highlighting these restrictions in the submission process, and requiring the authors to acknowledge they are aware of the specific policies for that journal.


8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

A substantial issue for many publishers, particularly smaller non-commercial ones, is that so much scholarly material is now released under an open-access licence that makes it easily available for training generative AI; even if the licences forbid this, it can be difficult in practice to stop it, as seen in trade publishing. This is making authors very concerned, as they do not know how or where their material will be used, and they feel powerless to prevent it.

One potential way forward is for publishers and AI companies to reach agreements on licensing material, ensuring that there is some kind of remuneration. This is more practical for larger commercial publishers with more resources. There is also the possibility of sector-wide collective bargaining agreements, as has been seen with the Writers Guild of America, where writers were able to implement broader guardrails on how their work would be used.

It is clear that the current system is not weighted in favour of the original creators, and some form of compensation would be ideal, but we also need to be careful that any new arrangement doesn’t continue to only benefit a small group.

The issue of Creative Commons licensing regulating the use of material for AI training purposes was discussed – Creative Commons takes the position that such use may potentially be allowed under existing copyright law, but is investigating the possibility of adding a way for authors to signal their position. AI training would be allowed by most of the Creative Commons licences, but might impose specific conditions on the final model (e.g. displaying attribution, or non-commercial restrictions).

A commenter in the discussion also mentioned a more direct approach, where some sites are using tools to obfuscate artwork or building “tarpits” to combat scraping – but these can shade into being malware, so not a solution for many publishers!


9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?

Generative AI use is going to become even more prevalent and ubiquitous, and will be very much more integrated into daily life for most people. As part of that integration, ideally we would see better awareness and understanding of what it can do, and better education on appropriate use in the way that we now teach about plagiarism and citation. That education will hopefully begin at an early stage, and develop alongside new uses of the technology.

Some of our ideas around what to be concerned about will change, as well. Wikipedia was suggested as an analogy – twenty years ago we collectively panicked about the use of it by students, feeling it might overthrow accepted forms of scholarship, but then – it didn’t. Some aspects of GenAI use may simply become a part of what we do, rather than an issue to be concerned with.

There will be positive aspects of this, but also negative ones; we will have to consider how we keep a space for people who want to minimise their use of these tools, and choose not to engage with them, for practical reasons or for ethical ones, particularly in educational contexts.

There are also discussions around the standardisation of language with generative AI – as we lose a diversity of language and of expression, will we also lose the corresponding diversity of thought? Standardised, averaged language can itself be a kind of loss.

The panel concluded by noting that this is very much an evolving space, and encouraged greater feedback and collaboration between publishers and the academic community, funders, and institutions, to try and navigate where to draw the line. The only way forward will be by having these discussions and trying to agree common ground – not just on questions of generative AI, but on all sorts of issues surrounding research integrity and publication ethics.

 

Announcing: UCL Statement on Principles of Authorship

By Kirsty, on 25 October 2024

As we conclude International Open Access Week, we have been inspired by a wealth of discussions and events across UCL! This week, we have explored balancing collaboration and commercialisation, highlighted the work of Citizen Science initiatives, discussed the role of open access textbooks in education, and addressed key copyright challenges in the age of AI to ensure free and open access to knowledge.

Today, we are excited to introduce the UCL Statement of Principles of Authorship. This new document, shaped through a co-creation workshop and community consultation, provides guidance on equitable authorship practices and aims to foster more inclusive and transparent research collaboration across UCL.


The team at the UCL Office for Open Science & Scholarship is pleased to launch the UCL Statement of Principles of Authorship. These principles have been built up from a co-creation workshop and developed in consultation with our academic community and are now available for wider use, linked from our website.


Participants during ‘Challenges of Equity in Authorship’ workshop in 2023

In August 2023, the OOSS Team posted a discussion about the challenges of equity in authorship and the co-production workshop held during that year’s Open Science & Scholarship Conference. We outlined some preliminary considerations that led to the workshop and summarised the discussion and emerging themes, including the need to more widely acknowledge contributions to research outputs, the power dynamics involved in authorship decisions, and ways to make academic language and terminology accessible for contributors outside the academic ‘bubble’.

The outcomes of the workshop were then used as the basis for developing the new Statement of Principles of Authorship. This document provides general advice, recommendations and requirements for authors, designed to complement the UCL Code of Conduct for Research and align with existing published frameworks, such as the Technician Commitment or CRediT. The document outlines four core principles and a variety of applications for their use across the broad range of subject areas and output types produced across the institution. It also proposes standards for affiliations and equitable representation of contributors.

While it is true that academic publishing is a complex and changing environment, these principles are intended as a touchstone for discussions around authorship rather than explicit expectations or policy. They can guide decision-making, help people understand how affiliations should be presented for consistency and traceability in the long term, and empower people to request inclusion or to make plans to include citizen scientists or other types of collaborators in their work.

We look forward to hearing the many ways that these principles can be used by the community!

For a full overview of our #OAWeek 2024 posts, visit our blog series page. To learn more about the Principles of Authorship and stay updated on open science initiatives across UCL, sign up for our mailing list.

 

Copyright and Open Science in the age of AI: what can we all do to ensure free and open access to knowledge for all?

By Rafael, on 24 October 2024

We are approaching the end of International Open Access Week, and we have been enjoying a series of interesting insights and discussions across UCL! Earlier this week, we explored the balance between collaboration and commercialisation, highlighted the important work of Citizen Science initiatives, and looked at the growing significance of open access textbooks.

Today, Christine Daoutis, UCL Copyright Support Officer, will build on our ongoing series about copyright and open science, focusing on how we can ensure free and open access to knowledge in the age of AI, by addressing copyright challenges, advocating for rights retention policies, and discussing secondary publication rights that benefit both researchers and the public.


Open Access Week 2024 builds on last year’s theme, Community over Commercialisation, aiming not only to continue discussions but to take meaningful action that prioritises the interests of the scholarly community and the public. This post focuses on copyright-related issues that, when addressed by both individual researchers and through institutional, funder, and legal reforms, can help create more sustainable and equitable access to knowledge.


Rights retention infographic. Source: cOAlition S

Retaining author rights

Broadly speaking, rights retention means that authors of scholarly publications avoid the traditional practice of signing away their rights to publishers, typically done through a copyright transfer agreement or exclusive licence. Instead, as an author, you retain at least some rights that allow you to share and reuse your own research as openly as possible. For example, you could post your work in an open access repository, share it on academic networks, reuse it in your teaching, and incorporate it into other works like your thesis.

Many funders and institutions have specific rights retention policies that address related legal issues. If such a policy applies, and publishers are informed in advance, authors typically need to retain rights and apply an open licence (usually CC BY) to the accepted manuscript at the point of submission.

Rights retention ensures that your research can be made open access without relying on unsustainable pay-to-publish models, and without facing delays or restrictions from publishers’ web posting policies. Importantly, rights retention is not limited to published research—it can be applied to preprints, data, protocols, and other outputs throughout the research process.

Secondary Publication Rights (SPRs)

Secondary publication rights (SPRs) refer to legislation that allows publicly funded research to be published in an open access repository or elsewhere, at the same time as its primary publication in academic journals. Some European countries already have SPRs, as highlighted by the Knowledge Rights 21 study conducted by LIBER, and LIBER advocates for #ZeroEmbargo on publicly funded scientific publications. There are ongoing calls to harmonise and optimise these rights across countries, ensuring that the version of record becomes immediately available upon publication, overriding contractual restrictions imposed by publishers.

SPRs can apply to different types of research output and are meant to complement rights retention policies. However, introducing SPRs depends on copyright reform, which is not an action individual researchers can take themselves, though it’s still useful to be aware of developments in this area.


Source: Computer17293866, CC BY-SA 4.0, via Wikimedia Commons

Artificial Intelligence and your rights

The rise of Generative AI (GenAI) has introduced broader issues affecting researchers, both as users and as authors of copyrighted works. These include:

  • Clauses in subscription agreements that seek to prevent researchers from using resources their institution has subscribed to for AI-related purposes.
  • Publishers forming agreements with AI companies to share content from journal articles and books for AI training purposes, often without clear communication to authors. A recent $10 million deal between Taylor & Francis and Microsoft has raised concerns among scholars about how their research will be used by AI tools. In some cases, authors are given the option to opt in, as seen with Cambridge University Press.
  • For works already licensed for reuse, such as articles under a CC BY licence or those used under copyright exceptions, questions arise about how the work will be reused, for what purposes, and how it will be attributed.

While including published research in AI training should help improve the accuracy of models and reduce bias, researchers should have enough information to understand and decide how their work is reused. Creative Commons is exploring ‘preference signals’ for authors of CC-licensed works to address this issue.

The key issue is that transferring your copyright or exclusive rights to a publisher restricts what you can do with your own work and allows the publisher to reuse your work in ways beyond your control, including training AI models.

Using copyright exceptions in research

UK copyright law includes exceptions (known as ‘permitted acts’) for non-commercial research, private study, criticism, review, quotation, and illustration for instruction. As a researcher, you can rely on these exceptions as long as your use qualifies as ‘fair dealing’, as previously discussed in a blog post during Fair Dealing Week. Text and data mining for non-commercial research is also covered by an exception, allowing researchers to download and analyse large amounts of data to which they have lawful access.
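
To make the text and data mining point concrete, here is a minimal sketch (in Python, standard library only) of the kind of automated analysis the exception covers. The articles/ folder and the search terms are hypothetical placeholders; the assumption is that you hold lawful, plain-text copies of the material being mined for non-commercial research.

```python
# Minimal text-and-data-mining sketch: count how often chosen terms appear
# across plain-text copies of articles you have lawful access to.
# The "articles" folder and the TERMS set are hypothetical placeholders.
import re
from collections import Counter
from pathlib import Path

TERMS = {"copyright", "licence", "open access"}

def term_frequencies(folder: str) -> Counter:
    """Count whole-word occurrences of each term across all .txt files."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for term in TERMS:
            # \b anchors keep 'licence' from matching inside longer words.
            counts[term] += len(re.findall(rf"\b{re.escape(term)}\b", text))
    return counts

if __name__ == "__main__":
    for term, n in term_frequencies("articles").most_common():
        print(f"{term}: {n}")
```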

Relying on copyright exceptions involves evaluating your purpose and, for some exceptions, making a decision about what is ‘fair’. This also involves some assessment of risk. Understanding copyright exceptions helps you exercise your rights as a user of knowledge and make confident assessments as to whether and when a copyright exception is likely to apply, and when permission is necessary.


Source: www.freepik.com

Engage with copyright at UCL

The conversations sparked during Open Access Week continue throughout the year at UCL as part of ongoing copyright support and education, through the training, discussion and debate opportunities offered by the UCL copyright service.


‘Challenges of Equity in Authorship’ co-production workshop initial discussions

By Harry, on 4 August 2023

Post by Kirsty Wallis, OOSS Coordinator, and Harry Ortiz Venegas, OOSS Support Officer

Those of us who actively support Open Science initiatives often recognise that there is a way to go, and that in some places big changes may need to be made in order to succeed. As a research-intensive university, UCL recognises and embraces the role of higher education institutions within this transformation and commits to facilitating the necessary dialogues within the academic field, our student and staff body, and the wider community.

The Office for Open Science & Scholarship (OOSS) team, part of the Library, Culture, Collections and Open Science (LCCOS) department, is one of the crucial actors inside our institution in embracing Open Science values and in promoting and advocating for these complex transitions.

We propose that one of the changes that needs to happen is around the concept of authorship and what it means to all of the actors involved in research. We recognise that there are already a number of changes happening in this area, with initiatives like CRediT and rights retention for authors, but we wanted to look at it from a different angle. In the OOSS, we focus very heavily on the diversity and inclusiveness of our support services and of the research we have at UCL, and so we work hard to enable the participation of diverse stakeholders in the design of open, accessible and inclusive research practices.

Resonating with the UCL Open Science Conference 2023 theme ‘Open Science and the Case for Social Justice’, the team proposed facilitating a workshop at the end of the day to discuss some of the long-standing issues concerning credit and authorship in research practice.

As the invitation to the final activity from the conference said, ‘Often, participants in research projects do not get credit for their significant contributions in the process, but what role should they have? People involved in a research project can hold a plethora of roles, from community leaders, patients, and citizen scientists outside the academy, to academics, research assistants, technicians, librarians, data stewards and coders within. How can we promote fairer practices and encompass all of these roles in our research outputs?’

With a clear idea in mind, it was necessary to design a participatory workshop that included researchers, but also the less-heard voices and collaborators who do not often figure in academic reports. In this session, two outstanding teams from UCL joined the adventure. The Co-Production Collective is a diverse and growing community of people from various backgrounds who come together to learn, connect, and champion co-production for lasting change; it provides consultancy, delivers training and presentations, and participates in the design and implementation of research projects, all with community members involved. The Institute for Global Prosperity (IGP), part of The Bartlett, UCL Faculty of the Built Environment, is focused on redesigning prosperity for the 21st century, changing how we conceive and run our economies, and reworking our relationship with the planet; as its web pages state, IGP’s vision is to build a prosperous, sustainable, global future, underpinned by the principles of fairness and justice, and allied to a realistic, long-term vision of humanity’s place in the world.

All teams circulated the invitation among their networks to ensure participation from a range of people, not only those from academic backgrounds, ending up with a hybrid event of around 60 participants. To promote the discussion, the workshop team prepared the ground with the case study ‘Co-Producing Prosperity Research in Informal Settlements in Tanzania’, an IGP project, which raised questions about how crucial it is to acknowledge all contributions to knowledge production, and about language barriers in current publishing models. This was followed by lived-experience cases presented in the first person by three members of the Co-Production Collective, involving diverse perspectives, engagement levels, and roles in research projects.

The facilitators divided the in-person attendees around circular tables, and the online participants into breakout rooms, to discuss ‘What challenges and opportunities need to be addressed to create equitable conditions in relation to authorship?’.

Each table was asked to summarise its conversation, sharing some of its ideas at the end of the session. People from the conference committee took notes to share with the OOSS team and to report the workshop’s principal outcomes. These outcomes will be folded into the wider work currently being undertaken at UCL around preparing a statement on authorship for our community.

A number of themes came out of the discussions, and what was most interesting to the facilitators was the extent of the consensus on many of the core points.

There was widespread agreement that all contributors to research should be acknowledged, and that they should be credited in any publications they take an active part in. There was also agreement that decisions about roles in the project and its outputs should be discussed and agreed at the outset of the project, with non-academic participants such as technicians, librarians, citizen scientists and others being given enough information to make an informed decision about what role they would like to take in publications and, if that takes place, if and how they would like to be credited.

As we described at the outset of this post, we realise that this is not easy to unpick and the real value in these discussions will come from the challenges identified and opportunities we can pursue. It is easy to see the benefits that creating more equitable conditions in authorship can provide, allowing knowledge to be more granular and diversifying the opinions that can be represented, but the workshop also allowed us to dig into some real practical issues, some of which are presented below.

One major theme that emerged was in relation to research culture and institutional inertia with regard to publishing. The lingering ‘publish or perish’ attitude in some subject areas leads to a very rapid turnaround on papers, and a perceived unwillingness to dilute credit with other names, especially in subject areas where position in the author list carries value. Issues were also raised around the power dynamics associated with authorship and where control over this process lies – with the people who wrote the article, or with the PI/research team leader who has ultimate control.

Another theme was more practical in nature and related to systems and affiliations. In many cases it is very difficult to include an unaffiliated author, both in some publisher systems and even in some metadata schemas. Access to institutional systems and tools is also often tied to an affiliated email address. Lastly, in many cases it is assumed that all authors of a paper are able to take equal responsibility for it (CRediT is changing this by allowing people to be associated with the role they played, but it is early days); in the case of a controversial topic, however, an unaffiliated author may be at risk, as they are unable to access the support that the university provides for its community, such as legal support or a press office.

The final significant theme was around language, style and terminology. Some groups pointed out that some of the understanding inherent to academia has very little meaning outside of the bubble of the university, and while external team members associated with a project will be trained to work to the integrity and ethical standards of the project, they may not be able to commit to the academic language, theoretical structures or terminology required to be involved in publications.

The good news is that all of these themes (and a lot of the other points we weren’t able to cover here) can be turned into opportunities. The first theme, around research culture, we are already addressing by starting this conversation and committing to including these findings in UCL statements and associated guidance on authorship. We will be consulting widely among the academic community and beyond throughout the process, and hopefully this will allow us to challenge some of the issues raised about power dynamics and to point out where people can and should be opening up their author lists to new individuals.

Another opportunity that came up in the sessions was around other types of publication. The discussion was framed around the traditional article or book, but the point was raised that a wide range of outputs can come out of a project and can acknowledge different individuals, from the technical, such as data, software or code, to presentations and posters, giving new individuals the chance to represent the research they have done in a new environment, and even media such as videos or exhibitions. There are definitely opportunities outside the traditional, and this needs to be reflected and tied into the wider Open Science movement, where we are shifting the focus onto new forms of output. It is also important that, in this, space is given to participants and citizen scientists to express what would be the most effective way of communicating the research results back to the community they affect.

This is just a very short summary of what was an intense and very nuanced conversation across around ten separate breakout groups, and we are immensely grateful to the whole community for engaging with the workshop and being so open and honest about their experiences, allowing us such insight to take forward into our explorations of authorship in the OOSS. The Co-Production Collective shared some interesting reflections about the workshop discussions on their webpage, highlighting how participants contributing from lived experience are commonly left out of credits, authorship and contribution acknowledgements.

The 24 April conference resonated with members of the collective and prompted some to take a step forward. One of them commented that it ‘made me pluck up the courage to ask to be an author on a project I set up and did the initial work on, and the professor received it really well and said well done for getting in touch and rightfully asking as these things can be daunting and missed…’