
Open@UCL Blog



Archive for the 'Ethics' Category

‘Who Owns Our Knowledge?’ Reflections from UCL Citizen Science and Research Data Management

By Naomi, on 23 October 2025

Guest post by Sheetal Saujani, Citizen Science Coordinator, and Christiana McMahon, Research Data Support Officer

[Image: International Open Access Week 2025 graphic with the open padlock logo and the text ‘October 20-26, 2025, #OAWeek’]

Graphic from openaccessweek.org, photo by Greg Rakozy

This year’s theme for International Open Access Week 2025, “Who Owns Our Knowledge?”, asks us to reflect on how knowledge is created, shared, and controlled, and whose voices are included in that process. It’s a question that aligns closely with UCL’s approach to citizen science, which promotes openness, collaboration and equity in research.

Citizen science provides a powerful lens to examine how knowledge is co-produced with communities. It recognises that valuable knowledge comes not only from academic institutions but also from lived experience, community knowledge, and shared exploration.

[Image: workshop participants around a table laid out with papers, pens, post-it notes, posters and a badge-making machine]

Photo by Sheetal Saujani, at a Citizen Science and Public Engagement workshop

Through initiatives like the UCL Citizen Science Academy and UCL Citizen Science Certificate, we support researchers and project leads to work in partnership with the public, enabling people from all backgrounds to take part in research that matters to them. These programmes are designed to be inclusive and hands-on, helping to build confidence, skills and shared responsibility.

For those of us working in academia, this theme reminds us that open access isn’t just about making papers free to read – it’s about changing how research is produced. Involving citizen scientists in forming research questions, collecting data, and interpreting findings opens up the research process itself, not just access to its outputs.

The Principles for Citizen Science at UCL emphasise respectful partnerships, transparency, and fair recognition. They reflect our belief that citizen scientists are co-creators whose insights – rooted in everyday experience and local knowledge – bring depth and relevance to academic work.

[Image: the FAIR and CARE acronyms spelled out – Findable, Accessible, Interoperable, Reusable; Collective Benefit, Authority to Control, Responsibility, Ethics]

Graphic from gida-global.org/care

In particular, the fifth principle for Citizen Science at UCL states that the CARE Principles for Indigenous Data Governance should be considered when working with marginalised communities and Indigenous groups. These principles – Collective Benefit, Authority to Control, Responsibility, and Ethics – remind researchers that creating knowledge from Indigenous data must benefit Indigenous Peoples, nations and communities. They support Indigenous Peoples in establishing more control over their data and its use in research. The Research Data Management Team encourages staff and students to engage with the CARE Principles in addition to the FAIR principles.

So, who owns our knowledge? At UCL, we believe the answer should be: everyone. Through citizen science and its principles, we’re building a future where knowledge is created collectively, shared responsibly and made openly accessible – because it belongs to the communities that help shape it.


The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities.

Follow us on Bluesky and LinkedIn, and join our mailing list to be part of the conversation!


Authorship in the Era of AI – Panel Discussion

By Naomi, on 9 July 2025

Guest post by Andrew Gray, Bibliometrics Support Officer

This panel discussion at the 2025 Open Science and Scholarship Festival was made up of three professionals with expertise in different aspects of publishing and scholarly writing, across different sectors – Ayanna Prevatt-Goldstein, from the UCL Academic Communication Centre focusing on student writing; Rachel Safer, the executive publisher for ethics and integrity at Oxford University Press, and also an officer of the Committee on Publication Ethics, with a background in journal publishing; and Dhara Snowden, from UCL Press, with a background in monograph and textbook publishing.

We are very grateful to everyone who attended and brought questions or comments to the session.

This is a summary of the discussion from all three panel members, and use of any content from this summary should be attributed to the panel members. If you wish to cite this, please do so as A. Prevatt-Goldstein, R. Safer & D. Snowden (2025). Authorship in the Era of AI. [https://blogs.ucl.ac.uk/open-access/2025/07/09/authorship-in-the-era-of-ai/]

Where audience members contributed, this has been indicated. We have reorganised some sections of the discussion for better flow.

The term ‘artificial intelligence’ can mean many things, and often a wide range of different tools are grouped under the same general heading. This discussion focused on ‘generative AI’ (large language models), and on their role in publishing and authorship rather than their potential uses elsewhere in the academic process.

Due to the length of this write-up, you can directly access each question using the following links:
1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?


1. There is a growing awareness of the level of use of generative AI in producing scholarly writing – in your experience, how are people currently using these tools, and how widespread do you think that is? Is it different in different fields? And if so, why?

Among researchers, a number of surveys by publishers have suggested that 70-80% of researchers are using some form of AI, broadly defined, and a recent Nature survey suggested this is fairly consistent across different locations and fields. However, there was a difference by career stage, with younger researchers feeling it was more acceptable to use it to edit papers, and by first language, where non-English speakers were more likely to use it for this as well.

There is a sense that publishers in STEM fields are more likely to have guidance and policy for the use of AI tools; in the humanities and social sciences, this is less well developed, and publishers are still in the process of fact-finding and gathering community responses. There may still be more of a stigma around the use of AI in the humanities.

In student writing, a recent survey from HEPI found that from 2024 to 2025, the share of UK undergraduates who used generative AI for generating text had gone from a third of students to two thirds, and only around 8% said they did not use generative AI at all. Heavier users included men, students from more advantaged backgrounds, and students with English as a second or additional language.

There are some signs of variation by discipline in other research. Students in fields where writing is seen as an integral part of the discipline are more concerned with developing their voice and a sense of authorship, and are less likely to use it for generating text – or at least are less likely to acknowledge it – and where they do, they are more likely to personalise the output. By comparison, students in STEM subjects are more likely to feel that they are being assessed on the content – the language they use to communicate it might be seen as less important.

[For more on this, see A. Prevatt-Goldstein & J. Chandler (forthcoming). In my own words? Rethinking academic integrity in the context of linguistic diversity and generative AI. In D. Angelov and C.E. Déri (Eds.), Academic Writing and Integrity in the Age of Diversity: Perspectives from European and North American Higher Education. Palgrave.]


2. Why do you think people are choosing to use these tools? Do you think that some researchers – or publishers – are feeling that they now have to use them to keep pace with others?

Students in particular may be more willing to use it as they often prioritise the ideas being expressed over the mode of expressing them, and the idea of authorship can be less prominent in this context. But at a higher level, for example among doctoral students, we find that students are concerned about their contribution and whether perceptions of their authorship may be lessened by using these tools.

A study among publishers found that the main way AI tools were being used was not to replace people at specific tasks, but to make small efficiency savings in the way people were doing them. This ties into the long-standing use of software to assist copyediting and typesetting.

Students and academics are also likely to see it from an efficiency perspective, especially those who are becoming used to working with generative AI tools in their daily lives and so are more likely to feel comfortable using them in academic and professional contexts. Academics may feel pressure to use tools like this to keep up a high rate of publication. But spending less time on a particular piece of work may trade time against quality; we might also see trade-offs in the individuality and nuance of the language, and fewer novel and outlier ideas being developed, as generative AI involvement becomes more common.

Ultimately, though, publishers struggle to monitor researchers’ use of generative AI in their original research – they are dependent on institutions training students and researchers, and on the research community developing clearer norms, and perhaps there is also a role for funders to support educating authors about best practices.

Among all users, a significant – and potentially less controversial – role for generative AI is to help non-native English speakers with language and grammar, and to a more limited degree translation – though quality here varies and publishers would generally recommend that any AI translation should be checked by a human specialist. However, this has its own costs.

With English as a de facto academic lingua franca, students (and academics) who did not have it as a first language were inevitably always at a disadvantage. Support for this could be found – perhaps paying for help, perhaps friends or family or colleagues who could support language learning – but this was very much support that was available more to some students than others, due to costs or connections, and generative AI tools have the potential to democratise this support to some degree. However, this causes a corresponding worry among many students that the bar has been raised – they feel they are now expected to use these tools or else they are disadvantaged compared to their peers.


3. On one end of the spectrum, some people are producing entire papers or literature reviews with generative AI. Others are using it for translation, or to generate abstracts. At the other end, some might use it for copyediting or for tweaking the style. Where do you think we should draw the line as to what constitutes ‘authorship’?

In some ways, this is not a new debate. As we develop new technologies which change the way we write – the printing press, the word processor, the spell checker, the automatic translator – people have discussed how it changes ‘authorship’. But all these tools have been ways to change or develop the words that someone has already written; generative AI can go far beyond that, producing vastly more material without direct involvement beyond a short prompt.

A lot of people might treat a dialogue with generative AI, and the way they work with those outputs, in the same way as a discussion with a colleague, as a way to thrash out ideas and pull them together. We have found that students are seeing themselves shifting from ‘author’ to ‘editor’, claiming ownership of their work through developing prompts and personalising the output, rather than through having written the text themselves. There is still a concept of ownership, a way of taking responsibility for the outcome, and for the ideas being expressed, but that concept is changing, and it might not be what we currently think of as ‘authorship’.

Sarah Eaton’s work has discussed the concept of ‘Post-plagiarism’ as a way to think about writing in a generative AI world, identifying six tenets of post-plagiarism. One of those is that humans can concede control, but not responsibility; another is that attribution will remain important. This may give us a useful way to consider authorship.

In publishing, ‘authorship’ can be quite firmly defined by the criteria set by a specific journal or publisher. There are different standards in different fields, but one of the most common is the ICMJE definition, which sets out four requirements to be considered an author – substantial contribution to the research; drafting or editing the text; having final approval; and agreeing to be accountable for it. In the early discussions around generative AI tools in 2022, there was a general agreement that these could never meet the fourth criterion, and so could never become ‘authors’; they could be used, and their use could be declared, but it did not conceptually rise to the level of authorship as they could not take ownership of the work.

The policy that UCL Press adopted, drawing on those from other institutions, looked at ways to identify potential responsible uses, rather than a blanket ban – which it was felt would lead to people simply not being transparent when they had used it. It prohibited ‘authorship’ by generative AI tools, as is now generally agreed; it required that authors be accountable, and take responsibility for the integrity and validity of their work; and it asked for disclosure of generative AI.

Monitoring and enforcing that is hard – there are a lot of systems claiming to test for generative AI use, but they may not work for all disciplines, or all kinds of content – so it does rely heavily on authors being transparent about how they have used these tools. They are also reliant on peer reviewers flagging things that might indicate a problem. (This also raises the potential of peer reviewers using generative AI to support their assessments – which in turn indicates the need for guidance about how they could use it responsibly, and clear indications on where it is or is not felt to be appropriate.)

Generative AI potentially has an interesting role to play in publishing textbooks, which tend to be more of a survey of a field than original thinking, but do still involve a dialogue with different kinds of resources and different aspects of scholarship. A lot of the major textbook platforms are now considering ways in which they can use generative AI to create additional resources on top of existing textbooks – test quizzes or flash-cards or self-study resources.


4. Do you think readers of scholarly writing would draw the line on ‘authorship’ differently to authors and publishers? Should authors be expected to disclose the use of these tools to their readers? And if we did – is that something that can be enforced?

There is a general consensus emerging among publishers that authors should be disclosing use of AI tools at the point of submission, or revisions, though where the line is drawn there varies. For example, Sage requires authors to disclose the use of generative AI, but not ‘assistive’ AI such as spell-checkers or grammar checkers. The STM Association recently published a draft set of recommendations for using AI, with nine classifications of use. (A commenter in the discussion also noted a recent proposed AI Disclosure Framework, identifying fourteen classes.)

However, we know that some people, especially undergraduates, spend a lot of time interacting with generative AI tools in a whole range of capacities, around different aspects of the study and writing process, which can be very difficult to define and describe – there may not be any lack of desire to be transparent, but it simply might not fit into the ways we ask them to disclose the use of generative AI.

There is an issue about how readers will interpret a disclosure. Some authors may worry that there is a stigma attached to using generative AI tools, and be reluctant to disclose if they worry their work will be penalised, or taken less seriously, as a result. This is particularly an issue in a student writing context, where it might not be clear what will be done with that disclosure – will the work be rejected? Will it be penalised, for example a student essay losing some marks for generative AI use? Will it be judged more sceptically than if there had been no disclosure? Will different markers, or editors, or peer-reviewers make different subjective judgements, or have different thresholds?

These concerns can cause people to hesitate before disclosing, or to avoid disclosing fully. But academics and publishers are dependent on honest disclosure to identify inappropriate use of generative AI, so they may need to be careful in how they frame this requirement, to avoid triggering such worries about more minor uses of generative AI. Without honest disclosure, we also have no clear idea of what writers are using AI for – which makes it all the harder to develop clear and appropriate policies.

For student writing, the key ‘reader’ is the marker, who will also be the person to whom generative AI use is disclosed. But for published writing, once a publisher has a disclosure of AI use, they may need to decide what to pass along to the reader. Should readers be sent the full disclosure, or is that overkill? It may include things like idea generation, assistance with structure, or checking for more up-to-date references – these might be useful for the publisher to know, but might not need to be disclosed anywhere in the text itself. Conversely, something like images produced by generative AI might need to be explicitly and clearly disclosed in context.

The recent Nature survey mentioned earlier showed that there is no clear agreement among academics as to what is and isn’t acceptable use, and it would be difficult for publishers to draw a clear line in that situation. They need to be guided by the research community – or communities, as it will differ in different disciplines and contexts.

We can also go back to the pre-GenAI assumptions about what used to be expected in scholarly writing, and consider what has changed. In 2003, Diane Pecorari identified three assumptions for transparency in authorship:

1. that language which is not signaled as quotation is original to the writer;
2. that if no citation is present, both the content and the form are original to the writer;
3. that the writer consulted the source which is cited.

There is a – perhaps implicit – assumption among readers that all three of these are true unless otherwise disclosed. But do those assumptions still hold among a community of people – current students – who are used to the ubiquitous use of generative AI? On the face of it, generative AI would clearly break all three.

If we are setting requirements for transparency, there should also be consequences for breach of transparency – from a publisher’s perspective, if an author has put out a generative AI produced paper with hallucinated details or references, the journal editor or publisher should be able to investigate and correct or retract it, exactly as would be the case with plagiarism or other significant issues.

But there is a murky grey area here – if a paper is otherwise acceptable and of sufficient quality, but does not have appropriate disclosure of generative AI use, would that in and of itself be a reason for retraction? At the moment, this is not on the COPE list of reasons for retraction – it might potentially justify a correction or an editorial note, but not outright retraction.

Conversely, in the student context, things are simpler – if it is determined that work does not belong to the student, whether that be through use of generative AI or straightforward plagiarism, then there are academic misconduct processes and potentially very clear consequences which follow from that. These do not necessarily reflect on the quality of the output – what is seen as critical is the authorship.


5. Do you think ethical use of AI will be integrated into university curriculums in the future? What happens when different institutions have different ideas of what is ‘ethical’ and ‘responsible’?

A working group at UCL put together a first set of guidance on using generative AI in early 2023, focusing on ethics in the context of learning outcomes – what is it that students are aiming to achieve in their degree, and will generative AI help or not in that process? But ethical questions also emerged in terms of whose labour had contributed to these tools, what the environmental impacts were, and, importantly, whether students were able to opt out of using generative AI. There are no easy answers to any of these, but they remain very much ongoing questions.

Recent work from MLA looking at AI literacies for students is also informative here in terms of what it expects students using AI to be aware of.


6. Many students and researchers are concerned about the potential for being falsely accused of using AI tools in their writing – how can we help people deal with this situation? How can people assert their authorship in a world where there is a constant suspicion of AI use?

There was no easy answer here and a general agreement that this is challenging for everyone – it can be very difficult to prove a negative. Increasing the level of transparency around disclosing AI use – and how much AI has been used – will help overall, but maybe not in individual cases.

Style-based detection tools are unreliable and can be triggered by normal academic or second-language writing styles. A lot of individuals have their own assumptions as to what is a ‘clear marker’ of AI use, and these are often misleading, leading to false positives and potentially false accusations. Many of the plagiarism detection services have scaled back or turned off their AI checking tools.

In publishing, a lot of processes have historically been run on a basis of trust – publishers, editors, and reviewers have not fact-checked every detail. If you are asked to disclose AI use and you do not, the system has to trust you did not use it, in the same way that it trusts you obtained the right ethical approvals or that you actually produced the results you claim. Many publishers are struggling with this, and feeling that they are still running to catch up with recent developments.

In academia, we can encourage and support students to develop their own voice in their writing. This is a hard skill to develop, and it takes time and effort, but it can be developed, and it is a valuable thing to have – it makes their writing more clearly their own. The growth of generative AI tools can be a very tempting shortcut for many people to try and get around this work, but there are really no shortcuts here to the investment of time that is needed.

There was a discussion of the possibility of authors being more transparent about their writing process to help demonstrate research integrity – for example, documenting how they select their references, in the way that systematic reviews do, or using open notebooks. This could potentially be declared in the manuscript, as a section alongside acknowledgements and funding. Students could be encouraged to keep logs of any generative AI prompts they have used and how they have handled the outputs, to be able to disclose this in case of concerns.


7. Are there journals which have developed AI policies that are noticeably more stringent than the general publisher policies, particularly in the humanities? How do we handle it if these policies differ, or if publisher and institutional policies on acceptable AI use disagree?

There are definitely some journals that have adopted more restrictive policies than the general guidance from their publisher, mostly in the STEM fields. We know that many authors may not read the specific author guidelines for a journal before submitting. Potentially we could see journals highlighting these restrictions in the submission process, and requiring the authors to acknowledge they are aware of the specific policies for that journal.


8. The big AI companies often have a lack of respect for authorship, as seen in things like the mass theft of books. Are there ways that we can protect authorship and copyrights from AI tools?

A substantial issue for many publishers, particularly smaller non-commercial ones, is that so much scholarly material is now released under an open-access license that makes it easily available for training generative AI; even if the licenses forbid this, it can be difficult in practice to stop it, as seen in trade publishing. It is making authors very concerned, as they do not know how or where their material will be used, and feel powerless to prevent it.

One potential way forward is for publishers and AI companies to reach agreements on licensing material and ensuring some kind of remuneration. This is more practical for larger commercial publishers with more resources. There is also the possibility of sector-wide collective bargaining agreements, as has been seen with the Writers Guild of America, where writers were able to implement broader guardrails on how their work would be used.

It is clear that the current system is not weighted in favour of the original creators, and some form of compensation would be ideal, but we also need to be careful that any new arrangement doesn’t continue to only benefit a small group.

The issue of Creative Commons licensing regulating the use of material for AI training purposes was discussed – Creative Commons take the position that such use may potentially be allowed under existing copyright law, but they are investigating the possibility of adding a way to signal the author’s position. AI training would be allowed by most of the Creative Commons licenses, but might require specific conditions on the final model (e.g. displaying attribution or non-commercial restrictions).

A commenter in the discussion also mentioned a more direct approach, where some sites are using tools to obfuscate artwork or building “tarpits” to combat scraping – but these can shade into being malware, so not a solution for many publishers!


9. We are now two and a half years into the ‘ChatGPT era’ of widespread AI text generation. Where do you see it going for scholarly publishing by 2030?

Generative AI use is going to become even more prevalent and ubiquitous, and will be very much more integrated into daily life for most people. As part of that integration, ideally we would see better awareness and understanding of what it can do, and better education on appropriate use in the way that we now teach about plagiarism and citation. That education will hopefully begin at an early stage, and develop alongside new uses of the technology.

Some of our ideas around what to be concerned about will change, as well. Wikipedia was suggested as an analogy – twenty years ago we collectively panicked about the use of it by students, feeling it might overthrow accepted forms of scholarship, but then – it didn’t. Some aspects of GenAI use may simply become a part of what we do, rather than an issue to be concerned with.

There will be positive aspects of this, but also negative ones; we will have to consider how we keep a space for people who want to minimise their use of these tools, and choose not to engage with them, for practical reasons or for ethical ones, particularly in educational contexts.

There are also discussions around the standardisation of language with generative AI – as we lose a diversity of language and of expression, will we also lose the corresponding diversity of thought? Standardised, averaged language can itself be a kind of loss.

The panel concluded by noting that this is very much an evolving space, and encouraged greater feedback and collaboration between publishers and the academic community, funders, and institutions, to try and navigate where to draw the line. The only way forward will be by having these discussions and trying to agree common ground – not just on questions of generative AI, but on all sorts of issues surrounding research integrity and publication ethics.

 

Ethics of Open Science: Science as Activism

By Kirsty, on 2 April 2025

Guest post by Ilan Kelman, Professor of Disasters and Health, building on his captivating presentation in Session 2 of the UCL Open Science Conference 2024.

Many scientists accept a duty of ensuring that their science is used to help society. When we are publicly funded, we feel that we owe it to the public to offer Open Science for contributing to policy and action.

Some scientists take it a step further. Rather than merely making their science available for others to use, they interpret it for themselves to seek specific policies and actions. Open Science becomes a conduit for the scientist to become an activist. Positives and negatives emerge, as shown by the science of urban exploration and of climate change.

Urban exploration

‘Urban exploration’ (urbex), ‘place-hacking’, and ‘recreational trespass’ refer to people accessing infrastructure which is off-limits to the public, such as closed train stations, incomplete buildings, and utility systems. As per the third name, it sometimes involves trespassing and it is frequently dangerous, since sites are typically closed off for safety and security reasons.

Urbex research does not need to involve the infrastructure directly; it might instead review existing material or interview people off-site. It can, though, involve participating in accessing the off-limits sites to document experiences through autoethnography or participant observation. As such, the urbex researcher could be breaking the law. In 2014, one researcher was granted a conditional discharge, 20 months after being arrested for involvement in urbex while researching it.

Open Science for urbex research has its supporters and detractors. Those stating the importance of the work and publicising it point to the excitement of learning about and documenting a city’s undercurrents, creative ways of viewing and interacting with urban environments, the act of bringing sequestered spaces to the public while challenging authoritarianism, the need to identify security lapses, and making friends. Many insist on full safety measures, even while trespassing.

Detractors explain that private property is private and that significant dangers exist. People have died. Rescues and body recoveries put others at risk. Urbex science might be legitimate, particularly to promote academic freedom, but it should neither be glorified nor encourage foolhardiness.

This situation is not two mutually exclusive sides. Rather, different people prefer different balances. Urbex Open Science as activism can be safe, legal, and fun—also as a social or solo hobby. Thrill-seekers for social media influence and income would be among the most troublesome and the least scientific.

Figure 1: Unfinished and abandoned buildings are subjects of ‘urbex’ research (photo by Ilan Kelman).

Climate everything?

Humanity is changing the Earth’s climate rapidly and substantively with major, deleterious impacts on society. Open Science on climate change has been instrumental in popularising why human-caused climate change is happening, its implications, how we could avert it, and actions to tackle its negative impacts.

Less clear is the penchant for some scientists to use Open Science to try to become self-appointed influencers and activists beyond their expertise. They can make grandiose public pronouncements on climate change science well outside their own work, even contradicting their colleagues’ published research. An example is an ocean physicist lamenting the UK missing its commitments under the Paris Agreement on climate change, despite the agreement being unable to meet its own targets, and then expressing concerns about “climate refugees” which legally cannot exist.

A meme distributed by some scientists states that cats kill more birds than wind turbines, yet no one tries to restrict cats! Aside from petitions and studies about restricting cats, the meme never explains how cats killing birds justifies wind turbines killing birds, particularly when kill-avoiding strategies exist. When a scientist’s social media postings are easily countered, it undermines efforts to suggest that scientists ought to be listened to regarding climate change.

Meanwhile, many scientists believe they can galvanise action by referring to “climate crisis” or “climate emergency” rather than to “climate change”. From the beginnings of this crisis/emergency framing, political concerns were raised about the phrasing. Now, evidence is available of the crisis/emergency wording leading to negative impacts for action.

In fact, scientist activism aiming to “climat-ify” everything leads to nonsensical phrasing. From “global weirding” to “climate chaos”, activist terminology can reveal a lack of understanding of the basics of climate science—such as climate, by definition, being mathematically chaotic. A more recent one is “climate obstruction”. When I asked how we could obstruct the climate since the climate always exists, I never received an answer.

Figure 2: James Hansen, climate scientist and activist (photo by Ilan Kelman).

Duty for accuracy and ethics

Scientists have a duty of accuracy and ethics, and Open Science should be used to fulfil it. Fulfilling this duty contributes to credibility and clarity, rather than using Open Science to promote either subversive or populist material simply for the sake of activism, without first checking its underlying science and the implications of publicising it. When applied appropriately, Open Science can and should support accurate and ethical activism.

Ethics of Open Science: Navigating Scientific Disagreements

By Kirsty, on 6 March 2025

Guest post by Ilan Kelman, Professor of Disasters and Health, building on his captivating presentation in Session 2 of the UCL Open Science Conference 2024.

Open Science reveals scientific disagreements to the public, with advantages and disadvantages. Opportunities emerge to demonstrate the scientific process and techniques for sifting through diverging ideas and evidence. Conversely, disagreements can become personal, obscuring science, scientific methods, and understandable disagreements due to unknowns, uncertainties, and personality clashes. Volcanology and climate change illustrate.

Volcanology

During 1976, a volcano rumbled on the Caribbean island of Guadeloupe, which is part of France. Volcanologists travelled there to assess the situation, leading to public spats between those who were convinced that a catastrophic eruption was likely and those who were unconcerned, indicating that plenty of time would be available for evacuating people if dangers worsened. The authorities decided to evacuate more than 73,000 people, permitting them to return home more than three months later when the volcano quieted down without having had a major eruption.

Aside from the evacuation’s cost and the possible cost of a major eruption without an evacuation, volcanologists debated for years afterwards how everyone could have dealt better with the science, the disagreements, and the publicity. Open Science could support making all scientific viewpoints publicly available, as well as showing how this science could be and is used for decision making, including navigating disagreements. It might mean that those who shout loudest are heard most, and media can sell their wares by amplifying the most melodramatic and doomerist voices—a pattern also seen with climate change.

Insults and personality clashes can mask legitimate scientific disagreements. For Guadeloupe, in one commentary responding to intertwined scientific differences and personal attacks, the volcanologist unhelpfully suggests their colleagues’ lack of ‘emotional stability’ as part of numerous, well-evidenced scientific points. In a warning prescient for the next example, this scientist indicates difficulties if Open Science means conferring credibility to ‘scientists who have specialized in another field that has little or no bearing on [the topic under discussion], and would-be scientists with no qualification in any scientific field whatever’.

Figure 1: Chile’s Osorno volcano (photo by Ilan Kelman).

Climate change, tropical cyclones, and anthropologists

‘Tropical cyclone’ is the collective term for hurricanes, typhoons, and cyclones. The current scientific consensus (which can change) is that due to human-caused climate change, tropical cyclone frequency is decreasing while intensity is increasing. On occasion, anthropologists have stated categorically that tropical cyclone numbers are going up due to human-caused climate change.

I responded to a few of these statements with the current scientific consensus, including foundational papers. This response annoyed the anthropologists even though they have never conducted research on this topic. I offered to discuss the papers I mentioned, an offer not accepted.

There is a clear scientific disagreement between climate change scientists and some anthropologists regarding projected tropical cyclone trends under human-caused climate change. If these anthropologists publish their unevidenced viewpoint as Open Science, it offers fodder to the industries undermining climate change science and preventing action on human-caused climate change. They can point to scientists disputing the consensus of climate change science and then foment further uncertainty and scepticism about climate change projections.

One challenge is avoiding censorship of, or shutting down scientific discussions with, the anthropologists who do not accept climate change science’s conclusions. It is a tricky balance between permitting Open Science across disciplines, including to connect disciplines, and not fostering or promoting scientific misinformation.

Figure 2: Presenting tropical cyclone observations (photo by Ilan Kelman).

Caution, care, and balance

Balance is important between having scientific discussions in the open and avoiding scientists levelling personal attacks at each other or spreading incorrect science, both of which harm all science. Some journals use an open peer review process in which the submitted article, the reviews, the response to the reviews, all subsequent reviews and responses, and the editorial decision are freely available online. A drawback is that submitted manuscripts are cited as being credible, including those declined for publication. Some journals identify authors and reviewers to each other, which can reduce snide remarks while increasing possibilities for retribution against negative reviews.

Even publicly calling out bullying does not necessarily diminish bullying. Last year, after I privately raised concerns about personal attacks against me on an anthropology email list due to a climate change posting I made, I was called “unwell” and “unhinged” in private emails which were forwarded to me. When I examined the anthropology organisation’s policies on bullying and silencing, I found them lacking. I publicised my results. The leaders not only removed me from the email list against the email list’s own policies, but they also refused to communicate with me. That is, these anthropologists (who are meant to be experts in inter-cultural communication) bullied and silenced me because I called out bullying and silencing.

Awareness of the opportunities and perils of Open Science for navigating scientific disagreements can indicate balanced pathways for focusing on science rather than on personalities. Irrespective, caution and care can struggle to overcome entirely the fact that scientists are human beings with personalities, some of whom are ardently opposed to caution, care, and disagreeing well.

Whose data is it anyway? The importance of Information Governance in Research

By Kirsty, on 11 February 2025

Guest post by Preeti Matharu, Jack Hindley, Victor Olago, Angharad Green (ARC Research Data Stewards), in celebration of International Love Data Week 2025

Research data is a valuable yet vulnerable asset. Researchers collect and analyse large amounts of personal and sensitive data ranging from health records to survey responses, and this raises an important question – whose data is it anyway?

If data involve human subjects, then participants are the original owners of their personal data. They grant permission to researchers to collect and use their data through informed consent. Therefore, responsibility for managing and protecting their data, in line with legal, regulatory and ethical requirements and policies, lies with researchers and their institution. Hence the need to maintain a balance between participant rights and researcher needs.

Under the General Data Protection Regulation (GDPR) in the UK and EU, participants have the right to access, update and request deletion of their data, whilst researchers must comply with the law to ensure research integrity. However, under the Data Protection Act, research data processed in the public interest must be retained irrespective of participant rights, including the rights to erasure, access and rectification. UCL must uphold this requirement while ensuring participant confidentiality is not compromised.

Information governance consists of policies, procedures and processes adopted by UCL to ensure research data is managed securely and complies with legal and operational requirements.

Support for information governance in research is now provided by Data Stewards within ARC RDM IG. That’s a long acronym, so let’s break it down.

  • ARC: Advanced Research Computing – UCL’s research innovation centre, which provides (1) secure digital infrastructure and (2) teaching and software.
  • RDM: Research Data Management – assists researchers with data management.
  • IG: Information Governance – advises researchers on compliance for managing sensitive data.

Data Stewards – we support researchers with data management throughout the research study, and provide guidance on data security awareness training, data security requirements for projects, and compliance with legal and regulatory standards, encompassing the Five Safes Framework principles. Additionally, we advise on sensitive data storage options, such as a Trusted Research Environment (TRE) or the Data Safe Haven (DSH).

Furthermore, we emphasise the importance of maintaining up-to-date and relevant documentation and provide guidance on FAIR (Findable, Accessible, Interoperable, Reusable) data principles.
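To make the idea of FAIR-supporting documentation concrete, here is a minimal illustrative sketch – not UCL’s actual tooling or template – of the kind of machine-readable metadata record that helps make a dataset findable, accessible, interoperable and reusable. All field names and values are hypothetical; real records would follow a community standard such as DataCite or Dublin Core.

```python
# Illustrative sketch only: a minimal, hypothetical metadata record showing the
# kind of documentation that supports FAIR data. Field names are invented for
# illustration; real records would follow a community standard (e.g. DataCite).
dataset_metadata = {
    # Findable: a persistent identifier and rich descriptive metadata
    "identifier": "https://doi.org/10.xxxx/example",   # placeholder DOI
    "title": "Survey responses on neighbourhood air quality (anonymised)",
    "keywords": ["citizen science", "air quality", "survey"],

    # Accessible: where and under what conditions the data can be obtained
    "access_url": "https://example-repository.org/record/1234",  # hypothetical repository
    "access_conditions": "Open after anonymisation; safeguarded access for raw data",

    # Interoperable: open formats and a declared metadata standard
    "file_format": "CSV",
    "metadata_standard": "DataCite 4.5",

    # Reusable: licence, provenance and supporting documentation
    "licence": "CC BY 4.0",
    "provenance": "Collected 2024-2025 under institutional ethics approval",
    "documentation": "README.txt describing variables, units and collection methods",
}

if __name__ == "__main__":
    # Print the record so the structure of the documentation is easy to inspect
    for field, value in dataset_metadata.items():
        print(f"{field}: {value}")
```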

As stated above, data can be vulnerable. UCL must implement strong security controls including encryption, access control and authentication, to protect sensitive data, such as personal health data and intellectual property. Sensitive data refers to data whose unauthorised disclosure could cause potential harm to participants or UCL.

UCL’s Information Security Management System (ISMS) is a systematic approach to managing sensitive research data to ensure confidentiality, integrity, and availability. It is a risk management process involving people, processes and IT systems. The key components include information management policy, identifying and assessing risks, implementing security controls to mitigate identified risks, training users and continuous monitoring. The ISMS is crucial in research:

  1. It protects sensitive data; without stringent security measures, data is at risk of being accessed by unauthorised individuals, leading to potential theft.
  2. It ensures legal and regulatory compliance, for example with the GDPR and UCL policies. Non-compliance results in hefty fines, legal action and reputational damage.
  3. Research ethics demand that participant data is handled with confidentiality. The ISMS ensures secure data management practices, data anonymisation and controlled access, reinforcing ethical responsibility.
  4. It reduces the risk of phishing attacks and ransomware.
  5. It ensures data integrity and reliability – tampered or corrupted data can lead to invalid research and wasted resources.

UCL practices for Information Governance in research:

In response to the question ‘whose data is it anyway?’: data may be generated by participants, but the overall responsibility to use, process, protect and ethically manage it lies with the researchers and UCL. Additionally, beyond compliance and good information governance, it is about ensuring research integrity and safeguarding the participants who make research possible.

Ethics of Open Science: Managing dangers to scientists

By Kirsty, on 5 February 2025

Guest post by Ilan Kelman, Professor of Disasters and Health, building on his captivating presentation in Session 2 of the UCL Open Science Conference 2024.

Open Science brings potential dangers to scientists and ways of managing those dangers. In doing so, opportunities emerge to show the world the harm some people face, such as the murder of environmental activists and investigations of child sexual abuse, hopefully leading to positive action to counter these problems.

Yet risks can appear for scientists. Even doing basic climate change science has led to death threats. Two examples in this blog indicate how to manage dangers to scientists.

Disaster diplomacy

Disaster diplomacy research examines how and why disaster-related activities—before, during, and after a disaster—do and do not influence all forms of conflict and cooperation, ranging from open warfare to signing peace deals. So far, no example has been identified in which disaster-related activities, including a major calamity, led to entirely new and lasting conflict or cooperation. An underlying reason to favour enmity or amity is always found, with disaster-related activities being one reason among many to pursue already decided politics.

The 26 December 2004 tsunamis around the Indian Ocean devastated Sri Lanka and Aceh in Indonesia, both of which had been wracked by decades of violent conflict. On the basis of ongoing, secret negotiations which were spurred along by the post-earthquake/tsunami humanitarian effort, a peace deal was reached in Aceh and it held. Simultaneously in Sri Lanka, the disaster relief was deliberately used to continue the conflict which was eventually ended by military means. In both locations, the pre-existing desire for peace and conflict respectively produced the witnessed outcome.

This disaster diplomacy conclusion is the pattern for formal processes, such as politicians, diplomats, celebrities, businesses, non-governmental organisations, or media leading the work. It is less certain for informal approaches: individuals helping one another in times of need or travelling to ‘enemy states’ as tourists or workers—or as scientists.

Openly publishing on disaster diplomacy could influence conflict and cooperation processes by suggesting ideas which decision-makers might not have considered. Or it could spotlight negotiations which detractors seek to scuttle. If a scientist had published on the closed-door Aceh peace talks, the result might have emulated Sri Lanka. The scientist would then have endangered a country as well as themselves by being blamed for perpetuating the violence.

Imagine if South Korea’s President, seeking a back door to reconciliation with North Korea, sends to Pyongyang flood engineers and scientists who regularly update their work online. They make social gaffes, embarrassing South Korea, or are merely arrested and made scapegoats on the whim of North Korea’s leader who is fed up with the world seeing what North Korea lacks. The scientists and engineers are endangered as much as the reconciliation process.

Open Science brings disaster diplomacy opportunities by letting those involved know what has and has not worked. It can lead to situations in which scientists are placed at the peril of politics.

Figure 1: Looking across the Im Jin River into North Korea from South Korea (photo by Ilan Kelman).

Underworlds

Scientists study topics in which people are in danger, such as child soldiers, human trafficking, and political movements or sexualities that are illegal in the country being examined. The scientists can be threatened as much as the people being researched. In 2016, a PhD student based in the UK who was researching trade unions in Cairo was kidnapped, tortured, and murdered.

In 2014, a PhD student based in the UK was one of a group placed on trial in London for ‘place-hacking’ or ‘urban exploring’ (urbex), in which they enter or climb disused or under-construction infrastructure. Aside from potentially trespassing, these places are often closed for safety reasons. The scientist places themselves in danger to research this subculture on-site, in action.

All these risks are manageable and they are managed. Any such research in the UK must go through a rigorous research ethics approval process alongside a detailed risk assessment. This paperwork can take months, to ensure that the dangers have been considered and mitigated, although when conducted improperly, the process itself can be detrimental to research ethics.

Many urbex proponents offer lengthy safety advice and insist that activities be conducted legally. Nor should researchers necessarily shy away from hard subject matter because a government dislikes the work.

Open Science publishing on these topics can remain ethical by ensuring anonymity and confidentiality of sources as well as not publishing when the scientist is in a place where they could be in danger. This task is not always straightforward. Anonymity and confidentiality can protect criminals. Scientists might live and work in the country of research, so they cannot escape the danger. How ethical is it for a scientist to be involved in the illegal activities they are researching?

Figure 2: The Shard in London, a desirable place for ‘urban exploring’ when it was under construction (photo by Ilan Kelman).

Caution, care, and balance

Balance is important between publishing Open Science on topics involving dangers and not putting scientists or others at unnecessary peril while pursuing the research and publication. Awareness of the potential drawbacks of doing the research and of suitable research ethics, risk assessments, and research monitoring can instil caution and care without compromising the scientific process or Open Science.

Ethics of Open Science: Managing dangers to the public

By Kirsty, on 17 December 2024

Guest post by Ilan Kelman, Professor of Disasters and Health, building on his captivating presentation in Session 2 of the UCL Open Science Conference 2024.

Open Science brings risks and opportunities when considering dangers to the public from conducting and publishing science. The opportunities include detailing societal problems and responses to them, which could galvanise positive action to make society safer. Examples are the effectiveness of anti-bullying techniques, health impacts from various paints, and companies selling cars they knew were dangerous.

Risks emerge if pursuing or publicising research might change or create dangers to the public. Highlighting how pickpockets or street scams operate helps the public protect themselves, yet could lead the perpetrators to change their operations, making them harder to detect. Emphasising casualties from cycling could lead to more driving, increasing the health consequences from air pollution and vehicle crashes.

The latter might be avoided by comparing cycling’s health benefits and risks, including with respect to air pollution and crashes. Meanwhile, understanding pickpocketing awareness and prevention should contribute to reducing this crime over the long-term, if people learn from the science and take action.

In other words, context and presentation matter for risks and opportunities from Open Science regarding dangers to the public. Sometimes, though, the context is that science can be applied nefariously.

Explosives research

Airplane security is a major concern for travellers, with most governments implementing stringent measures at airports and in the air. Legitimate research questions for public safety relate to smuggling firearms through airport security and the bomb resistance of different aircraft.

Fiction frequently speculates, including in movies. A Fish Called Wanda showed a loaded gun getting past airport security screening while Non-Stop portrayed a bomb placed aboard a commercial flight.

Desk analyses could and should discuss these scenes’ dramatic effect and level of realism, just as the movies are analysed in other ways. Scientists could and should work with governments, security organisations, airport authorities, and airline companies to understand threats to aviation and how to counter them.

Open Science could compromise the gains from this collaboration. It could reveal the bomb type required to breach an aircraft’s fuselage or the key ways to get a weapon on board. The satirical news service, The Onion, lampooned the presumption of publicising how to get past airport security.

[Image: the front half of a white Lufthansa aeroplane]

Figure 1: We should research a cargo hold’s explosion resistance, but why publicise the results? (photo by Ilan Kelman).

Endangering activists

The public can endanger themselves by seeking Open Science. I ran a research project examining corporate social responsibility for Arctic petroleum with examples in Norway and Russia. In one Russian site, locals showed our researcher decaying oil and gas infrastructure, including leaks. These local activists were assured of confidentiality and anonymity, which is a moral imperative as well as a legal requirement.

Not all of them supported this lack of identification. They would have preferred entirely Open Science, hoping that researchers outside Russia would have the credibility and influence to generate action for making their community and environment safer and healthier. They were well aware of the possible consequences of being identified, or of enough information being publicised to make them identifiable. They were willing to take these risks, hoping for gain.

The top of a square tower built of bright red brick. The tower has a narrow section on top and a green pointed roof.

Figure 2: Trinity Tower, the Kremlin, Moscow, Russia, during petroleum research (photo by Ilan Kelman).

We were not permitted to accede to their requests. We certainly published on and publicised our work, using as much Open Science as we could without violating our research ethics approval, as both an ethical and legal duty. We remain both inspired and concerned that the activists, seeking to save their own lives, could pursue citizen science which, if made entirely open as some of them would prefer, could place them in danger.

Caution, care, and balance

Open Science sometimes brings potential dangers to the public. Being aware of and cautious about these problems helps to prevent them. A balance can then be achieved between the need for Open Science and not worsening or creating dangers.

Ethics of Open Science: Privacy risks and opportunities

By Kirsty, on 22 November 2024

Guest post by Ilan Kelman, Professor of Disasters and Health, building on his captivating presentation in Session 2 of the UCL Open Science Conference 2024.

Open Science brings risks and opportunities regarding privacy. Making methods, data, analyses, disagreements, and conclusions entirely publicly available demonstrates the scientific process, including its messiness and uncertainties. Showing how much we do not know and how we aim to fill in gaps excites and encourages people about science and scientific careers. It also holds scientists accountable, since any mistakes can be identified and corrected, which is always an essential part of science.

Given these advantages, Open Science offers a great deal to researchers and to those outside research. It helps to make science accessible to anyone, notably so that it can be applied, while supporting exchange with those inspired by the work.

People’s right to privacy, as an ethical and legal mandate, must still be maintained. If a situation might be made worse by Open Science not respecting privacy, even where doing so is legal, then care is required to respect those who would want or might deserve privacy. Anonymity and confidentiality are part of research ethics precisely to achieve this balance. Even so, Open Science might inadvertently reveal information sources, or it might be feasible to identify research participants who would prefer not to be exposed. Being aware of possible pitfalls assists in preventing them.

Disaster decisions

Some research could be seen as violating privacy. Disaster researchers seek to understand who dies in disasters, how, and why, in order to improve safety for everyone and to save lives. The work can examine death certificates and pictures of dead bodies. Publicising all this material could violate the privacy and dignity of those who perished and could augment the grief of those left behind.

Sometimes, research homes in on problematic actions in order to improve without blaming, whereas society more widely might seek to judge. A handful of studies have examined the blood alcohol level of drivers who died while driving through floodwater, which should never be attempted even when sober (Figure 1). In many cases, the driver was above the legal blood alcohol limit. Rather than embarrassing the deceased by naming and shaming, it would help everyone to use the data as an impetus to tackle simultaneously the separate and unacceptable decisions to drive drunk, to drive drugged, and to drive through floodwater.

Yet storytelling can be a powerful communication technique to encourage positive behavioural change. If identifying details are used, then their use must involve the individuals’ or their kin’s full and informed consent. Even with this consent, it might not be necessary to provide the full details, as a more generic narrative can remain emotional and effective. Opportunities for improving disaster decisions emerge through consensual sharing that avoids violating privacy, while being careful about whether the specifics of any particular story really need to be published.

A white car drives through a flooded road, creating a splash. Bare trees line the roadside under a clear sky, and a road sign is partially submerged in water.
Figure 1: Researching the dangerous behaviour of people driving through floodwater, with the number plate blurred to protect privacy (photo by Ilan Kelman).
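
As the caption notes, the number plate in Figure 1 was blurred before the photo was shared. For readers handling similar images, a minimal sketch of how such a region might be obscured is given below. It is purely illustrative rather than a record of how this particular photo was processed, and it assumes Python with the OpenCV library; the filename and plate coordinates are hypothetical and would need to be identified for each image.

import cv2  # OpenCV; assumed to be installed

# Load the photo (hypothetical filename).
image = cv2.imread("floodwater_driving.jpg")

# Hypothetical bounding box of the number plate: x, y, width, height.
x, y, w, h = 420, 310, 140, 40

# Heavily blur just that region so the plate cannot be read.
region = image[y:y + h, x:x + w]
image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

# Save an anonymised copy for sharing; keep the original stored securely if needed.
cv2.imwrite("floodwater_driving_blurred.jpg", image)

Pixelation or a solid rectangle would work just as well; the point is that identifying details are removed before the image enters any openly shared output.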

Small sample populations

Maintaining confidentiality and anonymity for interviewees can be a struggle where interviewees have relatively distinctive experiences or positions and so are easily identifiable. Governments in jurisdictions with smaller populations might employ only a handful of people in the entire country who know about a certain topic. Stating that an interviewee is “A national government worker in Eswatini specialising in international environmental treaties” or “A megacity mayor” could narrow it down to a few people, or to one person.

A similar situation arises with groups comprising a small number of people from whom to select interviewees, such as “vehicle business owners in Kiruna, Sweden”, “International NGO CEOs”, or specific elites. Even with thousands of possible interviewees, for instance “university chiefs” or “Olympic athletes”, quotations from the interview or locational details might make it easy to narrow down and single out a specific interviewee.

Interviewee identification can become even simpler when basic data on interviewees, such as sex and age range, are provided, as is standard in research papers. Providing interview data in a public repository is sometimes expected, with the possibility of full transcripts, so that others can examine and use those data. The way someone expresses themselves might make them straightforward to pinpoint within a small group of potential interviewees.
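
Before depositing interview metadata or transcripts in a public repository, it can therefore help to check how many participants share each combination of such basic details. The sketch below is purely illustrative and not from the original post; it assumes Python with pandas and a hypothetical participants.csv file whose column names are invented for the example.

import pandas as pd  # assumed to be installed

# Hypothetical file of participant metadata intended for deposit.
participants = pd.read_csv("participants.csv")

# Hypothetical quasi-identifiers that, in combination, could single someone out.
quasi_identifiers = ["sex", "age_range", "role", "country"]

# Count how many participants share each combination of these details.
group_sizes = participants.groupby(quasi_identifiers).size()

# Combinations shared by fewer than three people are a re-identification risk;
# those records may need generalising or withholding before open sharing.
risky = group_sizes[group_sizes < 3]
print(risky)

A check like this does not replace ethical judgement, but it makes visible which records would be easy to pinpoint before anything is shared openly.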

Again, risks and opportunities regarding privacy centre on consent and on the necessity of listing details. Everyone, including any public figure, has some right to privacy (Figure 2). Where consent is not given to waive confidentiality or anonymity, the research process, including reviewing and publishing academic papers, needs to accept that not all interviewee details or data can or should be shared. With consent, care is still required to ensure that identifying individuals, or allowing them to be discovered, really adds to the positive impacts of the research.

The photo captures Ralph Nader, American politician, author, and consumer advocate, mid-speech at a podium. His expression is earnest and determined as he addresses the audience. He is dressed in a suit and tie, with a brown brick wall behind him. He is speaking towards a microphone.
Figure 2: Ralph Nader, an American politician and activist, still has a right to privacy when not speaking in public (photo by Ilan Kelman).

Caution, care, and balance

With caution and care, always seeking a balance with respect to privacy, any difficulties emerging from Open Science can be prevented. It is especially important not to sacrifice the many immense and much-needed gains from Open Science in the process.