
Open@UCL Blog


Archive for November, 2020

The new Wellcome Trust OA policy and DORA: a UCL webinar

Kirsty, 18 November 2020

Here at the Office for Open Science and Scholarship we are pleased to announce another webinar!

This time we have David Carr from the Wellcome Trust leading this fascinating webinar on their new Open Access policy, which comes into force in January 2021.

  • David Carr: New Wellcome Trust OA policy outline and overview of changes
  • Ralitsa Madsen: Proposal on DORA alignment across multiple funders
  • Dr Paul Ayris & Catherine Sharp: How we are implementing the new policy at UCL

Join us on Wednesday 16th December at 12 noon for a lunchtime webinar, and plenty of time to ask all of your questions!

Sign up on Eventbrite to get your link to join the session


Full event description:

In their new Open Access policy, which will come into force in January 2021, the Wellcome Trust has introduced a requirement that organisations receiving funding must commit to the core principles set out by DORA.

In this webinar we will hear from David Carr, Programme Manager for Open Research at the Wellcome Trust, who will outline the policy and share his thoughts on the changes. We will then hear from Dr Ralitsa Madsen, Sir Henry Wellcome Postdoctoral Fellow at the UCL Cancer Institute, about her proposal to develop a protocol on DORA alignment for all research funders to follow. The final speakers of the session will be our own Dr Paul Ayris, Pro-Vice-Provost for UCL Library Services and the Office for Open Science & Scholarship, and Catherine Sharp, Head of Open Access Services, who will outline how UCL is implementing both the DORA principles and the wider Wellcome policy.

We will have plenty of time for discussion among our speakers and for them to answer your questions, so please join us for an interesting session before the policy comes into force.

Open Access Week: the first ReproHack ♻ @ UCL

Kirsty, 17 November 2020

The Research Software Development Group hosted the first ReproHack at UCL as part of the Open Access Week events run this year by the Office for Open Science and Scholarship. This was not only the first event of this type at UCL, but the first time a ReproHack ran for a full week.

What’s a ReproHack?

A ReproHack is a hands-on reproducibility hackathon where participants attempt to reproduce the results of a research paper from its published code and data, then share their experiences with the group and the paper’s authors.

As with most hackathons, this is also a learning experience! During a ReproHack, participants not only help measure the reproducibility of published papers, they also learn how to build better reproducibility practices into their own research and come to appreciate the value of sharing code for Open Science.

An important aspect of ReproHacks is that the authors themselves put forward their papers to be tested. If you’ve published a paper and provided code and data with it, you can submit it for future editions of ReproHack! Your paper may be chosen by future reprohackers, giving you feedback about its reproducibility. The feedback form is well designed, so you get a complete overview of what went well and what could be improved.

ReproHacks are open to all domains! Papers using any programming language are accepted; however, papers whose code uses open-source languages are more likely to be chosen, since participants will find those tools easier to install on their own computers.

In this particular edition, the UCL Research Software Development Group was available throughout the week to provide support to the reprohackers.

What did I miss?

This was the first Reprohack at UCL! You missed all the excitement that first-time events bring with them! But do not worry, there will be more Reprohacks!

This event brought the same difficulties we have been fighting for the last nine months of running events online, but we had already gained experience with other workshops and training sessions, so everything went smoothly!

The event started with a brief introduction to the format, an ice-breaker to get the participants talking, and a wonderful keynote by Daniela Ballari entitled “Why computational reproducibility is important?”. Daniela provided a great introduction to the event. Did you know that only ~20% of the published literature even has the “potential” to be computationally reproduced, and that most of it cannot be, because the software is not free, the data provided is incomplete, or the version of the software used is not recorded? [Culina, 2020] She linked to resources like The Turing Way and offered five selfish reasons to work on reproducibility, placing them in the context of our circles of influence: how these practices benefit the author, their team, the reviewers and the wider community. The questions and answers that followed the talk were also very insightful! Daniela is a researcher in Geoinformation and Geostatistics who never trained as a software developer, so she had to learn on her own how to make her research reproducible; those efforts shone through in the selfish reasons she proposed in her talk.

The rest of the event consisted of ReproHacking-hacking-hacking! We split into groups and chose papers, then disconnected from the call, and each participant or team worked as they preferred over the following days to try to reproduce the paper(s) they chose. At the end of the week we reconvened to share how far we had got and what we had learned along the way.

In total we reviewed four papers. Only one participant managed to reproduce a whole paper; the rest of us (me included) got stuck somewhere in the process. We found that full reproducibility is not easy! If the version of a piece of software is not recorded, it becomes very difficult to work out why something is not working as it should. But we also had a lot of fun, and the participants were happy to find that there is a community at UCL that fights for reproducibility!
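As a minimal sketch (not something from the event itself), the version problem above is cheap to prevent: authors can record the exact versions of their dependencies alongside the published code. In Python, for example, the standard library alone can produce the pins; the function names here are hypothetical, chosen just for illustration:

```python
# A minimal sketch: record exact package versions alongside published code
# so future reprohackers can rebuild the same environment.
# Uses only the standard library (Python 3.8+).
import sys
from importlib.metadata import version, PackageNotFoundError

def freeze(packages):
    """Return 'name==version' pins for each installed distribution name given."""
    pins = []
    for name in packages:
        try:
            pins.append(f"{name}=={version(name)}")
        except PackageNotFoundError:
            # Flag anything that is not installed instead of failing silently.
            pins.append(f"# {name}: not installed")
    return pins

def environment_header(packages):
    """A short block to paste at the top of a README or requirements file."""
    return [f"# Python {sys.version.split()[0]}"] + freeze(packages)
```

Committing the output of something like `environment_header(["numpy", "pandas"])` to the repository, or simply a `requirements.txt` (or a `sessionInfo()` dump in R), is often all it takes to spare readers that detective work.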

This ReproHack also featured Peter Schmidt interviewing various participants for Code for Thought, a podcast that will be published soon! Right now he is the person running RSE Stories, a podcast hosted by Vanessa Sochat, on this side of the Atlantic.

What’s next?

We will run this again! When? Not sure. We would like to run it twice a year, perhaps again during Open Access Week with another session around March or April. Are you interested in helping to organise it? Give me a shout! We can shape a ReproHack that better fits our needs (and our researchers!).

Thanks

A million thanks to Daniela Ballari; her talk was illuminating and helped set the goals of the event!

A million thanks too to Anna Krystalli, a fellow Research Software Engineer at the University of Sheffield, who created this event format and provided a lot of help to get us ready! She is a Software Sustainability Institute Fellow, and the SSI gave the initial push for ReproHack to exist. We also want to thank the RSE group at Sheffield, as we used some of their resources to run the event!

I also want to thank the organisers of the ReproHack at LatinR (thanks Florencia!); their event ran just weeks before ours, and seeing how they organised it was super helpful!

Reproducibility events and initiatives from the UKRN

Kirsty, 11 November 2020

Improve your workflow for reproducible science: We recently hosted this workshop on reproducible data analysis in R Markdown and Git, led by data scientist Mine Cetinkaya-Rundel. The recording can be found here and the materials here.

Open science in COVID-19 research, ReproducibiliTea journal club (2 December): Dr Lonni Besançon will present his paper ‘Open Science Saves Lives: Lessons from the COVID-19 Pandemic’. Further details and registration can be found here.

Funding for activities to develop data skills/software: UCL is offering £3000 in total for projects (£600 each) that support community-based activities which contribute to the development of software and data skills, foster interdisciplinary research through the reuse of tools and resources (e.g. algorithms, data and software), or strengthen positive attributes of the community. The aim is also to give PhD students, research staff and professional services staff opportunities to develop their leadership and advocacy skills. The deadline to apply is 23 November 2020 (noon). Further details can be found here.

Case study: Disseminating early research findings to influence decision makers

Nazlin Bhimani, 6 November 2020

A classroom in Uganda

Photograph by Dr Simone Datzberger

Recently a researcher asked for our advice on the best way to disseminate her preliminary findings from a cross-disciplinary research project on COVID-19. She wanted to ensure policy makers in East Africa had immediate access to the findings so that they could make informed decisions. The researcher was aware that traditional models of publishing were not appropriate, not simply because of the length of time it generally takes for an article to be peer-reviewed and published, but because the findings would, most likely, be inaccessible to her intended audience in a subscription-based journal.

The Research Support and Open Access team advised the researcher to take a two-pronged approach: (1) upload the working paper with the preliminary findings to a subject-specific open-access preprint service; and (2) publicise the research findings on an online platform that is both credible and open access. We suggested she use SocArXiv and publish a summary of her findings in The Conversation Africa, which has a special section on COVID-19. The Conversation has several country-specific editions for Australia, Canada English, Canada French, France, Global Perspectives, Indonesia, New Zealand, Spain, the United Kingdom and the United States, and is a useful vehicle for getting academic research read by decision makers and members of the public. We also suggested that the researcher publicise the research on the IOE London Blog.

What are ‘working papers’ & ‘preprint services’?

UCL’s Institute of Education has a long-standing tradition of publishing working papers to signal work in progress, share initial findings, and elicit feedback from other researchers working in the same area. The preprint service used thus far at the IOE is RePEc (Research Papers in Economics), which includes papers in education and the related social sciences. RePEc is indexed by the database publisher EBSCO (in EconLit), as well as by Google Scholar and Microsoft Academic Search; commercial platforms such as ResearchGate also trawl RePEc and index its content. Until it was purchased by Elsevier in May 2016, the Social Science Research Network (SSRN) was the other popular preprint repository used by IOE researchers, although its content is indexed mainly for its conference proceedings. The sale of SSRN to Elsevier caused a fallout between authors and the publisher, and it was in this context that SocArXiv entered the scene. SocArXiv is an open-access, open-source preprint server for the social sciences which accepts text files, data and code. It is the brainchild of the non-profit Center for Open Science (COS), whose mission is to increase the openness, integrity and reproducibility of research, values that are shared by UCL and promoted on this blog and by the newly formed Office for Open Science and Scholarship (for more information see also the Pillars of Open Science). In the spirit of openness, most papers on SocArXiv use the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND) licence, which safeguards the rights of the author. As papers on SocArXiv are automatically assigned Digital Object Identifiers (DOIs), they are discoverable on the web, particularly as Google Scholar indexes SocArXiv content.

What are the benefits of using preprint servers?

Whilst research repositories such as UCL Discovery are constrained by publisher policies on what research can be made open access, this is not always the case for papers submitted to subject-based preprint repositories. Without wanting to repeat what my colleague Patrycja Barczymska has already written in her post on preprints, I can confirm that, in addition to signalling the research findings and eliciting feedback, the benefits of depositing in preprint servers include enhanced discoverability (most will automatically generate a DOI when a paper is uploaded), the possibility of obtaining early citations, and the alternative metrics that indicate interest (e.g. the number of downloads, mentions, etc.) that services such as SocArXiv provide. Researchers can also list open-access working papers in funding applications.

Does uploading a working paper to a preprint server hinder the publication of the final paper?

Researchers are concerned, and rightly so, that publishers may not publish their final research output if preliminary findings have been deposited in preprint servers as working papers. However, more often than not, working papers are exactly that: work in progress. They are not the final article that gets submitted for publication, and the preliminary findings and conclusions in the working paper are likely to differ somewhat from the final version. It is worth knowing that some of the key social sciences publishers, such as Sage, Springer, Taylor and Francis / Routledge, and Wiley, explicitly state that they will accept content that has been deposited on a preprint server, as long as it is a non-commercial preprint service. In other words, researchers must not upload their working papers to platforms such as academia.edu and ResearchGate.

These ‘preprint-friendly’ publishers simply ask that the author informs them of the existence of a preprint and provides the DOI of the working paper at the time of submitting their article. Some ask that authors update the preprint to include the bibliographic details, including the new DOI, when their article is published, and that authors add a statement requesting readers to cite the published article rather than the preprint publication. Although a definitive list of individual journal policies does not exist, submission guidelines generally clarify issues related to preprints. Researchers may want to use the Sherpa Romeo service (and Sherpa Juliet for key funder policies) to obtain additional information.

More than a success story

The above case demonstrates how preliminary research findings can be shared expeditiously and in an open environment to aid the decision-making process. It also shows that open-access, subject-specific preprint services can benefit both the research and the researcher, and that there is now wider acceptance among publishers that the traditional models of publishing are not always viable. This is especially true where cutting-edge research is required, as in the case of research on COVID-19.