
Open@UCL Blog


Archive for the 'Awards' Category

Announcing: UCL Open Science & Scholarship Award Winners 2024!

By Rafael, on 2 October 2024

[Image: a gold medal with a blue ribbon, inscribed 'You're A Winner', on a dark surface sprinkled with confetti.]

On behalf of the UCL Office for Open Science & Scholarship and the UKRN local leads, we would like to thank everyone who engaged with the nominations and showed us how amazing the research community at UCL is. We were overwhelmed by the support for this process, and the judging panel had a really hard job selecting just a few winners from the more than 50 applications and nominations we received!

We will be presenting the awards in a small ceremony during International Open Access Week, from 2pm to 3:30pm on Wednesday 23rd October (for the full list of events for that week, check our blog post). A selection of winners and honourable mentions will present their work, followed by a small reception sponsored by UCL Press.

Due to the small venue, tickets are limited, but they are available on Eventbrite for UCL staff and students.

Full information about all of these projects will be available on the day of the awards, so watch this space!

Category: Students

Winners:
• Sophie Ka Ling Lau and Divya Balain, postgraduate students at the Faculty of Brain Sciences and Life Sciences

Honourable mentions:
• Beth Downe, MSc in Ecology and Data Science at Division of Biosciences
• Gabrielle Pengu Shao, Undergraduate student in Geography

Category: Non-academic Staff

Winners:
• Dr Eirini-Christina Saloniki, Senior Research Fellow in Health Economics (NIHR ARC North Thames) in the Department of Applied Health Research
• William Lammons, Patient and Public Involvement and Engagement Lead for the Applied Research Collaboration North Thames

Category: Open Publishing

Winner:
• Dr Emily Gardner, Research Fellow in the Department of Genetics & Genomic Medicine

Honourable mentions:
• Dr Deborah Padfield, Associate Professor at the Slade School of Fine Art
• Dr Adam Parker, Lecturer in the Division of Psychology and Language Sciences (with David Shanks, Courtenay Norbury, and Daryl Lee)

Category: Open-Source Software/Analytical Tools

Winner:
• Alessandro Felder (on behalf of the BrainGlobe team), Research Software Engineer in the Neuroinformatics Unit at the Sainsbury Wellcome Centre and technical lead for the BrainGlobe initiative

Honourable mentions:
• Hengrui Zhang, PhD student at the Institute of Health Informatics
• Mathilde Ripart, PhD student at the Great Ormond Street Institute of Child Health
• Prof Justyna Petke, Professor of Software Engineering at the Centre for Research on Evolution, Search and Testing
• Dr Enny van Beest, Senior Research Associate at the Institute of Ophthalmology, and Dr Célian Bimbard, Senior Research Fellow, Institute of Ophthalmology

Category: Advocating for Open Science/Community Building

Winner:
• Dr Joseph Cook, Lead of the UCL Citizen Science Academy at the Institute for Global Prosperity

Honourable mentions:
• Claire Waddington, PhD student at the Dementia Research Centre
• Fan Cheng, PhD student at the Faculty of Population Health Sciences

Book your tickets now and join us in celebrating the incredible open science work happening at UCL!

For more information about the UCL Open Science and Scholarship Awards, visit our webpage. You can also stay connected by following us on LinkedIn or BlueSky, and be sure to subscribe to our newsletter for the latest updates on the awards and all things open science at UCL!

Open Source Software Design for Academia

By Kirsty, on 27 August 2024

Guest post by Julie Fabre, PhD candidate in Systems Neuroscience at UCL. 

As a neuroscientist who has designed several open source software projects, I’ve experienced firsthand both the power and pitfalls of the process. Many researchers, myself included, have learned to code on the job, and there’s often a significant gap between writing functional code and designing robust software systems. This gap becomes especially apparent when developing tools for the scientific community, where reliability, usability, and maintainability are crucial.

My journey in open source software development has led to the creation of several tools that have gained traction in the neuroscience community. One such project is bombcell: a software package designed to assess the quality of recorded neural units. This tool replaces what was once a laborious manual process and is now used in over 30 labs worldwide. I've also developed several other, smaller toolboxes for neuroscience.

These efforts were recognized last year when I received an honourable mention in the UCL Open Science and Scholarship Awards.

In this post, I’ll share insights gained from these experiences. I’ll cover, with some simplified examples from my toolboxes:

  1. Core design principles
  2. Open source best practices for academia

Disclaimer: I am not claiming to be an expert. Don’t view this as a definitive guide, but rather as a conversation starter.


Follow Julie's lead: whether you're directly involved in open source software development or any other aspect of open science and scholarship, or you simply know someone who has made important contributions, consider applying yourself or nominating a colleague for this year's UCL Open Science and Scholarship Awards to gain recognition for outstanding work!


Part 1: Core Design Principles

As researchers, we often focus on getting our code to work, but good software design goes beyond mere functionality. If you want to maintain and build upon your software, following a few principles from the get-go will elevate it from "it works" to "it's a joy to use, maintain, and contribute to".

1. Complexity is the enemy

A primary goal of good software design is to reduce complexity. One effective way to simplify complex functions with many parameters is to use configuration objects. This approach not only reduces parameter clutter but also makes functions more flexible and maintainable. Additionally, breaking down large functions into smaller, more manageable pieces can significantly reduce overall complexity.

Example: Simplifying a data analysis function

For instance, in bombcell we run many different quality metrics, and each quality metric is associated with several other parameters. In the main function, instead of inputting all the different parameters independently:

[qMetric, unitType] = runAllQualityMetrics(plotDetails, plotGlobal, verbose, reExtractRaw, saveAsTSV, removeDuplicateSpikes, duplicateSpikeWindow_s, detrendWaveform, nRawSpikesToExtract, spikeWidth, computeSpatialDecay, probeType, waveformBaselineNoiseWindow, tauR_values, tauC, computeTimeChunks, deltaTimeChunks, presenceRatioBinSize, driftBinSize, ephys_sample_rate, nChannelsIsoDist, normalizeSpDecay, (... many many more parameters ...), rawData, savePath);

they are all stored in a ‘param’ object that is passed onto the function:

[qMetric, unitType] = runAllQualityMetrics(param, rawData, savePath);

This approach reduces parameter clutter and makes the function more flexible and maintainable.
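The same pattern works in other languages too. Here is a minimal Python sketch (the names and defaults are hypothetical, not bombcell's actual parameters) using a dataclass as the configuration object:

```python
from dataclasses import dataclass, field

@dataclass
class QualityMetricParams:
    """Configuration object: one place for every analysis parameter."""
    verbose: bool = True
    remove_duplicate_spikes: bool = True
    duplicate_spike_window_s: float = 0.0001
    presence_ratio_bin_size: float = 60.0
    tau_r_values: list = field(default_factory=lambda: [0.002])

def run_all_quality_metrics(params: QualityMetricParams, raw_data, save_path):
    """One config argument instead of dozens of positional parameters."""
    if params.verbose:
        print(f"Running metrics on {len(raw_data)} units")
    return {"n_units": len(raw_data), "bin_size": params.presence_ratio_bin_size}

# Callers override only the defaults they care about.
params = QualityMetricParams(verbose=False, presence_ratio_bin_size=30.0)
result = run_all_quality_metrics(params, raw_data=[1, 2, 3], save_path="/tmp/qm")
```

Adding a new parameter now means adding one field with a default; no call site has to change.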

2. Design for change

Research software often needs to adapt to new hypotheses or methodologies. When writing a function, ask yourself “what additional functionalities might I need in the future?” and design your code accordingly. Implementing modular designs allows for easy modification and extension as research requirements evolve. Consider using dependency injection to make components more flexible and testable. This approach separates the creation of objects from their usage, making it easier to swap out implementations or add new features without affecting existing code.

Example: Modular design for a data processing pipeline

Instead of a monolithic script:

function runAllQualityMetrics(param, rawData, savePath)
% Hundreds of lines of code doing many different things
(...)
end

Create a modular pipeline that separates each quality metric into a different function:

function qMetric = runAllQualityMetrics(param, rawData, savePath)
nUnits = length(rawData);
for iUnit = 1:nUnits
% step 1: calculate percentage spikes missing
qMetric.percSpikesMissing(iUnit) = bc.qm.percSpikesMissing(param, rawData);
% step 2: calculate fraction refractory period violations
qMetric.fractionRPviolations(iUnit) = bc.qm.fractionRPviolations(param, rawData);
% step 3: calculate presence ratio
qMetric.presenceRatio(iUnit) = bc.qm.presenceRatio(param, rawData);
(...)
% step n: calculate distance metrics
qMetric.distanceMetric(iUnit) = bc.qm.getDistanceMetric(param, rawData);
end
bc.qm.saveQMetrics(qMetric, savePath)
end

This structure allows for easy modification of individual steps or addition of new steps without affecting the entire pipeline.

In addition, this structure lets us easily define new parameters that modify the behaviour of the subfunctions. For instance, we can add different methods (such as the 'gaussian' option below) without changing how any of the functions are called!

param.percSpikesMissingMethod = 'gaussian';
qMetric.percSpikesMissing(iUnit) = bc.qm.percSpikesMissing(param, rawData);

and then, inside the function:

function percSpikesMissing = percSpikesMissing(param, rawData)
if strcmp(param.percSpikesMissingMethod, 'gaussian')
(...)
else
(...)
end
end
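The dependency-injection idea mentioned above can be sketched in Python (all names and the metric logic here are hypothetical placeholders): instead of hard-coding which metrics run, the pipeline receives them as arguments.

```python
def perc_spikes_missing(spike_times):
    # Placeholder metric: pretend everything is missing when there are no spikes.
    return 0.0 if spike_times else 1.0

def presence_ratio(spike_times):
    # Placeholder metric: 1.0 when any spikes are present.
    return 1.0 if spike_times else 0.0

def run_pipeline(units, metrics):
    """Metrics are injected as a dict of name -> function, so callers can
    add, remove, or replace metrics without editing the pipeline itself."""
    return {name: [fn(u) for u in units] for name, fn in metrics.items()}

units = [[0.1, 0.2], []]
results = run_pipeline(units, {"perc_missing": perc_spikes_missing,
                               "presence": presence_ratio})
```

Swapping in a new metric is now just one more dictionary entry; the pipeline never changes.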

3. Hide complexity

Expose only what’s necessary to use a module or function, hiding the complex implementation details. Use abstraction layers to separate interface from implementation, providing clear and concise public APIs while keeping complex logic private. This approach not only makes your software easier to use but also allows you to refactor and optimize internal implementations without affecting users of your code.

Example: Complex algorithm with a simple interface

For instance, in bombcell there are many parameters. When we run the main script that calls all quality metrics, we also want to ensure all parameters are present and are in a correct format.

function qMetric = runAllQualityMetrics(param, rawData, savePath)
% Complex input validation that is hidden to the user
param_complete = bc.qm.checkParameterFields(param);

% Core function that calculates all quality metrics
nUnits = length(rawData);

for iUnit = 1:nUnits
% steps 1 to n
(...)
end

end

Users of this function don’t need to know about the input validation or other complex calculations. They just need to provide input and options.

4. Write clear code

Clear code reduces the need for extensive documentation and makes your software more accessible to collaborators. Use descriptive and consistent variable names throughout your codebase. When dealing with specific quantities, consider adding units to variable names (e.g., ‘time_ms’ for milliseconds) to improve clarity. You can add comments to explain non-obvious logic and to add general outlines of the steps in your code. Following consistent coding style and formatting guidelines across your project also contributes to overall clarity.

Example: Improving clarity in a data processing function

Instead of an entirely mysterious function

function [ns, sr] = ns(st, t)
ns = numel(st);
sr = ns/t;
end

Add more descriptive variable and function names and add function headers:

function [nSpikes, spikeRate] = numberSpikes(theseSpikeTimes, totalTime_s)
% Count the number of spikes for the current unit
% ------
% Inputs
% ------
% theseSpikeTimes: [nSpikesForThisUnit × 1 double vector] time, in seconds, of each of the unit's spikes.
% totalTime_s: [double] total recording time, in seconds.
% ------
% Outputs
% ------
% nSpikes: [double] number of spikes for the current unit.
% spikeRate: [double] spike rate for the current unit, in Hz.
% ------
nSpikes = numel(theseSpikeTimes);
spikeRate = nSpikes/totalTime_s;
end

5. Design for testing

Incorporate testing into your design process from the beginning. This not only catches bugs early but also encourages modular, well-defined components.

Example: Testable design for a data analysis function

For the simple 'numberSpikes' function defined above, we can write a few tests covering normal scenarios and edge cases to ensure the function works correctly. For instance, we can test a normal case with a few spikes, and an empty spike-times input.

function testNormalCase(testCase)
theseSpikeTimes = [0.1, 0.2, 0.3, 0.4, 0.5]; totalTime_s = 1;
[nSpikes, spikeRate] = numberSpikes(theseSpikeTimes, totalTime_s);
verifyEqual(testCase, nSpikes, 5, 'Number of spikes should be 5');
verifyEqual(testCase, spikeRate, 5, 'Spike rate should be 5 Hz');
end

function testEmptySpikeTimes(testCase)
theseSpikeTimes = [];
totalTime_s = 1;
[nSpikes, spikeRate] = numberSpikes(theseSpikeTimes, totalTime_s);
verifyEqual(testCase, nSpikes, 0, 'Number of spikes should be 0 for empty input');
verifyEqual(testCase, spikeRate, 0, 'Spike rate should be 0 for empty input');
end

This design allows for easy unit testing of individual components of the analysis pipeline.
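For comparison, the same two tests in a Python project could be plain pytest-style functions (a sketch assuming a hypothetical Python port of numberSpikes):

```python
def number_spikes(spike_times, total_time_s):
    # Count spikes and compute the spike rate in Hz.
    n_spikes = len(spike_times)
    return n_spikes, n_spikes / total_time_s

def test_normal_case():
    n_spikes, spike_rate = number_spikes([0.1, 0.2, 0.3, 0.4, 0.5], total_time_s=1)
    assert n_spikes == 5
    assert spike_rate == 5  # 5 spikes in 1 s is 5 Hz

def test_empty_spike_times():
    n_spikes, spike_rate = number_spikes([], total_time_s=1)
    assert n_spikes == 0
    assert spike_rate == 0

# pytest would discover these automatically; here we run them directly.
test_normal_case()
test_empty_spike_times()
```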

Part 2: Open Source Best Practices for Academia

While using version control and having a README, documentation, license, and contribution guidelines are essential, I have found that these practices have the most impact:

Example Scripts and Toy Data

I have found that the most useful things you can provide with your software are example scripts and, even better, toy data that loads in your example script. Users can then quickly test your software and see how to use it on their own data, and are then more likely to adopt it. If possible, package the example scripts as Jupyter notebooks/MATLAB live scripts (or equivalent) demonstrating key use cases. In bombcell, we provide a small dataset (Bombcell Toy Data on GitHub) and a MATLAB live script that runs bombcell on this toy dataset (Getting Started with Bombcell on GitHub).
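A toy example script can be very small. Here is a Python sketch (the data format is invented for illustration): the script generates its own data, so users can run it with no downloads.

```python
import random

# Generate toy data in code so the example runs with no external files
# (hypothetical format: one list of spike times per unit, 10 s recording).
random.seed(0)
toy_units = [sorted(random.uniform(0, 10) for _ in range(50)) for _ in range(3)]

# Walk through the core workflow step by step, as a user would.
rates_hz = []
for spike_times in toy_units:
    rates_hz.append(len(spike_times) / 10.0)
```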

Issue-Driven Improvement

To manage user feedback effectively, enforce the use of an issue tracker (like GitHub Issues) for all communications. This approach ensures that other users can benefit from conversations and reduces repetitive work. When addressing questions or bugs, consider if there are ways to improve documentation or add safeguards to prevent similar issues in the future. This iterative process leads to more robust and intuitive software.

Citing

Make your software citable quickly. Before (or instead of) publishing, you can generate a citable DOI using a service like Zenodo. Consider also publishing in the Journal of Open Source Software (JOSS) for light peer review. Clearly outline how users should cite your software in their publications to ensure proper recognition of your work.
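One lightweight way to do this is a CITATION.cff file in the repository root, which GitHub renders as a "Cite this repository" button. A minimal sketch, with every field value a placeholder to replace:

```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "MyToolbox"            # placeholder project name
authors:
  - family-names: "Doe"       # placeholder author
    given-names: "Jane"
version: "1.0.0"
doi: "10.5281/zenodo.0000000" # placeholder DOI, e.g. from Zenodo
date-released: "2024-01-01"
```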

Conclusion

These practices can help create popular, user-friendly, and robust academic software. Remember that good software design is an iterative process, and continuously seeking feedback and improving your codebase (and sometimes entirely rewriting/refactoring parts) will lead to more robust code.

To go deeper into principles of software design, I highly recommend reading “A Philosophy of Software Design” by John Ousterhout or “The Good Research Code Handbook” by Patrick J. Mineault.

Get involved! 

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Join our mailing list, and follow us on X (formerly Twitter) and LinkedIn, to stay connected for updates, events, and opportunities.

UCL Open Science & Scholarship Awards – Update from Mike and Gesche!

By Kirsty, on 21 August 2024

As part of our work at the Office this year, we’ve made it a priority to stay connected with all of our award winners. Some of them shared their experiences during our conference, and we’re already well on our way to planning another exciting Awards ceremony for this year’s winners!

You can apply now for the UCL Open Science & Scholarship Awards 2024 to celebrate UCL students and staff who are advancing and promoting open science and scholarship. The awards are open to all UCL students, PhD candidates, professional services, and academic staff across all disciplines. There's still time to submit your applications and nominations in all categories; the deadline is 1 September!

To give you some inspiration for what’s possible in open science, Mike Fell has given us an update on the work that he and Gesche have done since receiving their award last year:


In autumn last year, we were surprised and really happy to hear we’d received the first UCL Open Scholarship Awards. Even more so when we heard at the ceremony about the great projects that others at UCL are doing in this space.

The award was for work we'd done (together with PhD colleague Nicole Watson) to improve the transparency, reproducibility, and quality (TReQ) of research in applied multidisciplinary areas like energy. This included producing videos, writing papers, and delivering teaching and related resources.

Of course, it’s nice for initiatives you’ve been involved in to be recognized. But even better have been some of the doors this recognition has helped to open. Shortly after getting the award, we were invited to write an opinion piece for PLOS Climate on the role of open science in addressing the climate crisis. We also engaged with leadership at the Center for Open Science.

More broadly, although it's always hard to draw direct connections, we feel the award has had career benefits. Gesche was recently appointed Professor of Environment & Human Health at the University of Exeter, and Director of the European Centre for Environment and Human Health. As well as highlighting her work on open science, and the award, in her application, this now provides an opportunity to spread the work further beyond the bounds of UCL and our existing research projects.

There's still a lot to do, however. While teaching about open science is now a standard part of the curriculum for graduate students in our UCL department (and Gesche is planning this for the ECEHH too), we don't have a sense that this is common in energy research, other applied research fields, or education more broadly. It's still quite rare to see tools like pre-analysis plans, reporting guidelines, and even preprints employed in energy research.

A new research centre we are both involved in, the UKRI Energy Demand Research Centre, has been up and running for a year, and with lots of the setup stage now complete and staff in place, we hope to pick up a strand of work in this area. Gesche is the data champion for the Equity theme of that centre. The new focus must be on how to better socialize open research practices and make them more a part of the culture of doing energy research. We look forward to continuing to work with UCL Open Science in achieving that goal.
