Open@UCL Blog

Shaping Tomorrow’s Healthcare: Unlocking the Power of Open-Source AI and Robotics

By Naomi, on 17 September 2025

Guest post by Miguel Xochicale, Senior Research Engineer at UCL, leading work on open-source technologies from robotics to AI tools designed to improve healthcare.

In this blog post, I would like to reflect on the journey so far and on the challenges and rewards of trying to build something meaningful with modest means, and to look ahead at how open-source AI and robotics can help shape the next generation of healthcare.

In October 2023, I was honoured to receive the UCL Open Science & Scholarship Award in the category ‘Professional Services Staff Activities’. This was in recognition of initiating a half-day workshop titled ‘Open-source software for surgical technologies’, where I had the privilege of hosting seven distinguished speakers for an audience of around twenty participants. That first workshop became the springboard for subsequent events in 2024 and 2025, each growing gradually in co-organisers, speakers, activities, and participants. What started as a small gathering with limited resources and time has begun to take shape as the early foundations of a community that we would like to keep building.

A group of about 30 people stand on a stage, posing for the photo, in front of a large screen on which is written “Healing Through Collaboration: Open-Source Software in Surgical, Biomedical and AI Technologies”.

The 2025 workshop took place at the 17th Hamlyn Symposium on Medical Robotics, organised by the Hamlyn Centre for Robotic Surgery at Imperial College London and held at the Royal Geographical Society, London, UK.

The workshop has grown steadily across 2023, 2024 and 2025:

2023: A half-day workshop with 7 speakers from software engineering and academia, 25 participants, and 4 organisers.

2024: A full-day workshop featuring 10 speakers from academia and industry, 6 posters archived on Zenodo, 30 participants, and 3 co-organisers.

2025: A full-day workshop with 13 speakers and 7 panellists from academia, industry and regulatory backgrounds. It included 6 posters, each with a two-page abstract, supported by 6 organisers and 2 volunteers. The event sold out, with all 52 seats filled.

Leading such workshops requires careful planning well in advance, ideally starting a year beforehand. This includes checking the availability and interest of co-organisers, aligning the agenda, and building relationships with new speakers and collaborators from different institutions and industries. Such relationships should extend beyond purely scientific or engineering goals, fostering an environment where people also enjoy working together. In organising these workshops, we were always careful to balance responsibilities so that no one felt overwhelmed.

However, despite these considerations, our most recent workshop was scheduled too tightly, leaving little space for meaningful conversations or questions. From this, we learnt the value of tailoring the workshop to the audience, setting clear aims for the community, and creating win–win situations for everyone involved.

We also recognised that funding and sponsorship are essential. They can help cover costs such as materials (souvenirs, t-shirts, stickers), support for guest speakers, and sponsorship for students from around the world. Just as importantly, they could allow us to be compensated for the time we dedicate to organising these events.

What started as a half-day workshop in 2023 on open-source software for surgical technologies has quickly grown into a movement. By 2024 and 2025, it had developed into full-day workshops, “Healing Through Collaboration: Open-Source Software in Surgical, Biomedical, and AI Technologies”, bringing together co-organisers, speakers, and volunteers dedicated to shaping the future of healthcare with open-source AI and robotics. Each year, the community grows, the insights deepen, and the vision becomes sharper. We are now looking for like-minded collaborators, sponsors, and co-organisers to help drive this effort forward. The momentum is here; together, we can redefine what’s possible for open-source innovation in healthcare. By pooling our skills, resources, and passion, we have the chance not just to advance technology, but to transform patient outcomes and make healthcare more open, accessible, and equitable worldwide.

Get Involved!

Help us continue building a vibrant community by following our GitHub organisation, starring our repositories (including the website for the workshops), creating issues or pull requests to improve materials, or contributing to the writing of our white paper in its GitHub repository.

We are always looking for like-minded people who share our vision of open-source software, hardware and technologies benefiting everyone, everywhere. If you are interested in driving healthcare forward with open source, please get in touch with me or join our Discord server for networking, discussions, and event updates. Recorded talks will also be available on the symposium’s YouTube channel. Many more opportunities to get involved are on the way.

Author Biography:

Miguel Xochicale specialises in medical imaging, MedTech, SurgTech, biomechanics, and clinical translation, and is currently exploring physical AI and embodied AI, with a strong focus on open, accessible innovation. Miguel aims to turn cutting-edge research into real-world solutions with lasting impact. Key areas of his work include:

  1. End-to-end real-time AI workflows for surgery
  2. Eye movement analysis for neurological disorders
  3. AI-assisted echocardiography
  4. Sensor fusion combining wearables, EEG devices, and medical imaging
  5. Generative AI for fetal ultrasound scans
  6. Human–robot and child–robot interaction in healthcare and low-resource settings
  7. Physical and embodied AI with multimodal data

He is committed to transforming healthcare through safe, scalable, and open AI solutions. If you are interested in collaborating, whether in research, academic-industry partnerships, or developing AI-powered healthcare software, let’s connect.

 


The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Stay connected for updates, events, and opportunities.

Follow us on Bluesky, LinkedIn, and join our mailing list to be part of the conversation!

Open Source Software Design for Academia

By Kirsty, on 27 August 2024

Guest post by Julie Fabre, PhD candidate in Systems Neuroscience at UCL. 

As a neuroscientist who has designed several open source software projects, I’ve experienced firsthand both the power and pitfalls of the process. Many researchers, myself included, have learned to code on the job, and there’s often a significant gap between writing functional code and designing robust software systems. This gap becomes especially apparent when developing tools for the scientific community, where reliability, usability, and maintainability are crucial.

My journey in open source software development has led to the creation of several tools that have gained traction in the neuroscience community. One such project is bombcell, software designed to assess the quality of recorded neural units. This tool replaces what was once a laborious manual process and is now used in over 30 labs worldwide. I have also developed several other, smaller toolboxes for neuroscience.

These efforts were recognized last year when I received an honourable mention in the UCL Open Science and Scholarship Awards.

In this post, I’ll share insights gained from these experiences. I’ll cover, with some simplified examples from my toolboxes:

  1. Core design principles
  2. Open source best practices for academia

Disclaimer: I am not claiming to be an expert. Don’t view this as a definitive guide, but rather as a conversation starter.


Follow Julie’s lead: Whether you’re directly involved in open source software development or any other aspect of open science and scholarship, or if you simply know someone who has made important contributions, consider applying yourself or nominating a colleague for this year’s UCL Open Science and Scholarship Awards to gain recognition for outstanding work!


Part 1: Core Design Principles

As researchers, we often focus on getting our code to work, but good software design goes beyond functionality. If you want to maintain and build upon your software, following a few principles from the get-go will elevate it from “it works” to “it’s a joy to use, maintain and contribute to”.

1. Complexity is the enemy

A primary goal of good software design is to reduce complexity. One effective way to simplify complex functions with many parameters is to use configuration objects. This approach not only reduces parameter clutter but also makes functions more flexible and maintainable. Additionally, breaking down large functions into smaller, more manageable pieces can significantly reduce overall complexity.

Example: Simplifying a data analysis function

For instance, in bombcell we run many different quality metrics, and each quality metric is associated with several other parameters. In the main function, instead of inputting all the different parameters independently:

[qMetric, unitType] = runAllQualityMetrics(plotDetails, plotGlobal, verbose, reExtractRaw, saveAsTSV, removeDuplicateSpikes, duplicateSpikeWindow_s, detrendWaveform, nRawSpikesToExtract, spikeWidth, computeSpatialDecay, probeType, waveformBaselineNoiseWindow, tauR_values, tauC, computeTimeChunks, deltaTimeChunks, presenceRatioBinSize, driftBinSize, ephys_sample_rate, nChannelsIsoDist, normalizeSpDecay, (... many many more parameters ...), rawData, savePath);

they are all stored in a ‘param’ object that is passed onto the function:

[qMetric, unitType] = runAllQualityMetrics(param, rawData, savePath);

This approach reduces parameter clutter and makes the function more flexible and maintainable.

2. Design for change

Research software often needs to adapt to new hypotheses or methodologies. When writing a function, ask yourself “what additional functionalities might I need in the future?” and design your code accordingly. Implementing modular designs allows for easy modification and extension as research requirements evolve. Consider using dependency injection to make components more flexible and testable. This approach separates the creation of objects from their usage, making it easier to swap out implementations or add new features without affecting existing code.

Example: Modular design for a data processing pipeline

Instead of a monolithic script:

function runAllQualityMetrics(param, rawData, savePath)
% Hundreds of lines of code doing many different things
(...)
end

Create a modular pipeline that separates each quality metric into a different function:

function qMetric = runAllQualityMetrics(param, rawData, savePath)
nUnits = length(rawData);
for iUnit = 1:nUnits
% step 1: calculate percentage spikes missing
qMetric.percSpikesMissing(iUnit) = bc.qm.percSpikesMissing(param, rawData);
% step 2: calculate fraction refractory period violations
qMetric.fractionRPviolations(iUnit) = bc.qm.fractionRPviolations(param, rawData);
% step 3: calculate presence ratio
qMetric.presenceRatio(iUnit) = bc.qm.presenceRatio(param, rawData);
(...)
% step n: calculate distance metrics
qMetric.distanceMetric(iUnit) = bc.qm.getDistanceMetric(param, rawData);
end
bc.qm.saveQMetrics(qMetric, savePath)
end

This structure allows for easy modification of individual steps or addition of new steps without affecting the entire pipeline.

In addition, this structure makes it easy to define new parameters that modify the behavior of the subfunctions. For instance, we can add different methods (such as the ‘gaussian’ option below) without changing how any of the functions are called!

param.percSpikesMissingMethod = 'gaussian';
qMetric.percSpikesMissing(iUnit) = bc.qm.percSpikesMissing(param, rawData);

and then, inside the function:

function percSpikesMissing = percSpikesMissing(param, rawData)
if strcmp(param.percSpikesMissingMethod, 'gaussian')
(...)
else
(...)
end
end
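The dependency injection mentioned above can be sketched in Python. This is a hypothetical illustration, not bombcell's actual API: the metric functions are injected into the pipeline as callables, so new metrics can be added, or swapped for test doubles, without editing the pipeline itself.

```python
# Hypothetical sketch of dependency injection for a metrics pipeline.
# All function names and values are illustrative, not bombcell's real API.

def perc_spikes_missing(param, unit_data):
    """Dummy metric: estimated fraction of spikes missing."""
    return 0.05

def presence_ratio(param, unit_data):
    """Dummy metric: fraction of time bins containing spikes."""
    return 0.9

def run_quality_metrics(metrics, param, raw_data):
    """Run every injected metric on every unit.

    `metrics` is a dict of name -> callable: the pipeline never
    hard-codes which metrics exist, so adding one is a one-line change
    at the call site rather than an edit to this function.
    """
    results = []
    for unit_data in raw_data:
        results.append({name: fn(param, unit_data)
                        for name, fn in metrics.items()})
    return results

metrics = {"percSpikesMissing": perc_spikes_missing,
           "presenceRatio": presence_ratio}
out = run_quality_metrics(metrics, param={}, raw_data=[object(), object()])
```

Swapping `perc_spikes_missing` for a stub that returns a fixed value also makes the pipeline trivially testable, which is the other payoff of this design.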

3. Hide complexity

Expose only what’s necessary to use a module or function, hiding the complex implementation details. Use abstraction layers to separate interface from implementation, providing clear and concise public APIs while keeping complex logic private. This approach not only makes your software easier to use but also allows you to refactor and optimize internal implementations without affecting users of your code.

Example: Complex algorithm with a simple interface

For instance, bombcell has many parameters. When we run the main script that calls all quality metrics, we also want to ensure that all parameters are present and in the correct format.

function qMetric = runAllQualityMetrics(param, rawData, savePath)
% Complex input validation that is hidden to the user
param_complete = bc.qm.checkParameterFields(param);

% Core function that calculates all quality metrics
nUnits = length(rawData);

for iUnit = 1:nUnits
% steps 1 to n
(...)
end

end

Users of this function don’t need to know about the input validation or other complex calculations. They just need to provide input and options.

4. Write clear code

Clear code reduces the need for extensive documentation and makes your software more accessible to collaborators. Use descriptive and consistent variable names throughout your codebase. When dealing with specific quantities, consider adding units to variable names (e.g., ‘time_ms’ for milliseconds) to improve clarity. You can add comments to explain non-obvious logic and to add general outlines of the steps in your code. Following consistent coding style and formatting guidelines across your project also contributes to overall clarity.

Example: Improving clarity in a data processing function

Instead of an entirely mysterious function

function [ns, sr] = ns(st, t)
ns = numel(st);
sr = ns/t;

Add more descriptive variable and function names and add function headers:

function [nSpikes, spikeRate_s] = numberSpikes(theseSpikeTimes, totalTime_s)
% Count the number of spikes for the current unit
% ------
% Inputs
% ------
% theseSpikeTimes: [nSpikesforThisUnit × 1 double vector] of time in seconds of each of the unit's spikes.
% totalTime_s: [double] of the total recording time, in seconds.
% ------
% Outputs
% ------
% nSpikes: [double] number of spikes for current unit.
% spikeRate_s: [double] spiking rate for current unit, in Hz.
% ------
nSpikes = numel(theseSpikeTimes);
spikeRate_s = nSpikes/totalTime_s;
end

5. Design for testing

Incorporate testing into your design process from the beginning. This not only catches bugs early but also encourages modular, well-defined components.

Example: Testable design for a data analysis function

For the simple ‘numberSpikes’ function we define above, we can have a few tests to cover various scenarios and edge cases to ensure the function works correctly. For instance, we can test a normal case with a few spikes and an empty spike times input.

function testNormalCase(testCase)
theseSpikeTimes = [0.1, 0.2, 0.3, 0.4, 0.5]; totalTime_s = 1;
[nSpikes, spikeRate] = numberSpikes(theseSpikeTimes, totalTime_s);
verifyEqual(testCase, nSpikes, 5, 'Number of spikes should be 5');
verifyEqual(testCase, spikeRate, 5, 'Spike rate should be 5 Hz');
end

function testEmptySpikeTimes(testCase)
theseSpikeTimes = [];
totalTime_s = 1;
[nSpikes, spikeRate] = numberSpikes(theseSpikeTimes, totalTime_s);
verifyEqual(testCase, nSpikes, 0, 'Number of spikes should be 0 for empty input');
verifyEqual(testCase, spikeRate, 0, 'Spike rate should be 0 for empty input');
end

This design allows for easy unit testing of individual components of the analysis pipeline.

Part 2: Open Source Best Practices for Academia

While using version control and having a README, documentation, license, and contribution guidelines are essential, I have found that these practices have the most impact:

Example Scripts and Toy Data

I have found that the most useful things you can provide with your software are example scripts — and even better, toy data that loads in your example script. Users can then quickly test your software and see how to use it on their own data, and are then more likely to adopt it. If possible, package the example scripts as Jupyter notebooks/MATLAB live scripts (or equivalent) demonstrating key use cases. In bombcell, we provide a small dataset (Bombcell Toy Data on GitHub) and a MATLAB live script that runs bombcell on this small toy dataset (Getting Started with Bombcell on GitHub).
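As a sketch of what such an example script might look like, here is a minimal, self-contained Python version; the toolbox, function names, and numbers are invented for illustration, but the shape is the point: generate tiny toy data, run the main analysis, print the result, so a new user can verify their installation in seconds.

```python
# Hypothetical example script bundled with a toolbox: build a tiny,
# reproducible toy dataset, run the "analysis", and report the result.
import random

def make_toy_spike_times(n_spikes=50, duration_s=10.0, seed=0):
    """Create a small, reproducible toy dataset of spike times (seconds)."""
    rng = random.Random(seed)  # fixed seed so every user sees the same data
    return sorted(rng.uniform(0.0, duration_s) for _ in range(n_spikes))

def spike_rate(spike_times, duration_s):
    """The 'analysis': mean firing rate in Hz."""
    return len(spike_times) / duration_s

toy_spikes = make_toy_spike_times()
rate_hz = spike_rate(toy_spikes, duration_s=10.0)
print(f"Toy dataset: {len(toy_spikes)} spikes, rate = {rate_hz:.1f} Hz")
```

Because the toy data is generated in the script itself, the example has no download step and no path configuration, which removes the two most common reasons a first run fails.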

Issue-Driven Improvement

To manage user feedback effectively, enforce the use of an issue tracker (like GitHub Issues) for all communications. This approach ensures that other users can benefit from conversations and reduces repetitive work. When addressing questions or bugs, consider if there are ways to improve documentation or add safeguards to prevent similar issues in the future. This iterative process leads to more robust and intuitive software.

Citing

Make your software citable quickly. Before (or instead of) publishing, you can generate a citable DOI using a service like Zenodo. Consider also publishing in the Journal of Open Source Software (JOSS) for light peer review. Clearly outline how users should cite your software in their publications to ensure proper recognition of your work.
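One lightweight way to state how to cite is a `CITATION.cff` file in the repository root, which GitHub renders as a “Cite this repository” button. A minimal sketch follows; every field value here is a placeholder to replace with your own details.

```yaml
# CITATION.cff — minimal citation metadata (all values are placeholders)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "your-toolbox"
authors:
  - family-names: "Surname"
    given-names: "Given"
version: "1.0.0"
doi: "10.5281/zenodo.0000000"  # placeholder; use the DOI Zenodo mints for you
```

Pairing this file with the Zenodo–GitHub integration means each tagged release gets its own archived, citable version automatically.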

Conclusion

These practices can help create popular, user-friendly, and robust academic software. Remember that good software design is an iterative process, and continuously seeking feedback and improving your codebase (and sometimes entirely rewriting/refactoring parts) will lead to more robust code.

To go deeper into principles of software design, I highly recommend reading “A Philosophy of Software Design” by John Ousterhout or “The Good Research Code Handbook” by Patrick J. Mineault.

Get involved! 

The UCL Office for Open Science and Scholarship invites you to contribute to the open science and scholarship movement. Join our mailing list, and follow us on X (formerly Twitter) and LinkedIn to stay connected for updates, events, and opportunities.