
Centre for Advanced Research Computing


ARC is UCL's research, innovation and service centre for the tools, practices and systems that enable computational science and digital scholarship


Archive for the 'Collaboration' Category

SAFEHR

By Sarah Keating, on 16 December 2024

What’s in a name?

Electronic Healthcare Records are a vital source of data for a wide range of research, from the implementation of new medical devices to the investigation of cures for disease. However, this data cannot simply be made available to anyone: a written request must be vetted and approved, the appropriate data must be extracted and anonymised, and finally the anonymised data must be deposited in a secure location. In other words, the whole process and the data itself must be SAFE.


What is SAFEHR?

SAFEHR (pronounced simply as safer) is a cross institutional collaboration between University College London Hospital and ARC whose primary goal is to support healthcare researchers at UCL and the hospital.

Data for research

We support researchers who need data by providing a single point of contact for their requests for multimodal data from the hospital. To achieve this we have developed:


  • An online form which captures all the information needed for a data application
  • OMOP-ES – a tool that, given a cohort, extracts the data in the OMOP Common Data Model
  • PIXL – a pipeline for extracting images from the hospital systems
  • A pipeline to anonymise free-text notes using Cogstack
  • Code that links all of these together and delivers the data to the Data Safe Haven
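
As a rough illustration of how these components fit together, here is a minimal orchestration sketch. The class and function names (CohortRequest, extract_omop, extract_images, anonymise_notes, deliver_to_safe_haven) are hypothetical placeholders for illustration only, not the actual SAFEHR, OMOP-ES, PIXL or Cogstack APIs.

```python
# Hypothetical orchestration sketch; all names are illustrative placeholders only.
from dataclasses import dataclass


@dataclass
class CohortRequest:
    """An approved data request, as captured by the online application form."""
    request_id: str
    cohort_query: str      # inclusion/exclusion criteria describing the cohort
    modalities: list[str]  # e.g. ["records", "images", "notes"]


def extract_omop(cohort_query: str) -> dict:
    """Placeholder for OMOP-ES: structured records in the OMOP Common Data Model."""
    return {"type": "omop", "cohort": cohort_query}


def extract_images(cohort_query: str) -> dict:
    """Placeholder for PIXL: de-identified images from hospital systems."""
    return {"type": "images", "cohort": cohort_query}


def anonymise_notes(cohort_query: str) -> dict:
    """Placeholder for the Cogstack-based anonymisation of free-text notes."""
    return {"type": "notes", "cohort": cohort_query}


def deliver_to_safe_haven(request_id: str, outputs: list[dict]) -> None:
    """Placeholder for depositing the anonymised outputs in the Data Safe Haven."""
    print(f"Delivering {len(outputs)} dataset(s) for request {request_id}")


def run_extraction(request: CohortRequest) -> None:
    """Run each extraction step requested in an approved application, then deliver."""
    steps = {"records": extract_omop, "images": extract_images, "notes": anonymise_notes}
    outputs = [steps[m](request.cohort_query) for m in request.modalities if m in steps]
    deliver_to_safe_haven(request.request_id, outputs)


run_extraction(CohortRequest("REQ-001", "adults admitted during 2023", ["records", "notes"]))
```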

Data for education

We have also been instrumental in establishing a process and set of guidelines for the use of this data in education. Students on courses that require healthcare data will in future be able to work with realistic data.

Data for anyone

Building on a project developed by the Turing Institute, we are working on a pipeline that will produce a synthetic version of any data we extract, which can then be made publicly available for anybody to use.

Data we can provide

  • Records
  • Images
  • Reports
  • Demographics
  • X-rays
  • Radiology
  • Admissions
  • MRI
  • MS clinicians’ notes
  • Measurements
  • CT (coming soon)
  • and much more…

Have you noticed it’s all about the data!!

The SAFEHR Launch

SAFEHR was officially launched on 4 December 2024 to an audience of about 50 people, including some prestigious names from both UCL and UCLH. We presented all aspects of our pipeline, from making an enquiry and browsing our data catalogue to getting approval and actually receiving data. The slides from the presentation are available, and you can find more information on our website.

 


Acknowledgements

We acknowledge funding support from UCL, the UCLH Biomedical Research Centre, the Institute of Neurology, Dementia Research Institute and projects led by Prof. Gary Royle and Dr Arman Eshaghi.             


Alignment of the Agile Manifesto to a Research Context

By Monika Byrne Svata, on 14 November 2024

This article proposes aligning the language of the original Agile Manifesto – written over 20 years ago, for software development in a commercial setting – with our current context of digital research projects involving research software engineering, data science, data stewardship, and research infrastructure development.

This work was inspired by discussion about the wording of the Agile Manifesto during the regular Agile Training for Research projects that we run in UCL’s Advanced Research Computing Centre (ARC) for senior staff in our Collaborations team. To gain wider input from colleagues we devoted two ARC “Collaboration Hour” sessions to this topic, with additional conversation held on Slack, some email input, and a period where a draft of this article was available for internal comment.

We hold that the core ideas behind Agile, such as responding to change and valuing people and interactions, are valid and beneficial in a research context. However, the specific expression of these ideas can be improved on – in true Agile fashion! Our aim is that this will make it easier to apply the Agile principles in the management of our Collaborations projects, by removing the cognitive dissonance caused by language inspired by a different context. By publishing this article, we hope that others will see a similar benefit, and we invite feedback from the community.

 

The Original Agile Manifesto

The Agile Manifesto originated in February 2001, at a meeting of representatives of emerging ‘lightweight’ software development approaches, in response to the need for an alternative to documentation-driven, heavyweight software development processes.

Although there are many frameworks to aid the application of Agile approaches in particular settings, the manifesto emphasises that a change of culture within organisations and teams is the key element and the condition for successfully implementing Agile ways of working.

“While the Manifesto provides some specific ideas… there is a deeper theme of values based on trust and respect for each other and promoting organizational models based on people, collaboration, and building the types of organizational communities in which we would want to work.”

“So, in the final analysis, the meteoric rise of interest in – and sometimes tremendous criticism of – Agile Methodologies is about the mushy stuff of values and culture.”

For a fuller history, visit the Agile Manifesto website.

The original Agile Manifesto contains 4 Agile Values and 12 Agile Principles.

Below we give the original text of each alongside our updated version and discuss the reasons for our proposed revisions.

 

 

Key Terms

Although the wording of each value and principle was considered separately, to make sure it reflects both the original meaning and its application to research/academia, we found it useful to first give some consideration and space for discussion to a few of the repeated key terms and to the reality of research projects.

 

For each original term, the discussion about the new wording is given below.

Customer

‘Customer’ implies negotiation and a zero-sum game, rather than a collaboration with a common goal. This also applies to the term ‘Client’.

‘End user’ is a specific term that might not correctly reflect the reality of a research project or correctly describe the collaborators.

‘Collaborators’, ‘our collaborators’ and ‘all collaborators’ feel like the terms that best describe this role.

Valuable software / Working software

Terms like ‘valuable software’, ‘working software’ or ‘digital artefacts’ are too limiting, as the outputs of collaboration projects are often other than software (e.g. research, teaching/education, a service, etc.).

The suggested terms that felt acceptable included ‘desired outcome’, ‘academic output’, ‘the research’ and ‘research outcome’.

Developers

‘Research Technical Professionals’ (RTPs).

Business people

Depending on the context of the individual principles, terms like ‘researchers’ or ‘domain experts’ felt appropriate.

Major current areas of pain for research projects

The original context of the Agile Manifesto, expressed in the Agile Values, was that it was responding to the reality of rigid over-planning and over-documenting, where any change, learning, or other deviation from the original assumptions was seen as disruptive and a risk.

As the reality of research projects in 2024 carries different issues and risks, we wanted to keep these in mind, so that the values address them.

Some of the pain points of research projects highlighted in the discussion:

  • Insufficient documentation (leaving ‘breadcrumbs’ behind)
  • The scale and ambiguity of the research outcomes
  • Parallel working on multiple projects
  • Limited longevity of the projects and teams due to grant-funded work

 

 

Revised Agile Values

Below is the original wording of the Agile Values, followed by the new wording that resulted from the ARC-wide discussion and that, in our view, best represents their application to research projects.

 

Original:

  • Individuals and interactions over processes and tools.
  • Working software over comprehensive documentation.
  • Customer collaboration over contract negotiation.
  • Responding to change over following a plan. 

That is, while there is value in the items on the right, we value the items on the left more.

 

Agile Values for Research Projects:

In these statements, while there is value in the items on the right, we value the items on the left more.

  • Individuals and interactions supported by suitable processes and tools.
  • Working solutions supported by adequate documentation.
  • Collaboratively responding to change supported by agile planning.

 

Discussion Points:

  • To highlight the importance of all elements of delivery (including documentation, tools, processes, planning, etc.), we agreed to move the sentence stressing this point to the start. For the same reason, we replaced the word ‘over’ with ‘supported by’.
  • To denote that the processes and tools are in service of the main outcome, we added the word ‘suitable’.
  • The term ‘comprehensive’ documentation has been updated to ‘adequate’ documentation to reflect that the detail, format, and amount of documentation needs to be fit for purpose rather than a goal or outcome in its own right.
  • ‘Contract negotiation’ in research is different from that in a business setting, being typically less adversarial and restricted to agreement with funders. The concept as evoked in the original values applies more to the process of requirements elicitation and jointly planning the project delivery, so we agreed to merge the values related to contracts and to planning, under the overarching theme of collaborative work. This is to stress that the nature of scoping, planning and delivery of research projects is collaborative and evolving, rather than a fixed result of prior negotiations.

 

Revised Agile Principles

 

For each principle we set out the original wording followed by the new wording that is the result of the ARC-wide discussion and best represents their application to research projects.

Included are also some of the main discussion points to clarify the thought process that went into the updated wording.

 

Principle 1

 

Original:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

 

New Wording for Research Projects:

Our highest priority is early and continuous delivery of valuable outputs through meaningful collaboration.

 

Discussion Points:

  • How to define ‘customer’. Suggestions included ‘domain experts and users’, ‘the world’, ‘collaborators’, and ‘researchers’. In the end, we agreed that highlighting a ‘customer’ in this principle is unnecessary, as the collaboration project is not aimed at only one of the parties (regardless of their name).
  • The word ‘satisfy’ implies that our contribution is to deliver someone else’s requirements, as opposed to actively collaborating on research as equal partners.
  • The words ‘early’ and ‘continuous’ carry the key point of this principle, therefore we made sure they are included in the new version.
  • The output of our projects is not necessarily ‘valuable software’ but it might be research, training, software, digital solutions or data management to enable research, or a combination of the above.

 

 

Principle 2

 

Original:

Welcome changing requirements, even late in development.  Agile processes harness change for the customer’s competitive advantage.

 

New Wording for Research Projects:

Welcome changing requirements, even late in development. Agile processes harness change for the benefit of the collaborative research outcome.

 

Discussion Points:

  • ‘Customer’ and ‘competitive advantage’ do not apply well to research projects, and it is important to define what it is we are trying to maximise.
  • The phrases ‘welcome changing requirements’ and ‘even late in development’ are key to this message, and we made sure they made it into the new version. It is understood that this doesn’t mean indiscriminate implementation of any change (early or late); rather, it means that the ability to assess changes and deal with them appropriately is an expected part of the process.

 

 

Principle 3

 

Original:

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

 

New Wording for Research Projects:

Deliver meaningful outputs frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

 

Discussion Points:

  • ‘Working software’ is not the only possible output, as discussed above. The options considered for this principle were ‘research outcome’ and ‘output’.
  • The term ‘research outcome’ was found to be closer in meaning to the end result of the project, whereas ‘output’ can be a partial deliverable or result of any kind (software functionality, bug fix, result or a partial achievement of a particular research question, documentation of rules/requirements, update of data etc.). As the point of this principle is to stress frequent delivery of interim outputs, the term ‘outputs’ was found best suited.
  • ‘Meaningful’ output has been added to denote the principle of producing an output that is not only a part of the final deliverables (e.g. final documentation – valuable as it is) but crucially steers the project towards better understanding of the requirements or solution and achieving its main goals.

 

 

Principle 4

 

Original:

Businesspeople and developers must work together daily throughout the project.

 

New Wording for Research Projects:

Domain experts and research technology professionals aim to work together daily throughout the project.

 

Discussion Points:

  • ‘Business people’ in research means anyone who brings knowledge of the research domain that we are collaborating on. This can be researchers, post-docs, user representatives (e.g. in cases of co-creation), business representatives (in cases of collaboration with industry), etc. ‘Domain experts’ was agreed to cover all these possibilities.
  • ‘Research Technology Professionals’ covers all professions within ARC (Research Software Engineers, Research Data Scientists, Research Data Stewards, Research Infrastructure Developers, PRISMs) and is a term used by UKRI.
  • Although the RTPs are also domain experts in their own right, the point of this principle is that the technical aspects of the project should be worked on in very close collaboration with the non-technical experts. Therefore, we kept the demarcation of the technical and non-technical experts for this principle, rather than covering them by the term ‘collaborators’ as we do in some of the other principles.
  • The ambition of working together ‘daily’ was challenged in the discussion, as it is a demanding requirement that is rarely practicable. However, as the principles describe the recommended ideal (e.g. team members working full time on a single project, a product owner with good availability and direct accountability), it is useful and important to state this principle in its ideal, undiluted form. Where compromises need to be made (e.g. team members working part time, low availability of collaborators, etc.), it is useful to understand the reasons for those compromises and what the most reasonable adjustments are.
  • Due to the ambitious nature of ‘daily’ communication and collaboration, the word ‘must’ was viewed as too strong and was rephrased as an aim.

 

 

Principle 5

 

Original:

Build projects around motivated individuals. Give them the environment and support they need and trust them to get the job done.

 

Wording for Research Projects – No Change:

Build projects around motivated individuals. Give them the environment and support they need and trust them to get the job done.

 

Discussion Points:

  • There was no challenge to the content or wording of this principle. Arguably, in the context of research, this principle is not only as relevant as in a commercial setting but is also closer to the ethos of the individuals and teams working in this environment.

 

 

Principle 6

 

Original:

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

 

New Wording for Research Projects:

The most efficient and effective method of communication in a research team is synchronous conversation.

 

Discussion Points:

  • In our current working environment, it is not reasonable to assume that teams are physically co-located, so physical face-to-face conversation is frequently not feasible. Synchronous conversation (such as a Teams call, Slack huddle or similar) is the next best option.
  • Synchronous conversation is not everyone’s preferred method of communication, and there are situations where conveying information might be better suited to other media. However, synchronous communication enables richer and more nuanced information exchange, faster and more efficiently, than asynchronous communication, and is therefore essential to establish as a regular communication channel for a team.
  • In the original principles, the ‘development’ team might imply mainly the involvement of technical professionals. However, as the outcome of research projects is often research rather than software, this principle applies to all members of the team, including the domain experts.

 

 

Principle 7

 

Original:

Working software is the primary measure of progress.

 

New Wording for Research Projects:

Research outputs are the primary measure of progress.

 

Discussion Points:

  • The main question was how to define the key output of research projects that can be produced regularly, in the interim, before the end of the project. We chose ‘research outputs’ in favour of ‘research outcome’, as the research outcome is often reached only at the end of the project.

 

 

Principle 8

 

Original:

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

 

New Wording for Research Projects:

Agile processes promote development at a sustainable pace for the whole team, without having to increase intensity to meet deadlines.

 

Discussion Points:

  • It is unreasonable and unnecessary to expect ‘indefinite’ delivery. In contrast to a commercial setting, the duration of collaboration within a research team is often limited by grants, so specifying any particular duration is equally unnecessary. The key message of this principle is that the ways of working should be ‘sustainable’ for all members of the team while the collaboration lasts.
  • As the term ‘sustainable’ is often associated with environmental impact, which is not the point of this principle, we have added ‘pace’ to the original wording for clarity.

 

 

Principle 9

 

Original:

Continuous attention to technical excellence and good design enhances agility.

 

Wording for Research Projects – No Change:

Continuous attention to technical excellence and good design enhances agility.

 

Discussion Points:

  • There is no obvious challenge in translating this principle from a commercial to a research setting and there were no other suggestions raised in the discussion.
  • A frequent discussion in relation to this principle is how it relates to the earlier principles of embracing change and producing outputs early – but this concerns the adoption of agile ways of working themselves, rather than their adaptation to research/academia.

 

 

Principle 10

 

Original:

Simplicity – the art of maximizing the amount of work not done – is essential.

 

Wording for Research Projects – No Change:

Simplicity – the art of maximizing the amount of work not done – is essential.

 

Discussion Points:

  • As the wording ‘maximising the amount of work not done’ is purposely bold and provocative, it sparked a discussion about whether this statement encourages putting insufficient effort into the project. However, the majority agreed that it clearly encourages prioritisation and efficiency, rather than avoiding work that is legitimate and important (whatever the nature of that work might be, including documentation, refactoring, the search for efficient solutions, etc.).

 

 

Principle 11

 

Original:

The best architectures, requirements, and designs emerge from self-organizing teams.

 

Wording for Research Projects – No Change:

The best architectures, requirements, and designs emerge from self-organizing teams.

 

Discussion Points:

  • The principle feels congruent with our understanding of the best ways of working for research projects and didn’t raise any challenge in the discussion.
  • Although not brought up in the discussion, one relevant point might be how to fit the role of the ARC Project Manager into the construct of the flat Agile team (especially for Scrum).

 

 

Principle 12

 

Original:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

 

Wording for Research Projects – No Change:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

 

Discussion Points:

  • There was general agreement and no challenge to this principle in the discussion. Everyone in the team is familiar with team or sprint retrospectives and broadly in agreement about their usefulness.
  • The challenge in this space might be in the details of the practice of retrospectives (or similar techniques) – their frequency, who runs the meeting, and who attends – to make sure they bring the intended benefits, and in the ways the learnings are actively fed back into the team’s working practices.

 

 

 

 

 

 

 

RSE Initiatives – 6 months in

By Amanda Ho-Lyn, on 7 June 2024

What?

At ARC I think it would be fair to say we strive to develop and improve not only on an individual level, but also on a group level. One of the ways we are doing this is through our RSE (Research Software Engineer) Initiatives, which aim to advance and evolve the RSE team to improve collaboration and deliver the best possible software. They involve taking a more objective look at the current processes within our department and determining, by consensus, whether some of these processes need to be updated or whether a new solution should be devised. These are not overnight quick fixes but rather slow and steady progressions in the right direction.

We’ve focussed on 3 main areas: Professional Development, Good Practices and Knowledge Sharing.

As we’ve recently reached the 6 month mark of embarking on this journey, I thought I’d share an overview of each initiative’s aim and how we’re doing.


Professional Development

Notable people: Connor Aird, Stef Piatek & soon to be Paul Smith

This is about understanding how we currently decide to upskill ourselves (in both soft and technical skills), what opportunities there are, and how we can enable and support more and better opportunities.

The way we decided to figure out what people are doing regarding their professional (and to some degree personal) development was by interviewing them.

At the time of writing, almost all the interviews have been completed and the gathered data is being prepared for analysis.

Good Practices

Notable people: Haroon Chughtai, Kimberly Meechan & Emily Dubrovska

This looks at how much we engage with establishing and following best practices with technologies, languages and tools. We also want to determine whether there are areas where we could formalise/document this for future RSEs – a notable example is within the Python Tooling Community.

We decided it would be worth modelling the approaches of the Python Tooling Community and seeing whether there are other language/technology communities within ARC that don’t have best practice guidance but would benefit from it. This was done through a survey.

At the time of writing, the next groups of interest are Web Development and DevOps – both in the stages of requirements gathering/gaining an idea of what guidance could be documented or be built on, as well as looking into how it could best be delivered. 

Knowledge Sharing

Notable people: George Svarovsky & Amanda Ho-Lyn

This is about understanding how we currently share knowledge across the group – particularly project information – and how we can improve our current systems to be more usable and make information more accessible.

We decided to do a survey to see how people felt about how information is currently shared, and also how much they actually felt they knew about different aspects. There was also some discontent about information being posted and shared across a plethora of platforms.

At the time of writing, we have added a mini landing page to the ARC GitHub (note that you must be part of the org to see it) in an attempt to centralise relevant links to various places – this is a living thing and can be updated as necessary. We have also sent out a survey (thank you to those who took the time to complete it) and have plans to act on the results – see my post with more details about this (coming soon).

 

Thanks to everyone who’s been a part of this and continues to help us improve – especially to Asif who is forging the way ahead. And keep an eye out for more surveys! 😁

 

The importance of collaboration: The latest engagement between DiRAC and ARC

By Connor Aird, on 26 April 2024

When time is scarce on a research project, it is important to continuously plan and effectively collaborate with the whole team. A good example of this is the DiRAC project Spontaneous Symmetry Breaking in 3d Models of Fermions, with Prof Simon Hands (PI), which, due to a funding deadline, had to be delivered in 5 weeks. The project aims to explore the phase diagram of a relativistic field theory of fermions using a code base developed by Prof Hands et al., known as thirring-rhmc. However, the collaboration between ARC and Prof Hands covered a much smaller scope.

Aims 

Our aim was to migrate the work of a PhD student (Dr Jude Worthy) into the default branch of the thirring-rhmc code base. Once this was completed, the intention was that some performance improvements could be investigated as part of the project. Jude’s work implemented a higher accuracy but consequently lower performance formulation (Wilson kernel) of something already implemented in the code base (Shamir kernel). This reduction in performance is the reason for the desire to gain some performance improvements. However, from the PI’s initial comments, it was clear that the key aim remained the code migration – “…I’m increasingly convinced it only makes sense to pursue this research program further if an improved formulation is employed, so the Shamir -> Wilson transition as essential”. 

Obstacles 

Several obstacles threatened the success of this project. Development on the original version of thirring-rhmc had continued throughout Jude’s PhD but unfortunately git had not been used to develop the Wilson kernel. Therefore, the two codes had diverged significantly with no clear indication as to what degree. Due to this divergence, it was vital to develop a continuous testing suite to have any chance of success. However, the outputs of thirring-rhmc are statistical in nature and can, whilst remaining correct, vary significantly with only slight changes to the code. Therefore, a lot of domain specific knowledge would be required to design these tests. 

What we did 

This project’s strict time constraints required us to take a methodical approach to planning our work. For each task, we defined a clear definition of done and ensured we understood how that individual piece of work helped progress towards our key aim. Continuously planning our tasks in this way was essential to our success. 

The lack of clarity around what changes in the Wilson kernel were significant meant our first task was to set up reliable unit tests. With these tests in place, we could confidently alter the code and catch any breaking changes we might introduce. Helpfully, some stale tests were already present in the repository. With Simon’s domain knowledge, we were able to update these existing tests to create a working test suite. When these tests failed and highlighted issues we couldn’t solve independently, we were able to quickly reach a solution through regular communication with Simon. Simon’s domain knowledge was an invaluable asset throughout the project. As a bonus, we were able to demonstrate the confidence regular testing gave us when carrying out large refactors and migrations. This will hopefully increase the chances of Simon’s research team continuing to maintain and build upon these tests, therefore preventing the tests going stale again. This is a great example of how close collaboration between RSEs and Researchers can benefit both parties. 
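
For illustration only, a regression test of this kind might look like the sketch below. This is not the actual thirring-rhmc test suite; the file paths and tolerance are hypothetical, and in practice the quantity compared and the tolerance have to be chosen with the domain expert.

```python
# Illustrative regression-test sketch (not the actual thirring-rhmc tests).
# Idea: run the code on a small, fixed configuration, then compare a summary
# value from its output against a stored reference within an agreed tolerance.
import math


def read_final_value(path: str) -> float:
    """Read the last numeric value from a plain-text output file (hypothetical format)."""
    with open(path) as f:
        return float(f.read().split()[-1])


def test_output_matches_reference():
    new = read_final_value("output/observable.dat")                 # hypothetical path
    reference = read_final_value("tests/reference/observable.dat")  # hypothetical path
    # Statistical outputs can legitimately shift with small code changes, so the
    # tolerance must be set with domain knowledge rather than machine precision.
    assert math.isclose(new, reference, rel_tol=1e-6)
```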

This close collaboration and communication with Simon helped to quickly increase our knowledge of the code base and research domain. Due to this better understanding, we identified the likely causes of two known issues with Jude’s code. Most notably, we identified that the inflated value of an input parameter was a key reason for the Wilson kernel’s reduced performance.

Conclusion 

To conclude, RSEs and Researchers work best together when they effectively communicate. Siloing the domain knowledge of these two parties only reduces the chances of success. Our projects are collaborations and can only succeed if we work in this way from the very beginning. 

Randomising Blender scene properties for semi-automated data generation

By Ruaridh Gollifer, on 12 December 2023

Blender is free and open-source software for 3D geometry rendering. Its uses include modelling, simulation, animation, virtual reality applications and, more recently, synthetic dataset generation. This last application is of particular interest in the field of medical imaging, where there is often limited real data available to train machine learning models. By creating large amounts of synthetic but realistic data, we can improve the performance of models in tasks such as polyp detection in image-guided surgery. Synthetic data generation has other advantages: tools like Blender give us more control, and we can generate a variety of ground-truth data, from segmentation masks to optic flow fields, which would be very challenging to generate for real data or would involve extensive, time-consuming manual labelling. Another advantage of this approach is that we can often easily scale up our synthetic datasets by randomising parameters of the modelled 3D geometry. There can, however, be challenges in making the data realistic and representative of the real data.

The Problem 

The aim was to develop an add-on that would help researchers and medical imaging experts determine which ranges of parameter values produce realistic synthetic images. Prior to the project, dataset generation was a laborious process of manually creating scenes in Blender, with parameters changed by hand to introduce variation into the datasets. A more efficient process was needed during the prototyping of synthetic dataset generation to decide which ranges of parameters make sense visually, and therefore, in the future, to more easily extend to other use cases.

What we did 

In collaboration with the UCL Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), research software engineers from ARC have developed a Blender add-on to randomise relevant parameters for the generation of datasets for polyp detection within the colon. The add-on was originally developed to render a highly diverse and (near) photo-realistic synthetic dataset of laparoscopic surgery camera views. To replicate the different camera positions used in surgery as well as the shape and appearance of the tissues, we focused on randomising three main components of the scene: camera transforms (camera orientation and location), geometry and materials. However, we allowed for more flexibility beyond these 3 main groups of parameters, implementing utilities to randomise other user-defined properties. The software also allows the following features: 1) setting the minimum and maximum bounds through an input file, 2) setting a randomisation seed for reproducibility, 3) exporting output parameters for a chosen number of frames to an output file. The add-on includes testing through Pytest, documentation for users and developers, example input and output files and a sample Blender scene.
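
As a minimal sketch of the core idea (not the add-on’s actual code), the snippet below seeds a random number generator, samples the camera location and rotation within user-supplied minimum/maximum bounds, and records the sampled values for each frame. The bounds, function name and output path are illustrative.

```python
# Minimal sketch of randomising camera transforms with a fixed seed (illustrative only).
import json
import random

import bpy  # Blender's Python API; this must be run inside Blender


def randomise_camera(bounds, seed, n_frames, out_path):
    rng = random.Random(seed)  # fixed seed so the dataset is reproducible
    camera = bpy.context.scene.camera
    sampled = []
    for frame in range(n_frames):
        # Draw each component uniformly between its (min, max) bounds
        loc = [rng.uniform(lo, hi) for lo, hi in bounds["location"]]
        rot = [rng.uniform(lo, hi) for lo, hi in bounds["rotation_euler"]]
        camera.location = loc
        camera.rotation_euler = rot
        camera.keyframe_insert(data_path="location", frame=frame)
        camera.keyframe_insert(data_path="rotation_euler", frame=frame)
        sampled.append({"frame": frame, "location": loc, "rotation_euler": rot})
    # Export the sampled parameters so each frame can be reproduced or audited
    with open(out_path, "w") as f:
        json.dump(sampled, f, indent=2)


# Example bounds in the spirit of the add-on's min/max input file (values illustrative)
bounds = {"location": [(-1.0, 1.0), (-1.0, 1.0), (0.5, 2.0)],
          "rotation_euler": [(0.0, 0.2), (0.0, 0.2), (0.0, 6.28)]}
randomise_camera(bounds, seed=42, n_frames=10, out_path="sampled_params.json")
```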

The outcomes 

Version 1.0.0 of the Blender Randomiser is available under a BSD 3-Clause License. The GitHub repo is public; the software can be downloaded and installed from there, with instructions provided on how to use the add-on. Examples of what can be produced in Blender can be found at the UCL Research Data Repository (N.B. these examples were produced manually prior to completion of this project).

Developer notes are also available to allow contributions. 

 

Sofia Minano and Ruaridh Gollifer

k-Plan now available to researchers!

By Sam Cunliffe, on 11 December 2023

One of ARC’s longest-running collaborations is with the Biomedical Ultrasound Group. Over the past three years, we’ve been developing a graphical user interface to simulate ultrasound treatment plans!

The k-Plan Logo

This software is called k-Plan, and licences are now available for sale through UCL’s commercial partner, BrainBox (who also sell ultrasound transducers).

Screenshot of the k-Plan GUI

If you’re interested in medical ultrasound and think this software might help you, you can read the full UCL press release, or you can see some more snapshots of k-Plan in action.

The people behind the work…

Our collaboration is managed and led by Bradley Treeby. As well as me, there’s a full roster of research software engineers who’ve worked hard at various times over the last three years to make this happen:

  • Panayiotis Georgiou, ex-UCL now ARM.
  • Timothy Spain, ex-UCL now NERSC, 🇳🇴.
  • Ilektra Christidi, ARC, UCL.
  • Alessandro Felder, ARC, UCL.
  • Orod Razeghi, ex-UCL now University of Cambridge.
  • Idil Ozdemir, ARC, UCL.
  • Connor Aird, ARC, UCL.

We also have collaborators from the Brno University of Technology who work behind the scenes on the middleware and back-end of k-Plan and run the planning simulations in the cloud.

Simulating light propagation through matter.

By Sam Cunliffe, on 31 October 2023

Observing how light interacts with materials allows us to develop non-invasive medical imaging techniques that rely on these interactions to assemble an image or infer an appropriate diagnosis.

Light interacts with materials in many different ways. One of the most commonly observed interactions is dispersion, which causes white light to split into individual colours, creating phenomena like rainbows (light from the sun dispersing through raindrops). Another commonly observed interaction is refraction, which causes light to change direction as it passes between two materials, and is responsible for straight objects like straws appearing disjointed when placed in water. To completely describe what is going on in these interactions, we have to use a system of equations known as Maxwell’s equations, together with some additional parameters that describe the particular material(s) that the light is interacting with. In their most general form, Maxwell’s equations are very complex, but they have the advantage that almost all materials and interactions can be modelled by them. Solving these equations is, in general, impossible to do with pen and paper, so we need software to do this for us.
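
For reference, the equations referred to here, in their macroscopic form, are:

```latex
\nabla \cdot \mathbf{D} = \rho_f, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{H} = \mathbf{J}_f + \frac{\partial \mathbf{D}}{\partial t}
```

The material-specific parameters mentioned above enter through the constitutive relations, e.g. \(\mathbf{D} = \varepsilon \mathbf{E}\) and \(\mathbf{B} = \mu \mathbf{H}\), where the permittivity and permeability describe the medium.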

Software like this has a wide variety of applications in biomedical optics; notably optical coherence tomography (non-invasive medical imaging of the eye), multiphoton microscopy, and wavefront shaping. For example; we can use this software to model light propagating in the retina: simulating a retina scan. Then we can perform a retina scan for a patient in real life, and use our simulation to better understand the scan. Retinal scans often hint at a particular change to the retina, without being definitive, in the early stages of disease. We can use our simulation to test what types of changes to a retina can lead to observed signatures in an image and therefore help in achieving a diagnosis.

The Problem

In collaboration with the UCL Medical Physics and Biomedical Engineering department, developers from ARC have worked to open up a legacy C and MATLAB library which simulates light propagating through matter. This software was initially developed as part of a PhD thesis approximately 20 years ago and has been continuously developed since then. However, the need to rapidly answer research questions led to the code becoming less sustainable and harder for others to use. Whilst the core functionality was already there, the library needed updating to a more modern language and aligning with the FAIR4SW principles.

What we did

The aim of the project was to provide users with a program to which they can give custom input describing the material they want to simulate, and from which they receive an output they can use in further analysis. We wanted users not to have to worry about the internal workings of the software: they only have to download the library code, build and install it once, and then be ready for future analyses. We used modern build tools to standardise the build and installation of the software, and we aimed to make our instructions as straightforward and operating-system-independent as possible. We also set up automated testing of the software and wrote example scripts that users can modify to easily create input files in the correct format.

The outcomes

Version 1.0.1 of the Time Domain Maxwell Solver (TDMS) is now available under a GPL-3.0 license. You can download it from GitHub, and install and run it on all operating systems. The project has a public-facing website and a growing collection of examples. We also have developer documentation so anyone can contribute in the future.

TDMS 1.0.1 now has a number of new features, including the option to switch between different solver methods (how the simulation is performed), the ability to select custom regions over which to compute (to avoid wasting computation time), and the ability to select different techniques for extracting output information through interpolation.

The ARC software engineers were a joy to work with. They brought knowledge of modern software engineering practice and quickly understood the code, and the underlying physics, as required to very effectively re-engineer the code. This collaboration with ARC will hopefully allow for a new range of users to access TDMS and significantly increase its impact.

Will Graham and Sam Cunliffe