
Centre for Advanced Research Computing


ARC is UCL's research, innovation and service centre for the tools, practices and systems that enable computational science and digital scholarship


Archive for the 'Medical imaging' Category

Workshop on Open-Source Software for Surgical Technologies

By m.xochicale, on 20 December 2024

To champion the creation of sustainable, robust, and equitable digital healthcare systems that prevent the perpetuation of healthcare inequalities, ARC researchers took the lead in organising the second workshop on Open-Source Software for Surgical Technologies at the Hamlyn Symposium on Medical Robotics on June 28th, 2024.

The workshop focused on a key question: how can we transform open-source software libraries into sustainable, long-term supported tools that are translatable to clinical practice? To address this, the event brought together engineers, researchers, and clinicians from academia and industry to present their work, discuss current progress, challenges, and trends, and lay the foundation for building a collaborative community around Open-Source Software Innovations in Surgical, Medical and AI Technologies.

In this post, we are excited to share recordings of our exceptional lineup of speakers and celebrate the poster awardees from the workshop, along with Zenodo links to other posters. The talks and posters spanned a variety of topics, including certification, commercialisation, and case studies of open-source software in research and industry scenarios. This workshop highlighted the profound impact of open-source software in advancing surgical technologies and medical innovation.

Speakers

Watch all the recorded talks on this YouTube Playlist.

Poster awardees
Congratulations to all the awardees for their outstanding contributions to advancing innovation in surgical technologies!

Best Poster Award
Martin Huber et al. from King’s College London “LBR-Stack: ROS 2 and Python Integration of KUKA FRI for Med and IIWA Robots”
GitHub: https://github.com/lbr-stack/lbr_fri_ros2_stack/
arXiv: https://arxiv.org/abs/2311.12709

Runner-Up Awards (Three-Way Tie)

  • Keisuke Ueda et al. from Medical DATAWAY, “Automated Surgical Report Generation Using In-context Learning with Scene Labels from Surgical Videos”. Poster on Zenodo: https://zenodo.org/records/12518729
  • Mikel De Iturrate Reyzabal et al. from King’s College London, “PyModalSurgical: An image-space modal analysis library for surgical videos: generating haptic and visual feedback”. Poster on Zenodo: https://zenodo.org/records/12204075
  • Ewald Ury et al. from KU Leuven, “Markerless Augmented Reality Guidance System for Maxillofacial Surgery”

See the other posters:

  • Peter Kazanzides et al., “dVRK-Si: The Next Generation da Vinci Research Kit”
  • Reza Haqshenas et al., “OptimUS: an open-source fast full-wave solver for calculating acoustic wave propagation with applications in biomedical ultrasound”

Get in touch

We can’t wait to see you again next year!
Warm regards, Eva, Stephen, & Miguel


#HSMR24 #HamlynSymposium2024 #Healthcare #OpenSource #ArtificialIntelligence #SurgTech #MedTech #AITech

Randomising Blender scene properties for semi-automated data generation

By Ruaridh Gollifer, on 12 December 2023

Blender is free and open-source software for 3D geometry rendering. Its uses include modelling, simulation, animation, virtual reality applications and, more recently, synthetic dataset generation. This last application is of particular interest in medical imaging, where there is often limited real data available for training machine learning models. By creating large amounts of synthetic but realistic data, we can improve the performance of models on tasks such as polyp detection in image-guided surgery. Synthetic data generation has other advantages: tools like Blender give us more control, and we can generate a variety of ground-truth data, from segmentation masks to optical flow fields, which for real data would be very challenging to produce or would require extensive, time-consuming manual labelling. Another advantage of this approach is that we can often easily scale up our synthetic datasets by randomising parameters of the modelled 3D geometry. There are, however, challenges in making the data realistic and representative of real data.

The Problem 

The aim was to develop an add-on to help researchers and medical imaging experts determine which ranges of parameter values produce realistic synthetic images. Prior to the project, dataset generation was laborious: scenes were created manually in Blender, with parameters changed by hand to introduce variation into the datasets. A more efficient process was needed during the prototyping of synthetic dataset generation, both to decide which parameter ranges make sense visually and, in future, to extend the approach more easily to other use cases.

What we did 

In collaboration with the UCL Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), research software engineers from ARC developed a Blender add-on that randomises the parameters relevant to generating datasets for polyp detection within the colon. The add-on was originally developed to render a highly diverse and (near) photo-realistic synthetic dataset of laparoscopic surgery camera views. To replicate the different camera positions used in surgery, as well as the shape and appearance of the tissues, we focused on randomising three main components of the scene: camera transforms (camera orientation and location), geometry, and materials. However, we allowed for more flexibility beyond these three main groups of parameters by implementing utilities to randomise other user-defined properties. The add-on also provides the following features: 1) setting the minimum and maximum bounds of each parameter through an input file, 2) setting a randomisation seed for reproducibility, and 3) exporting the output parameters for a chosen number of frames to an output file. The add-on includes tests written with Pytest, documentation for users and developers, example input and output files, and a sample Blender scene.
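The core pattern behind these features (bounded sampling, a fixed seed, per-frame export) can be sketched in plain Python, independently of Blender. This is a simplified illustration, not the add-on's actual code: the function and parameter names here are hypothetical, and the real add-on sets these values on Blender scene objects via the `bpy` API.

```python
import json
import random

def randomise_parameters(bounds, n_frames, seed=42):
    """Sample one value per parameter for each frame, uniformly
    within [min, max] bounds. A fixed seed makes the sampled
    dataset reproducible."""
    rng = random.Random(seed)
    frames = []
    for _ in range(n_frames):
        # One dictionary of sampled parameter values per rendered frame.
        frames.append(
            {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        )
    return frames

# Hypothetical bounds, analogous to the add-on's input file:
bounds = {
    "camera.location.x": (-0.5, 0.5),
    "camera.rotation.z": (0.0, 3.14159),
    "material.roughness": (0.1, 0.9),
}

frames = randomise_parameters(bounds, n_frames=3, seed=0)

# Export the sampled parameters per frame, analogous to the output file:
print(json.dumps(frames, indent=2))
```

Re-running with the same seed yields identical parameter values, which is what makes a randomised dataset regenerable and auditable.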

The outcomes 

Version 1.0.0 of the Blender Randomiser is available under a BSD 3-Clause License. The GitHub repository is public; the software can be downloaded and installed from there, with instructions provided on how to use the add-on. Examples of what can be produced in Blender can be found in the UCL Research Data Repository (N.B. these examples were produced manually, prior to completion of this project).

Developer notes are also available to allow contributions. 

 

Sofia Minano and Ruaridh Gollifer

k-Plan now available to researchers!

By Sam Cunliffe, on 11 December 2023

One of ARC’s longest-running collaborations is with the Biomedical Ultrasound Group. Over the past three years, we’ve been developing a graphical user interface to simulate ultrasound treatment plans!

The k-Plan Logo

This software is called k-Plan, and licences are now available for sale through UCL’s commercial partner, BrainBox (who also sell ultrasound transducers).

Screenshot of the k-Plan GUI

If you’re interested in medical ultrasound, and think this software might help you: you can read the full UCL press release, or you can see some more snapshots of k-Plan in action.

The people behind the work…

Our collaboration is managed and led by Bradley Treeby. As well as me, there’s a full roster of research software engineers who’ve worked hard at various times over the last three years to make this happen:

  • Panayiotis Georgiou, ex-UCL now ARM.
  • Timothy Spain, ex-UCL now NERSC, 🇳🇴.
  • Ilektra Christidi, ARC, UCL.
  • Alessandro Felder, ARC, UCL.
  • Orod Razeghi, ex-UCL now University of Cambridge.
  • Idil Ozdemir, ARC, UCL.
  • Connor Aird, ARC, UCL.

We also have collaborators from the Brno University of Technology who work behind the scenes on the middleware and back-end of k-Plan and run the planning simulations in the cloud.