Creating Custom Font Symbols
By Amanda Ho-Lyn, on 5 June 2024
Why?
There are various reasons why researchers may want to add custom symbols to a font: they may want to represent a new concept, or add characters from historical sources that are not yet available in any font. Being able to do so makes their work more accessible to colleagues and the public. In the case of ancient languages, there often aren’t fonts that support these glyphs, as they are still being researched and aren’t widely used outside of their respective communities. There are also arguments to be made for using custom glyphs in art and design.
How?
Figure out your glyph codes
The Unicode standard facilitates this through the Private Use Areas (PUAs), ranges of code points that will never be assigned official characters: the Basic Multilingual Plane reserves U+E000–U+F8FF for private use, with two further ranges in planes 15 and 16. Assigning PUA codes to your custom glyphs gives each glyph a stable, universal ID, allowing the glyphs to be packaged and used more widely. This is especially important when dealing with more than one font that makes use of the PUAs: choosing codes deliberately limits clashes and allows the font to be used across different projects that need the custom glyphs.
If you have an existing font and want to check whether it already uses some PUA codes, open the font in a charset checker and specify the Private Use Area block (you can check for other things at the same time). If you’re free and clear, then you have no restrictions when it comes to picking codes within the PUA block.
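If you’d rather script this check, a small Python sketch using fontTools can list any Basic Multilingual Plane PUA code points a font already maps (the file name below is a placeholder, and this assumes fontTools is installed):

```python
# Minimal sketch: list PUA code points already mapped by a font.
# Requires `pip install fonttools`; "MyFont.otf" is a placeholder file name.
from fontTools.ttLib import TTFont

font = TTFont("MyFont.otf")
cmap = font.getBestCmap()  # dict of code point -> glyph name

pua = sorted(cp for cp in cmap if 0xE000 <= cp <= 0xF8FF)
print([f"U+{cp:04X}" for cp in pua])  # an empty list means the PUA block is free
```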
Draw your glyphs
You’ll need to design your glyphs with a specialist tool; since we’re working with custom Unicode ranges, support for assigning arbitrary code points is a specific bit of functionality to look for. I used Glyphr Studio, a free online tool, and it worked really well. It’s a good idea to know how many glyphs you’ll need to draw, as you have to specify a range up front (though it can be updated later). If you can work from references, that will make this process much simpler. You’ll also want to spend some time getting used to the interface if you’ve not used a font designer before. With Glyphr Studio you define the outline points and it automatically fills in the space between. Do also remember to define a height (typically automatic) and a width (not automatic) so that your glyphs display properly.
You can fine-tune the curvature, width, etc. of the points, and once you have the basics down you can always come back to tweak the glyphs until you’re satisfied. This is definitely the most time-consuming step, but it can be fun once you’re accustomed to the interface.
Export
Export your file as an .otf (OpenType Font) and fill out the relevant metadata as you see fit. Feel free to keep a copy of the project file from Glyphr Studio too, but you can always edit the .otf file directly and it should contain all the relevant data. This file can be shared, stored and included in projects as needed.
Including it in your project
To include your snazzy new .otf file in a web project, you can just add the file to the codebase, preferably under static/fonts. Then, you can import it into your main CSS file as follows:
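The exact rule depends on your setup, but a minimal @font-face declaration might look something like this (the family name, class name and file name below are placeholders):

```css
/* Register the custom font; adjust the path and family name for your project. */
@font-face {
  font-family: "MyCustomGlyphs";
  src: url("/static/fonts/my-custom-glyphs.otf") format("opentype");
}

/* Apply it wherever the custom symbols are used. */
.custom-symbol {
  font-family: "MyCustomGlyphs", sans-serif;
}
```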
To utilise it, you can implement a symbol picker (particularly helpful if the symbols don’t have a letter equivalent), add it to a special character input in your text editor, or assign it as you would any other font.
Et voilà!
Customising Rich Text Editors
By Amanda Ho-Lyn, on 5 June 2024
Why?
Customising rich text editors within web apps offers significant benefits, particularly for researchers in Digital Humanities. While basic text editors fulfil essential functions, rich text editors (RTEs) incorporate features such as rich text formatting and customisable styling, allowing for clearer and more engaging presentation of textual information. Although RTEs already provide greater benefits than regular text editors, customising them further can greatly improve the user experience and the effectiveness of communication.
RTEs can be particularly useful in situations where researchers are working with sources where meaning is encoded in the visual layout. They enable researchers to communicate broader and more intricate ideas effectively. Features like embedded multimedia content (photos, videos, links), interactive elements (tables, footnotes), and collaborative editing tools (history, comments) can facilitate the expression and exploration of complex concepts within the text itself. These can also include adding functionality for symbols or text decorators not widely used – helpful with mathematics and ancient languages. See my post on creating custom font symbols for more details.
Customisation can also foster more connections and cohesiveness within the website or application. Integrated features such as cross-referencing, hyperlinking, and version control enhance the interconnectedness of textual content, enabling users to navigate seamlessly between related information and fostering a more immersive and engaging user experience.
TLDR: Basic text editors are fine for simple input, Rich Text Editors are better for things with more variety and customising them can build on the functionality to enrich the quality of research done.
How?
This somewhat depends on the structure of your project, but most likely it will be through a CDN, an npm install, or directly adding the package to your codebase (in the case of JavaScript). If you have a Python-based project, odds are you can install it through pip.
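For instance, those routes might look like this (the package names are illustrative examples rather than a recommendation; check what your chosen editor actually publishes):

```sh
# JavaScript project: install from npm (CKEditor 4 publishes the `ckeditor4` package)
npm install ckeditor4

# Python/Django project: a wrapper package installable via pip
pip install django-ckeditor
```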
From experience…
I recommend CKEditor 4, as it has worked well in both of the implementations described below and is very stable and well-established, meaning there are many plugins already written that work with it. You can, of course, write your own for even more customised functionality! The 5th version is likely also a good option, but may require a different licence. There are different degrees of customisability with CKEditor, from using the basic setups to enabling any and all plugins you can find, which makes it very adaptable to a variety of projects. You could even overhaul the entire thing if you really wanted. It uses a config.js file by default, but each initialisation can also be customised on an individual basis.
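As a rough sketch of that per-instance customisation (this assumes CKEditor 4 loaded from its CDN; the element id, plugin selection and toolbar below are illustrative rather than the exact builds described below):

```html
<!-- Load CKEditor 4 from the official CDN (pin whichever 4.x preset and version you need). -->
<script src="https://cdn.ckeditor.com/4.22.1/standard-all/ckeditor.js"></script>

<textarea id="notes"></textarea>

<script>
  // Per-instance configuration: overrides the defaults in config.js for this editor only.
  CKEDITOR.replace("notes", {
    extraPlugins: "mentions,image2",   // switch on extra plugins for this instance
    removePlugins: "elementspath",     // drop UI you don't need
    toolbar: [
      { name: "basicstyles", items: ["Bold", "Italic", "Underline"] },
      { name: "links", items: ["Link", "Unlink"] },
      { name: "insert", items: ["Image", "Table"] },
    ],
  });
</script>
```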
For example, here are two different builds of the editor:
The first is quite simple, though not the most basic, and has extra functionality that isn’t immediately obvious: @-mentions and image browsing. The second is more populated (more plugins enabled) and also has some entirely custom functions (the Vs and Us on the second line) for adding diacritics through Unicode. It also has @-mentions, which have been implemented with more complexity due to the greater breadth of items referenced. These were each implemented through different languages (one through Python and one through JS), and whilst the way of configuring the build was different in each project, the results are stable and reliable whilst remaining flexible.
We have also used QuillJS, but this was less well established, and when it updated to a new major version the old version stopped working (not great for a research project that is almost entirely text content).
TLDR: There are tonnes of options out there and it’s not always the best idea to go for the newest, most cutting-edge one, particularly if you need something reliable and easy to maintain. I think CKEditor 4 is a solid choice for the next few years, since it’s stable and well established while still being flexible and customisable to your needs.
The importance of collaboration: The latest engagement between DiRAC and ARC
By Connor Aird, on 26 April 2024
When time is scarce on a research project, it is important to continuously plan and effectively collaborate with the whole team. A good example of this is the DiRAC project Spontaneous Symmetry Breaking in 3d Models of Fermions with Prof Simon Hands (PI), which, due to a funding deadline, had to be delivered in five weeks. The project aims to explore the phase diagram of a relativistic field theory of fermions using a code base developed by Prof Hands et al., known as thirring-rhmc. However, the collaboration between ARC and Prof Hands covered a much smaller scope.
Aims
Our aim was to migrate the work of a PhD student (Dr Jude Worthy) into the default branch of the thirring-rhmc code base. Once this was completed, the intention was to investigate some performance improvements as part of the project. Jude’s work implemented a higher-accuracy but consequently lower-performance formulation (the Wilson kernel) of something already implemented in the code base (the Shamir kernel); this reduction in performance is why performance improvements were desirable. However, from the PI’s initial comments, it was clear that the key aim remained the code migration – “…I’m increasingly convinced it only makes sense to pursue this research program further if an improved formulation is employed, so the Shamir -> Wilson transition as essential”.
Obstacles
Several obstacles threatened the success of this project. Development on the original version of thirring-rhmc had continued throughout Jude’s PhD, but unfortunately Git had not been used to develop the Wilson kernel. The two codes had therefore diverged significantly, with no clear indication to what degree. Due to this divergence, it was vital to develop a continuous testing suite to have any chance of success. However, the outputs of thirring-rhmc are statistical in nature and can, whilst remaining correct, vary significantly with only slight changes to the code, so a lot of domain-specific knowledge would be required to design these tests.
What we did
This project’s strict time constraints required us to take a methodical approach to planning our work. For each task, we defined a clear definition of done and ensured we understood how that individual piece of work helped progress towards our key aim. Continuously planning our tasks in this way was essential to our success.
The lack of clarity around what changes in the Wilson kernel were significant meant our first task was to set up reliable unit tests. With these tests in place, we could confidently alter the code and catch any breaking changes we might introduce. Helpfully, some stale tests were already present in the repository. With Simon’s domain knowledge, we were able to update these existing tests to create a working test suite. When these tests failed and highlighted issues we couldn’t solve independently, we were able to quickly reach a solution through regular communication with Simon. Simon’s domain knowledge was an invaluable asset throughout the project. As a bonus, we were able to demonstrate the confidence regular testing gave us when carrying out large refactors and migrations. This will hopefully increase the chances of Simon’s research team continuing to maintain and build upon these tests, therefore preventing the tests going stale again. This is a great example of how close collaboration between RSEs and Researchers can benefit both parties.
This close collaboration and communication with Simon quickly increased our knowledge of the code base and research domain. With this better understanding, we identified the likely causes of two known issues with Jude’s code. Most notably, we identified that the inflated value of an input parameter was a key reason for the Wilson kernel’s reduced performance.
Conclusion
To conclude, RSEs and Researchers work best together when they effectively communicate. Siloing the domain knowledge of these two parties only reduces the chances of success. Our projects are collaborations and can only succeed if we work in this way from the very beginning.
Research Integrity in an AI-Enabled World
By Samantha Ahern, on 5 April 2024
Over the last 15 months there has been much debate, hype and concern relating to the capabilities of tools and platforms leveraging Large Language Models (LLMs) and media generators, broadly termed Generative AI. The predominant narrative in Higher Education has been around the perceived threat to academic integrity and the associated value of degrees. As such, much of the focus and discussion has been on taught students, assessment design and “AI-proof” assessment. This has been coupled with concerns about the inability to reliably detect generated content, and the disproportionate number of false positives when text written by non-native English speakers is submitted to various detection platforms.
However, despite the proliferation of Generative AI-enabled research tools and platforms, numerous workshops offering increased research output productivity, and publications asking authors to declare whether or not these tools were used in producing outputs, there has been limited discussion in relation to staff and research integrity.
Coupled with the publication of initial findings from a study on staff use of these tools by Watermeyer, Lanclos and Phipps, which included use of the tools to complete “little things like health and safety stuff, or ethics, or summarizing reports”, and the potential safety risks from fine-tuning models reported in the Stanford University policy briefing Safety Risks from Customizing Foundation Models via Fine-Tuning, a workshop focusing on the interplay of Generative AI with research integrity and ethics was proposed as an AIUK Fringe event.
Research Integrity in an AI-Enabled World took place on Monday 25th March 2024. The aim was to explore how we think Generative AI-enabled tools and platforms could and should impact the research process, and what the integrity and ethics implications are. The eventual aim is to produce a policy white paper.
The event was organised as a series of thought-provoking talks in the morning, followed by a world-cafe style session in the afternoon. The event was held under the Chatham House Rule to enable open and frank discussion of the topic and arising issues.
The first set of talks predominantly focused on ethical issues. There were discussions on authorship, and the nature of authorship where multiple actors are involved (e.g. training data creators, platform developers and prompters), as well as on bias in image generation reinforcing misconceptions and stereotypes. The session culminated in a talk on the University of Salford’s evolving approach to Generative AI and research ethics.
The second set of talks focused on the current capabilities, limitations and implications of using Generative AI-enabled tools in the research pipeline, predominantly focusing on qualitative analysis. This session included a discussion around evidence synthesis and the need to find more efficient methods whilst maintaining reliability and a breadth of knowledge, comparing “traditional” machine learning approaches with the use of large language models. The enhanced capabilities of Computer Aided Qualitative Data Analysis Systems and the implications for methodological approaches were also introduced and discussed. The session concluded with a talk from Prof Jeremy Watson about the work currently being undertaken by the UK Committee on Research Integrity’s AI working group, of which he is a member. Key themes currently under consideration by UKCORI are:
- Governance
- Roles and Responsibilities
- Skills and Training
- Public Understanding and Expectations
- Attribution and Ownership – IP, etc.
- Understanding Data Inputs and Models
- Need for Research in AI and Integrity
During the world-cafe session participants addressed the following questions:
- What do we mean by Research Integrity in an AI-Enabled Research Environment?
- Are there degrees of Research Integrity based on discipline and how embedded AI use is in the research process?
- What are the key ethical and legal considerations?
Including the following participant proposed questions:
- Generative AI is extremely good at in-filling uncertainty, where details of images become filled with bias. Should the responsibility of bias be equally on a prompter who enables this by omission?
- Recalibration of government and private funded RI in AI? Isn’t this the foundation of biases for RI?
Outputs from the world cafe session will be analysed over the next few weeks, and workshop participants were invited to contribute to the development of workshop outputs.
Key themes that emerged from the event include:
- Transparency
- Criticality
- Responsibility
- Fitness for purpose
- Data protection and privacy
- Digital divide – privilege and harms
- Training – education
The workshop was well received, with participants rating their overall experience of the event as 4.71 out of 5.
The speaker sessions were rated as very good by over 70% of participants, with the world cafe mentioned as a highlight of the event.
As the proposer, organiser and host of the event, I can’t help but still wonder:
- Can we ethically and with genuine integrity use tools which are fundamentally ethically flawed?
- Why are we accepting of these issues?
- How should we be pushing back?
I will leave you with these words from Arundhati Roy, with which I opened the event:
Role models, outreach and changing the future of STE(A)M
By Samantha Ahern, on 20 December 2023
STE(A)M is for everyone, so why isn’t that reflected in the demography of STE(A)M professionals?
Sadly, when we look at the prevailing media representation and social narrative around STE(A)M, it’s not surprising. Which is why events like the one I recently supported at Samuel Whitbread Academy, organised by STEMPoint, are incredibly important.
Over the course of the day I supported girls from four different secondary schools undertaking two computing challenges, one focused on coding, the other on robotics. In addition to the challenges, the participants also had the opportunity to hear from the STEM Ambassadors, most of them women, supporting the event about our careers, why and how we got into STEM.
This was a key part of the event as it enabled them to learn more about the different types of STE(A)M careers available, pathways into those careers and the diversity of the people working in them.
It is often said that you can’t be what you can’t see. You also can’t be what you don’t know about. This is why I have been a STEM Ambassador since 2015.
During this time I have supported a number of events requesting support via the STEM Learning platform, featured in a STEMettes Christmas Calendar, been profiled for BCS Women and designed and delivered some RI Masterclasses. On a more subtle level I use my full first name on publications and when public speaking, re-emphasising that I am a woman in STEM.
The event at Samuel Whitbread Academy was a lot of fun; I always enjoy seeing students engage with a STEM challenge, and especially seeing young women grow in confidence in their abilities and come to see computing as a space for them.
Unsurprisingly I spent most of the day supporting the robotics challenge. It was fantastic to see them rise to the challenge, and in some cases go beyond the extension task. I hope to support further events in 2024.
I strongly recommend to my colleagues that they become STEM Ambassadors; it really does make a big difference to the participants.
Randomising Blender scene properties for semi-automated data generation
By Ruaridh Gollifer, on 12 December 2023
Blender is a free and open-source software package for 3D geometry rendering. Uses include modelling, simulation, animation, virtual reality applications and, more recently, synthetic dataset generation. This last application is of particular interest in the field of medical imaging, where there is often limited real data available to train machine learning models. By creating large amounts of synthetic but realistic data, we can improve the performance of models in tasks such as polyp detection in image-guided surgery. Synthetic data generation has other advantages: tools like Blender give us more control, and we can generate a variety of ground truth data, from segmentation masks to optic flow fields, which for real data would be very challenging to produce or would involve extensive, time-consuming manual labelling. Another advantage of this approach is that we can often easily scale up our synthetic datasets by randomising parameters of the modelled 3D geometry. There can, however, be challenges in making the data realistic and representative of the real data.
The Problem
The aim was to develop an add-on that would help researchers and medical imaging experts determine which range of parameter values produces realistic synthetic images. Prior to the project, dataset generation involved a more laborious process of manually creating scenes in Blender, with parameters changed by hand to introduce variation into the datasets. A more efficient process was needed during the prototyping of synthetic dataset generation to decide what range of parameters makes sense visually, and therefore, in the future, to more easily extend to other use cases.
What we did
In collaboration with the UCL Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), research software engineers from ARC have developed a Blender add-on to randomise relevant parameters for the generation of datasets for polyp detection within the colon. The add-on was originally developed to render a highly diverse and (near) photo-realistic synthetic dataset of laparoscopic surgery camera views. To replicate the different camera positions used in surgery as well as the shape and appearance of the tissues, we focused on randomising three main components of the scene: camera transforms (camera orientation and location), geometry and materials. However, we allowed for more flexibility beyond these three main groups of parameters, implementing utilities to randomise other user-defined properties. The software also offers the following features: 1) setting the minimum and maximum bounds through an input file, 2) setting a randomisation seed for reproducibility, and 3) exporting output parameters for a chosen number of frames to an output file. The add-on includes testing through Pytest, documentation for users and developers, example input and output files and a sample Blender scene.
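To give a flavour of what randomising a scene property involves (this is a generic sketch using Blender’s Python API, not the add-on’s actual code; the bounds and object name are placeholders):

```python
# Sketch: randomise the camera location within per-axis bounds, with a fixed
# seed for reproducibility, keyframing one random pose per frame.
import random

import bpy
from mathutils import Vector

random.seed(42)  # the same seed reproduces the same sequence of poses

loc_bounds = {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "z": (1.0, 2.0)}  # placeholder bounds
camera = bpy.data.objects["Camera"]

for frame in range(1, 11):
    camera.location = Vector(
        (
            random.uniform(*loc_bounds["x"]),
            random.uniform(*loc_bounds["y"]),
            random.uniform(*loc_bounds["z"]),
        )
    )
    camera.keyframe_insert(data_path="location", frame=frame)
```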
The outcomes
Version 1.0.0 of the Blender Randomiser is available under a BSD 3-Clause License. The GitHub repo is public; the software can be downloaded and installed from there, with instructions provided on how to use the add-on. Examples of what can be produced in Blender can be found at the UCL Research Data Repository (N.B. these examples were produced manually, prior to completion of this project).
Developer notes are also available to allow contributions.
—
k-Plan now available to researchers!
By Sam Cunliffe, on 11 December 2023
One of ARC’s longest-running collaborations is with the Biomedical Ultrasound Group. Over the past three years, we’ve been developing a graphical user interface to simulate ultrasound treatment plans!
This software is called k-Plan, and licences are now available for sale through UCL’s commercial partner, BrainBox (who also sell ultrasound transducers).
If you’re interested in medical ultrasound, and think this software might help you: you can read the full UCL press release, or you can see some more snapshots of k-Plan in action.
The people behind the work…
Our collaboration is managed and led by Bradley Treeby. As well as me, there’s a full roster of research software engineers who’ve worked hard at various times over the last three years to make this happen:
- Panayiotis Georgiou, ex-UCL now ARM.
- Timothy Spain, ex-UCL now NERSC, 🇳🇴.
- Ilektra Christidi, ARC, UCL.
- Alessandro Felder, ARC, UCL.
- Orod Razeghi, ex-UCL now University of Cambridge.
- Idil Ozdemir, ARC, UCL.
- Connor Aird, ARC, UCL.
We also have collaborators from the Brno University of Technology who work behind the scenes on the middleware and back-end of k-Plan and run the planning simulations in the cloud.
First Julia workshop at ARC
By David Pérez-Suárez, on 15 November 2023
Last Friday (the 10th of November) we ran our first ever Julia workshop. After years of having an expert in our team – Mosè – who has been introducing the rest of the team to this wonderful language and has even convinced some collaborators to use it on their projects, we’ve taken the leap and taught it to the UCL community.
This first workshop was limited to a small number of learners (10-12), seven of whom attended (most of them physicists!). Our team of three (myself instructing, with Mosè and Tuomas helping – also all physicists 😅) made for a very positive learner experience.
Originally, we were going to use the Carpentries Julia lesson available in the incubator. However, Mosè and I decided against it, as the expected prior knowledge was higher than what we were aiming for. Therefore, we created our own lesson!
Our lesson started with the basics: different types of numbers, strings, and how they all fit into Julia’s family of types. We introduced some of the quirks Julia surprises you with when you come from a different language. This was key in our lesson! We started by writing a function as if it were Python, which we expected to be the most familiar style for our cohort. From there, we introduced new concepts and syntax to make the code more “julianic” (I’ve come up with that term, so it may not be the one used by the Julia community), as in the sketch below. We covered the basics (types, functions, conditionals, loops and plotting) during the morning session. After lunch, we introduced how to use other libraries to solve polynomials and ordinary differential equations. We even introduced unit testing, had time to learn how to work with CSV files using DataFrames, and gave a quick overview of Pluto.
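To give a flavour of that Python-to-“julianic” progression (an illustrative example, not the actual lesson material):

```julia
# Step 1: a function written the way a Python programmer might write it.
function normalise(xs)
    result = []
    for x in xs
        push!(result, x / maximum(xs))
    end
    return result
end

# Step 2: a more "julianic" version: a type-annotated one-liner using broadcasting.
normalise2(xs::AbstractVector{<:Real}) = xs ./ maximum(xs)

println(normalise([1, 2, 4]))   # Any[0.25, 0.5, 1.0]
println(normalise2([1, 2, 4]))  # [0.25, 0.5, 1.0]
```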
During the preparation of the material and the class, I was constantly supported by Mosè, bouncing lots of ideas and suggestions back and forth. We even found a bug in one of the libraries we were going to use, which the maintainers fixed instantly after Mosè reported it.
The class went smoothly. We encountered some problems with the installation of Julia and some unexpected slowness when installing libraries (we reported this after the workshop, and it was also fixed straight away!). This is some of the feedback we received at the end of the day:
- Great course, learned a lot.
- The course has been great. The pace is good and it allows us to ask any questions we have.
- Comparison’s to Python really helped me appreciate the advantages of Julia. Paper plane example was great.
- Very good course, covered all the right topics for a 1-day intro session.
Personally, I don’t remember a class that has gone so well! There were very few difficulties, we covered everything we had planned, and we answered very interesting questions from our learners. It may have been due to the small number of learners, their previous programming experience, the similar background across all of them, or maybe it’s because Julia is easy to learn 😉. Whatever the reason, I really want to repeat it, with a larger class and a more varied mix of learners. There’s no reason to let only the physicists have fun with Julia, right?
So, if you are interested in learning Julia, rest assured that we will run more sessions like this one! Too basic for you? Don’t worry: we are also planning a more advanced workshop focused on Julia for HPC during Term 2’s reading week. Keep an eye out for our future announcements.
Now that we have started, we won’t stop!
Using continuous integration efficiently
By Matt Graham, on 3 November 2023
Use of continuous integration (CI) is an important part of creating robust and reproducible research software; however, running automated jobs on services such as GitHub Actions and GitLab CI/CD comes with an associated energy, and therefore environmental, cost.
As an example, in the 30 days from 2nd October to 1st November 2023, the UCL/TLOmodel repository ran GitHub Actions workflow jobs which used 1937 hours of runner time. As a (very rough) back-of-the-envelope estimate, assuming an average runner power consumption of 12W[1], this equates to a total monthly CI energy usage for this project of 23kWh, which is about 10% of a typical UK household’s monthly electricity usage, or the monthly energy usage of around eight UCL employees’ laptops[2].
Below are some simple ways to reduce GitHub Actions and GitLab CI/CD runner usage without compromising the gains that automated testing and deployment brings. These approaches also have the side benefit for private GitHub repositories of reducing usage of the free Actions minutes quota.
- Automatically cancel redundant jobs
  For jobs triggered by pushing commits to a pull or merge request branch, we may push new commits and trigger new job runs while previous runs are still in progress. Typically we will only care about the job results on the latest commit, and so the previous job runs will be redundant. The `concurrency.group` and `concurrency.cancel-in-progress` properties in GitHub Actions workflows can be used to automatically cancel already in-progress job runs at different grouping levels – for example for workflows triggered for the same pull request, branch or tag. In GitLab, jobs which have the `interruptible` property set to `true` will be automatically cancelled when the Auto-cancel redundant pipelines project setting is enabled. Both GitHub and GitLab also allow manually skipping job runs when pushing new commits by including a token such as `[skip ci]` or `[ci skip]` in the commit message.
- Use filters and rules to only run jobs when needed
  Commonly some checks are only relevant to a subset of the files in a repository. For example, if a branch only involves changes to a Markdown README file, we probably do not need to run a test suite for the source code in the repository. In GitHub Actions we can use the optional `paths` and `paths-ignore` properties of the `push` and `pull_request` triggers to only run workflow jobs when the files changed do or do not match one or more patterns. Similarly, in GitLab we can use rules to set conditions on when a job runs, with the `rules:changes` property specifically allowing runs to be limited to when files matching specified patterns are changed. It may also make sense to have jobs only run when pull requests are no longer drafts and are marked as ready to review. This can be achieved in GitHub Actions using a combination of the `on.pull_request.types` property and an `if` condition on the job that checks `github.event.pull_request.draft` is `false`.
- Set the scheduled job frequency appropriately
  Both GitHub Actions and GitLab CI/CD pipelines allow running jobs on a schedule using `crontab` syntax. While we sometimes refer to nightly jobs, it is worth considering carefully what the appropriate frequency is for such scheduled jobs based on the use case and wider context of the project, and whether running jobs at, for example, a weekly frequency might be sufficient. Running daily jobs to check whether tests for a package pass against the latest compatible versions of upstream dependencies may make sense for a package with a large userbase and development team, where breaking changes in upstream packages are likely to be encountered in practice quickly and where there is likely to be sufficient developer capacity to resolve the issues in a similar timeframe. On the other hand, for a package with fewer users and contributors, having a test job identify such breaking changes at a weekly frequency may be sufficient.
- Cache job dependencies
  A common step in CI jobs is setting up dependencies on the runner, often using a language-specific package manager like `npm` or `pip`. Rather than redownloading and building dependencies afresh on each run, we can use caching features to reuse the dependencies built in previous jobs (providing the new run uses the same versions of the dependencies). The GitHub `cache` action provides a generic approach for performing caching in GitHub Actions jobs, while language-specific setup actions such as `setup-python` have built-in support for setting up dependency caching. GitLab similarly provides support for caching dependencies.
- Set timeouts to avoid misbehaving jobs running for long times
  The default timeout for GitHub Actions jobs is 6 hours. If you expect a job to run in much less time than that, setting a more conservative timeout can prevent misbehaving jobs which hang or run very slowly from inadvertently using a large amount of runner time. The default timeout on GitLab is shorter, at 60 minutes, but can similarly be adjusted.
- Chain jobs intelligently and fail fast
  Commonly we will run an assortment of tests and checks as part of CI workflows, with a correspondingly wide range of run times. Code formatting and linting checks will typically be the quickest to run, while test suites may contain both quicker unit tests and integration and system tests which can take longer to run. Ensuring faster-running checks and tests run first, and halting further jobs from running if these fail, will avoid wasting compute time running slow tests unnecessarily before changes to fix the faster-running checks are made. Taking this one step further, for very quick checks such as linting and formatting, frameworks like `pre-commit` can be used to ensure checks are run automatically when committing changes, reducing the chance of CI jobs failing and needing to be rerun in the first place. In GitHub Actions the `jobs.<job_id>.needs` property can be used to specify that other jobs must successfully complete before another job runs. When running a matrix strategy job, which creates one job run for each of a set of configurations (corresponding to, for example, different versions, operating systems or groups of tests), the `jobs.<job_id>.strategy.fail-fast` property can be set to `true` to cancel any other still in-progress job runs in the matrix if a single run fails. GitLab pipeline jobs also have a `needs` property for specifying dependencies. The sketch after this list pulls several of these settings together.
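As an illustration, here is a minimal GitHub Actions workflow sketch combining several of the settings above; the job names, Python version and install/test commands are placeholders for whatever your project uses:

```yaml
name: Tests

on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]
    paths-ignore:
      - "**.md"  # skip CI for documentation-only changes

# Cancel in-progress runs for the same workflow and branch/pull request
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint:
    runs-on: ubuntu-latest
    timeout-minutes: 10          # fail fast if something hangs
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: pip             # reuse cached dependencies between runs
      - run: pip install pre-commit && pre-commit run --all-files

  test:
    needs: lint                  # only run the slower tests once linting passes
    if: ${{ !github.event.pull_request.draft }}
    runs-on: ${{ matrix.os }}
    timeout-minutes: 30
    strategy:
      fail-fast: true            # cancel remaining matrix jobs if one fails
      matrix:
        os: [ubuntu-latest, macos-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: pip
      - run: pip install ".[test]" && pytest
```

The rough GitLab CI/CD equivalents of some of these settings (again a sketch, with placeholder paths and commands) would be:

```yaml
test:
  stage: test
  interruptible: true   # lets the Auto-cancel redundant pipelines setting cancel this job
  timeout: 30m
  rules:
    - changes:
        - "src/**/*"
        - "tests/**/*"
  cache:
    paths:
      - .cache/pip
  script:
    - pip install ".[test]"
    - pytest
```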
[1] Estimating the power usage of a CI job runner is complicated, as typically jobs will be run on virtual machines (VMs) on cloud-hosted servers, generally with multiple VMs running in parallel on each server and potentially multiple job runners per VM. The analysis in Aldossary and Djemame (2019) suggests a VM with 4 virtual central processing units (CPUs), running on a host with an eight-core Intel Xeon E3-1230 V2 CPU and 16GB of DDR3 RAM, can be attributed a mean power consumption of around 40W under an active load with 80% CPU utilization (see Figure 7). If we assume four job runners per VM (one per virtual CPU), this corresponds to around 10W per runner. To account for the additional power overhead of data centre infrastructure such as cooling, we assume a power usage effectiveness of 1.2 (based on figures provided by Microsoft for Azure), giving an average overall power draw of 12W per runner.
[2] This is based on an assumption of an average power draw of 20W (under a mix of idling and working at load) for a Dell Latitude 5410 laptop, with a 36.5-hour working week and four weeks in a month, giving an estimate of 2.92kWh used per laptop per month.
Simulating light propagation through matter.
By Sam Cunliffe, on 31 October 2023
Observing how light interacts with materials allows us to develop non-invasive medical imaging techniques that rely on these interactions to assemble an image or infer an appropriate diagnosis.
Light interacts with materials in many different ways. One of the most commonly observed interactions is dispersion, which causes white light to split into individual colours, creating phenomena like rainbows (light from the sun dispersing through raindrops). Another commonly observed interaction is refraction, which causes light to change direction as it passes between two materials and is responsible for straight objects like straws appearing disjointed when placed in water. To completely describe what is going on in these interactions, we have to use a system of equations known as Maxwell’s equations, along with some additional parameters that describe the particular material(s) the light is interacting with. In their most general form, Maxwell’s equations are very complex, but they have the advantage that almost all materials and interactions can be modelled by them. Solving these equations is, in general, impossible to do with pen and paper, so we need software to do it for us.
Software like this has a wide variety of applications in biomedical optics, notably optical coherence tomography (non-invasive medical imaging of the eye), multiphoton microscopy and wavefront shaping. For example, we can use this software to model light propagating in the retina, simulating a retina scan. We can then perform a retina scan for a patient in real life and use our simulation to better understand the scan. In the early stages of disease, retinal scans often hint at a particular change to the retina without being definitive. We can use our simulation to test what types of changes to a retina can lead to the observed signatures in an image, and therefore help in reaching a diagnosis.
The Problem
In collaboration with the UCL Medical Physics and Biomedical Engineering department, developers from ARC have worked to open up a legacy C and MATLAB library which simulates light propagating through matter. This software was initially developed as part of a PhD thesis approximately 20 years ago and has been continuously developed since then. However, the need to rapidly answer research questions led to the code becoming less sustainable and harder for others to use. Whilst the core functionality was already there, the library needed updating to a more modern language and aligning with the FAIR4SW principles.
What we did
The aim of the project was to provide users with a program to which they can pass custom input describing the material they want to simulate, and from which they receive an output they can use in further analysis. We wanted users not to have to worry about the internal workings of the software: they should only have to download the library code, build and install it once, and be ready for future analyses. We used modern build tools to standardise the build and installation of the software, and aimed to make our instructions as straightforward and operating-system-independent as possible. We also set up automated testing of the software and wrote example scripts that users can modify to easily create input files in the correct format.
The outcomes
Version 1.0.1 of the Time Domain Maxwell Solver (TDMS) is now available under a GPL-3.0 licence. You can download it from GitHub, and install and run it on all operating systems. The project has a public-facing website and a growing collection of examples. We also have developer documentation so anyone can contribute in the future.
TDMS 1.0.1 now has a number of new features, including the option to switch between different solver methods (how the simulation is performed), the ability to select custom regions over which to compute (to save wasting computation time), and the ability to select different techniques for extracting output information through interpolation.
The ARC software engineers were a joy to work with. They brought knowledge of modern software engineering practice and quickly understood the code, and the underlying physics, as required to very effectively re-engineer the code. This collaboration with ARC will hopefully allow for a new range of users to access TDMS and significantly increase its impact.
—