
Centre for Advanced Research Computing


ARC is UCL's research, innovation and service centre for the tools, practices and systems that enable computational science and digital scholarship


Archive for the 'Event' Category

Research Integrity in an AI-Enabled World

By Samantha Ahern, on 5 April 2024

Over the last 15 months there has been much debate, hype and concern about the capabilities of tools and platforms that leverage Large Language Models (LLMs) and media generators, broadly termed Generative AI. The predominant narrative in Higher Education has been the perceived threat to academic integrity and the associated value of degrees. As such, much of the discussion has focused on taught students, assessment design and “AI-proof” assessment. This has been coupled with concerns about the inability to reliably detect generated content, and the disproportionate number of false positives when text by non-native English speakers is submitted to various detection platforms.

AI generated image of a researcher using AI in front of the UCL portico

However, despite the proliferation of Generative AI enabled research tools and platforms, numerous workshops promising increased research productivity, and publications asking authors to declare whether or not these tools were used in producing outputs, there has been limited discussion of staff use and research integrity.

Two things prompted action. First, the publication of initial findings from a study on staff use of these tools by Watermeyer, Lanclos and Phipps, which included use to complete “little things like health and safety stuff, or ethics, or summarizing reports”. Second, the potential safety risks from fine-tuning models reported in the Stanford University policy briefing Safety Risks from Customizing Foundation Models via Fine-Tuning. In response, a workshop focusing on the interplay of Generative AI and research integrity and ethics was proposed as an AIUK Fringe event.

Research Integrity in an AI-Enabled World took place on Monday 25th March 2024. The aim was to explore how we think Generative AI enabled tools and platforms could and should impact the research process, and what the integrity and ethics implications are. The eventual aim is to produce a policy white paper.

The event was organised so that there was a series of thought provoking talks in the morning, followed by a world-cafe style session in the afternoon. The event was held under the Chatham House Rule to enable open and frank discussion of the topic and arising issues.

The first set of talks predominantly focused on ethical issues. There were discussions on authorship, and the nature of authorship where multiple actors are involved, e.g. training data creators, platform developers and prompters; and on bias in image generation, which can reinforce misconceptions and stereotypes. The session culminated in a talk on the University of Salford’s evolving approach to Generative AI and research ethics.

The second set of talks focused on the current capabilities, limitations and implications of using Generative AI enabled tools in the research pipeline, predominantly for qualitative analysis. This session included a discussion of evidence synthesis and the need to find more efficient methods whilst maintaining reliability and breadth of knowledge, comparing “traditional” machine learning approaches with the use of large language models. The enhanced capabilities of Computer Aided Qualitative Data Analysis Systems, and their implications for methodological approaches, were also introduced and discussed. The session concluded with a talk from Prof Jeremy Watson about the work currently being undertaken by the UK Committee on Research Integrity’s AI working group, of which he is a member. Key themes currently under consideration by UKCORI are:

  • Governance
  • Roles and Responsibilities
  • Skills and Training
  • Public Understanding and Expectations
  • Attribution and Ownership – IP, etc.
  • Understanding Data Inputs and Models
  • Need for Research in AI and Integrity

During the world-cafe session participants addressed the following questions:

  • What do we mean by Research Integrity in an AI-Enabled Research Environment?
  • Are there degrees of Research Integrity based on discipline and how embedded AI use is in the research process?
  • What are the key ethical and legal considerations?

These were supplemented by the following participant-proposed questions:

  • Generative AI is extremely good at in-filling uncertainty, where details of images become filled with bias. Should the responsibility of bias be equally on a prompter who enables this by omission?
  • Recalibration of government and private funded RI in AI? Isn’t this the foundation of biases for RI?

Outputs from the world cafe session will be analysed over the next few weeks, and workshop participants were invited to contribute to the development of workshop outputs.

Key themes that emerged from the event include:

  • Transparency
  • Criticality
  • Responsibility
  • Fitness for purpose
  • Data protection and privacy
  • Digital divide – privilege and harms
  • Training – education

Social media post about the workshop

The workshop was well received, with participants rating their overall experience of the event as 4.71 out of 5.

The speaker sessions were rated as very good by over 70% of participants, with the world cafe mentioned as a highlight of the event.




As the proposer, organiser and host of the event, I can’t help but still wonder:

  • Can we ethically and with genuine integrity use tools which are fundamentally ethically flawed?
  • Why are we accepting of these issues?
  • How should we be pushing back?

I will leave you with the words from Arundhati Roy with which I opened the event.

First Julia workshop at ARC

By David Pérez-Suárez, on 15 November 2023

Last Friday (the 10th of November) we ran our first ever Julia workshop. After years of having an expert in our team – Mosè – who has been introducing the rest of the team to this wonderful language and has even convinced some collaborators to use it on their projects, we’ve taken the leap and taught it to the UCL community.

paperplane image

This first workshop was limited to a small number of learners (10–12). Seven attended (most of them physicists!), and with our team of three (myself instructing, with Mosè and Tuomas helping — also all physicists 😅) the learners’ experience was very positive.

Originally, we were going to use the Carpentries Julia lesson available in the incubator. However, Mosè and I decided against it as the expected previous knowledge was higher than what we were aiming for. Therefore, we created our own lesson!

Our lesson started with the basics: different types of numbers, strings, and how they all fit into Julia’s family of types. We introduced some of the quirks Julia surprises you with when you come from a different language. This was key in our lesson! We started by writing a function as if it were Python — which we expected to be the most familiar style for our cohort. From there, we introduced new concepts and syntax to make the code more “julianic” (I’ve come up with that term, so it may not be the one used by the Julia community). We covered the basics (types, functions, conditionals, loops and plotting) during the morning session. After lunch, we introduced how to use other libraries to solve polynomials and ordinary differential equations. We even introduced unit testing, had time to learn how to work with CSV files with DataFrames, and gave a quick overview of Pluto.
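To give a flavour of what I mean by making code more “julianic”, here is a small sketch (not the actual lesson material — the function names are made up for illustration):

```julia
# A first attempt, written the way a Python programmer might:
function double_all(xs)
    result = []              # an untyped (Vector{Any}) accumulator
    for x in xs
        push!(result, 2 * x)
    end
    return result
end

# A more "julianic" version: broadcasting replaces the explicit loop,
# and the result keeps the element type of the input vector.
double_all_julian(xs) = 2 .* xs
```

The second version is shorter, and because it avoids the `Vector{Any}` accumulator it also lets the compiler produce specialised, faster code.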

During the preparation of the material and the class, I was constantly supported by Mosè, bouncing around lots of ideas and suggestions. We even found a bug in one of the libraries we were going to use, which was fixed instantly after Mosè reported it.

The class went smoothly. We encountered some problems with the installation of Julia and some unexpected slowness when installing libraries (we reported this after the workshop, and it was also fixed straight away!). Here is some of the feedback we received at the end of the day:

  • Great course, learned a lot.
  • The course has been great. The pace is good and it allows us to ask any questions we have.
  • Comparison’s to Python really helped me appreciate the advantages of Julia. Paper plane example was great.
  • Very good course, covered all the right topics for a 1-day intro session.

Personally, I don’t remember a class that has gone so well! We had very few difficulties, covered everything we had planned, and answered very interesting questions from our learners. It may have been due to the small number of learners, or their previous programming experience, or their similar backgrounds, or maybe it’s because Julia is easy to learn 😉. Whatever the reason, I really want to repeat it, with a larger class and a more varied mix of learners. There’s no reason to let only the physicists have fun with Julia, right?

So, if you are interested in learning Julia, rest assured that we will run more sessions like this one! Too basic for you? Don’t worry, we are also planning a more advanced workshop focused on Julia for HPC during Term 2’s reading week. Keep an eye out for our future announcements.

Now that we have started, we won’t stop!

RSE and Education for Sustainable Development: A Call to Action

By Samantha Ahern, on 12 September 2023

RSECon23 opened with a keynote from Gael Varoquaux, introducing themes synergistic to my conference workshop “How do we design and deliver sustainable digital research education”.

The actual theme of the workshop – what role(s) RSE has to play in Education for Sustainable Development – probably wasn’t what most participants were expecting. However, there were some very good conversations and ideas for action.

The workshop opened with two questions:
1. What does Sustainable mean to you?
2. What does Education for Sustainable Development mean to you?

These set the scene for the discussion in the session.

Key definitions and the SDGs

“meeting the needs of the present without compromising the ability of future generations to meet their own needs.”

UN, 1987

Sustainable Development
“An aspirational ongoing process of addressing social, environmental and economic concerns to create a better world.”

Advance HE / QAA 2021

Education for Sustainable Development
“The process of creating curriculum structures and subject-relevant content to support sustainable development.”

Advance HE / QAA 2021

The workshop participants were introduced to the 17 UN Sustainable Development Goals and asked to consider which of these are areas for development in RSE, and which we can affect through education.

There was a general consensus that almost all of the goals relate in some way to RSE activity and its impact, the most notable being SDG 4: Quality Education.

It was felt that, through RSE-led education activity, the following SDGs could be affected:

  • Goal 3: Good Health and Wellbeing
  • Goal 4: Quality Education
  • Goal 5: Gender Equality
  • Goal 8: Decent Work and Economic Growth
  • Goal 9: Industry, Innovation and Infrastructure
  • Goal 10: Reduced Inequalities
  • Goal 11: Sustainable Cities and Communities
  • Goal 12: Responsible Consumption and Production
  • Goal 13: Climate Action

For Goals 16 (Peace, Justice and Strong Institutions) and 17 (Partnerships For The Goals) it was unclear as to how these would apply.

Barriers and Opportunities

The discussion then focused on barriers to having an impact on the SDGs, but also on the opportunities we have for making a positive difference.


Key themes from the discussion on barriers were:

  • Resources: time, people, data sets
  • Funding
  • Lack of training, confidence in education skills
  • Lack of recognition
  • Lack of support / mentorship


Key themes from the discussion on opportunities were:

  • Ability to design our own materials and select data sets
  • Work collaboratively, as a community
  • Ability to raise awareness of issues
  • Access to experts from across our institutions
  • Access to education related CPD (if in a university setting)
  • Our learners want to learn
  • Our educators are passionate about their work

Although there are some well recognised barriers, there are also many opportunities and connections we can leverage to make change.

The Call to Action

The workshop concluded with a design task to identify concrete steps we could take to address the barriers and leverage the opportunities.

The calls to action were:

  • Never teach alone
    • Enables different ways to explain
    • Could lead to a variety of role models
    • Less pressure
    • More perspectives
    • Broader variety of disciplines
    • Different background knowledge
  • Encourage those who found it difficult to return as helpers and instructors
  • Humanise the educators
    • Introductions
    • Live coding
    • Coding confessions
  • Co-development of lesson materials
    • Share ideas and examples
    • Examples from different domains
  • Talk to learners
    • What is needed?

Most importantly, we are a community, and we should leverage that community to learn from and support each other.

So, let’s work together to make a positive change!

You can view the original results on Mentimeter.

View the Twitter / X thread from the workshop.

Hack the (ARC) Teaching workshop

By David Pérez-Suárez, on 4 July 2022

Two weeks ago (20th – 23rd June) we ran an internal workshop in our group to reflect on our teaching activities. As any good workshop should, it also included a fun hack day at the end, to work on pet projects and ideas that we hadn’t had the time to work on before. This is a summary of those four days and a reflection for the future.

The workshop was set up with two main purposes: to review all the teaching activities we are involved in, and to learn some techniques to become better teachers. Roughly 8 people attended every session, which helped everyone to participate. The event was fully hybrid, with roughly a 50–50% split between people joining physically and remotely (the train and tube strikes shifted this towards 30–70% by the end of the week). The big screen and the semi-separated areas in our collaboration space, together with the use of small physical and virtual groups, contributed to a nice flow.

Each day of the workshop was broken into two 2-hour blocks, one in the morning from 10:00 to 12:00 and one in the afternoon from 14:00 to 16:00. This helped us disconnect a bit, catch up with other commitments, or enjoy lunch in the park while recharging our solar batteries.

In terms of tooling, we used MS Teams as the conferencing tool (our calendars and the big screen are linked to it) – we also explored the breakout rooms feature it provides; HackMD and Etherpad for note-taking; Google’s Jamboard for collaboratively moving cards in a digital medium; IdeaBoardz to collect feedback; and tried (with only partial success) Visual Studio Code’s Live Share to pair-program during the hack day.

Now that the logistics and tooling have been explained, let’s dive into the content of the workshop.

The workshop started with a short review of the Carpentries instructor training lessons. That training lasts two full days, and this session lasted only two hours, so many things were not covered (like practising the teaching); however, we covered some basics about how learning works and how to create a positive learning environment. As with any Carpentries workshop, it was full of activities, and we had good and interesting discussions. We spent the afternoon of that day discussing a set of uncomfortable scenarios that may happen during a teaching activity. These scenarios were created by Yanina Bellini Saibene for Metadocencia and translated by J.C. Szamosi. They are a very useful resource to explore before such situations arise for real. The scenarios were distributed between the small groups, and each group then shared its suggested actions with the bigger group. Of course, sharing with the bigger group was also a source of new points of view and ideas. We highly recommend this exercise to everyone who takes part in any teaching activity! The day finished with a review of the Science of Learning paper. As with the previous exercise, we distributed the sections among us and discussed them first in small groups and then as a whole. Here is a nice quote about the paper from Sarah in our team:

I want to print this out and stick it all over my office so I can see it whenever I teach.

The second day was focused on our teaching activities and an overview of Submitty, the autograding tool we use in a couple of the master’s courses we teach. We started with a set of lightning talks (aiming for 1 minute each, though all of us overran a bit), one for each teaching activity we are involved in. Each talk had to describe the teaching activity: its topics, the audience it is aimed at, the format, what is going well and what can be improved, finishing with the challenges for next year – all in one minute! We had 13 talks; some were about courses or workshops we run once a year, others about courses that happen multiple times. Two of them were from the UKRI Data Science Training in Health and Bioscience (DaSH) projects we are involved with: IDEAS and Learn to Discover. The last one was a short summary of the teaching activities of our friends at Digital Education. The afternoon was focused on Submitty: first an overview of how the system looks from the different points of view (student and instructor), and then how to set up the exercises. We completed the day with an exercise on planning the autograding of two questions from past assignments. The main conclusion was that for autograding to work, we need to be more specific about what we ask the students. This, however, may have its disadvantages, as it limits the freedom in how students may approach a problem.

The third day was an ABC Learning Design workshop led by Nataša Perović from UCL’s Office of the Vice Provost Education & Student Experience. The workshop started with an overview of the different learning activity types described in Diana Laurillard’s work “Teaching as a design science”. We spent the practical side of the workshop focusing on three of our courses. It was a very useful exercise that we should do more frequently to keep improving and fine-tuning our courses. In the afternoon, we learnt how to migrate our notes from Jamboard into the Learning Designer tool from UCL’s Knowledge Lab at IOE. One cool feature that Nataša demonstrated is how our learning design structure can be exported into Moodle.

The last day was the hack day. We have a collection of mini-projects that we would like to work on, but that normally get postponed until we have the time… Well, finally the time arrived! We tackled four of these projects: two were completed quite quickly, and the other two got started (and that’s sometimes the hardest bit!) – hopefully the inertia keeps them moving to completion soon. One project, an analysis of students’ grades, began with a good discussion about the ethics and privacy of the project. This helped us make decisions about which dataset to use (e.g., the anonymous dataset provided by Moodle before the marks are released), and gave us future ideas about how to clarify to students how assignments are graded anonymously.

That was how we spent those four days: learning how to improve our teaching, reflecting on what we’ve done so far, and planning what we can do to build better courses in the future. After the positive feedback, and seeing how useful a focused week without other distractions can be, we may make this a recurring annual activity!

How to get started with HPC and AI in your research

By Anastasis A Georgoulas, on 28 September 2021

The Tech Socials are back after our summer break, this time with a twist. For September’s social, we decided to follow up on a SeptembRSE event with our own discussion.

The topic was how to get started with technologies like high-performance computing and artificial intelligence. The panel members came from UCL’s Centre for Advanced Research Computing (ARC), research institutes in molecular biology and sociology, as well as industry. We wanted a range of perspectives, and we think we got it!

There’s lots to talk about on this topic and the discussion proved that, as we moved away from practical tips (such as keeping in mind the scale of your data and workloads) to consider the broader context. We covered policy issues, like the availability of seed funding, ideas for tackling the “skills gap” that researchers and students have to overcome, the importance of wrapping your head around what is possible to do with new technologies, all the way to the right attitude, and how technologists can more efficiently communicate with researchers in the social sciences and humanities (hint: some humility is advised).

If this sounds interesting, do watch the recording (unfortunately missing the first few minutes of introductions!). We are still exploring how ARC can best serve UCL, so get in touch if you think we can work together – you can find out more about what we do on the Research Programming Hub pages until the website reflects our change to ARC.

Thank you to all panel and audience members for their contributions!

Our first ever Git workshop online

By David Pérez-Suárez, on 16 April 2020

Tl;dr: We successfully ran a 3-hour workshop for 11 learners with one instructor and five helpers. We used Blackboard Collaborate as our main tool and shellshare to “look over the shoulders” of the learners.

Our team has been running training workshops since its start. Enabling researchers to make better software is one of our core goals. Most of our training benefited from The Carpentries’ methodology and material. We were early adopters and supporters of those – UCL became one of the first affiliate institutions of the Software Carpentry Foundation.

These workshops have always been in person, broken into 3-hour blocks with no more than 30 learners at a time, at least one helper for every seven learners, and one instructor per session. That model has been very successful, as the feedback from our learners has shown. However, in the current situation, where everyone is working from home, we need to move these workshops online. Last week we ran our first Git workshop online, and we could call it a success! Keep reading to find out how we did it.

There have been some experiences shared by other groups that we’ve learnt from.

For our first online workshop we had only 11 learners, all from the same research group. This was helpful, as they all had similar experience and goals (namely, to learn Git!). We had one instructor and 5 helpers. That made our helper-to-learner ratio quite high (from ~1/7 in an in-person workshop to ~1/2), and though such a large ratio may not be sustainable for our group, it was safer to start this way.

On the technology side we used a set of tools to facilitate the teaching and helping:

  • To deliver the workshop we hosted the teaching session on Blackboard Collaborate. This is what our university uses for online teaching, and it worked very well. Blackboard has various features such as screen sharing, breakout rooms, chat and a whiteboard. On this first occasion we tried to use the breakout rooms but we didn’t succeed (more about that later). The whiteboard was not used either.
  • Though Blackboard has a chat, we also used a google document to share links and other information in a more organised way.
  • Shellshare was used to broadcast the learners’ terminals and each helper was monitoring two of these at a time.
  • We also used Jitsi to host a drop-in session to help with installation issues before the workshop.

Let’s dive into how the organisation and the delivery of the workshop went ahead.


As in any Carpentries workshop, we ran a quick pre-workshop survey to learn how to pitch it. From that we knew most of the learners were using macOS, only one was using Linux, and none were using Windows. That was very useful, as we knew that shellshare works well on macOS and Linux.

The students were provided with a set of instructions to prepare for the workshop, and with a suggested way for laying out the windows on a single screen (such as for a laptop). Since there are always problems with installations, we also hosted a one-hour drop-in session a few hours before the workshop. For the drop-in session we used Jitsi as it works straight from the browser with a single link, and a meeting doesn’t need to be scheduled (as in BlackBoard). With Jitsi we could get the learners to share their screen and help them to debug the problems they had.

suggested layout of the windows for a student

We also sent them the link to the google document that we were going to be using during the workshop. In that document, and “copying” from what the SSI had done at the Collaborations Workshop, we included a link to the Code of Conduct, the installation instructions, the Blackboard room, the Socrative room (for quizzes) and the pinup board (for feedback at the end). (We went for a google document instead of a Carpentries’ etherpad so people could include images, though it lacks the ability to refer to a particular line number.)

The workshop

In this workshop we taught the git lesson from Software Carpentry (with the recipe twist). The students were therefore going to use their terminal, nano and, for the last part, GitHub’s website.
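As a rough sketch of the core workflow the lesson builds up to (this is not the lesson material itself, and the repository and file names are made up for illustration):

```shell
# Create a new repository and make a first commit
mkdir recipes && cd recipes
git init
git config user.name "Learner"                # who commits are attributed to
git config user.email "learner@example.com"
echo "2 avocados" > guacamole.md              # in the workshop, files are edited with nano
git status                                    # shows guacamole.md as untracked
git add guacamole.md                          # stage the change
git commit -m "Start the guacamole recipe"    # record it in the history
git log --oneline                             # one line per commit
```

From there the lesson moves on to exploring history, ignoring files, and finally pushing to a remote on GitHub.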

Almost everyone connected to the digital classroom without issues. Two people had either connectivity issues or problems with the audio; the session was recorded in case they wanted to review it later. We started the session by introducing everyone to the platform (how to mute/unmute and raise a hand), how we were going to use the google document, and how to start shellshare.

We asked everyone to add themselves, with their names and pronouns, to the google doc and paste their shellshare link under their favourite helper. Though everyone managed to get a shellshare link, it did not work perfectly for all: some learners switched to a terminal from within RStudio (leaving the helpers in the dark), or the program crashed at some point. Nevertheless, that didn’t disrupt the workshop much.

We also explained the schedule for the session. We had a break every 55 minutes: the first of 10 minutes and the second of only 5. This differs from our usual workshops, where we have one break every 90 minutes, but it’s an important change to mention here, as people may not have a good chair at home and more frequent breaks to stretch are welcome.

Since we knew most of the learners hadn’t used a Unix shell before, we added to the document a short description of the six commands we were going to use (cd, ls, pwd, mkdir, rm and cat), which we introduced during the first 15 minutes of the workshop.
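Those six commands cover just enough shell to follow the git lesson. A tiny example session (the directory and file names here are invented for illustration; in the workshop files were created with nano rather than echo):

```shell
pwd                     # print which directory you are currently in
mkdir planets           # make a new directory
cd planets              # move into it
echo "Mars" > red.txt   # create a small file
cat red.txt             # print the file's contents to the screen
ls                      # list what is in the current directory
cd ..                   # move back up
rm -r planets           # remove the directory and its contents
```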

We used Socrative to run quizzes during the class, where most questions are asked twice: first answered individually, and then again after discussing with a peer. We tried to translate that to the online world using Blackboard’s breakout rooms. However, it didn’t work well. We hadn’t tested it with that many participants on a call, and creating the rooms and assigning people to them was not as smooth as we would have liked. In part this was because we were not very familiar with the tool, but it may also be a limitation of the tool itself (how can we keep a custom assignment of participants to breakout rooms over the whole session?).

During the workshop some learners had problems, which we tried to help with using breakout rooms. The problem there is that the helper loses track of what’s happening in the main room and can’t – as they would in a physical classroom – point the learner to something being explained at that moment and then help afterwards.

Finally, the workshop finished almost on time (+5 min) and we covered most of the material. Learners gave positive feedback and appreciated the number of helpers we had in place.

The instructor setup

I, the instructor, was using a Linux machine running Gnome on Wayland. The terminal that was broadcast was one from within Jupyter-lab. For some reason, the browser could share any application window except the terminal! (I am yet to understand why.) The terminal was split using Raniere’s shell-split trick, letting the students catch up if some commands had scrolled out of view. Sharing the terminal via the browser had an unexpected advantage: I could jump between tabs (terminal, google doc, diagrams, github, …) without having to change which application was being shared.

We hadn’t clearly defined how to communicate with the helpers, so we had a Slack room on one side and the moderators’ chat within Blackboard on the other. Thankfully, we defaulted to the chat within Blackboard as the workshop progressed.

I had also set up Krita with a drawing tablet in case I needed a whiteboard during the workshop. But in the end, following Greg’s advice in his talk, I decided not to use it. I had chosen Krita over the whiteboard provided within Blackboard because I found it harder to write in the latter, as it smooths the lines you draw.

The helpers

The helpers did everything within the browser. They were (virtually) looking over the shoulders of some of the learners via shellshare as they worked through the material, informing the instructor to go slower or faster, or intervening if help was needed.

How the helpers were going to communicate with the learners should have been explained better, as Blackboard’s chat was not fully explained and it’s not that obvious!


The workshop was a success! Though we had some hiccups, nothing was too disruptive.

What worked

  • Blackboard is a very nice tool for online teaching, works across all operating systems and doesn’t require any tool or plugin to be installed.
  • shellshare, when it works, is very good!

What could be improved

  • We should have practised setting up breakout rooms on Blackboard more. We would have noticed that the people in each room change each time you separate them.
  • Explain better to everyone how to use the communication channels (e.g., chat feature on Blackboard)[1].
  • shellshare worked, but not for everyone. We also need to explain its purpose better (e.g., if a learner switches to a different terminal, we can no longer see it).

Other thoughts

We can’t sustain such a high helper-to-learner ratio, and we probably don’t need to. Next week we will experiment with one helper for every four students.

Most issues happened at the start: not everyone had installed everything they needed beforehand. A “compulsory” drop-in where the setup is checked and explained would give more time to focus on the content of the lesson. Additionally, a self-check script, as in the Carpentries workshops, that tells learners whether their installation has been successful would help.

The learners at this workshop were familiar with RStudio (but we didn’t know it). Maybe teaching git from within RStudio would have worked better, although it would require more space on the learner’s window as the IDEs are bigger than a terminal.

In this workshop we didn’t have anyone on Windows. shellshare doesn’t work on Windows by default, but there’s a workaround using PowerShell with Python installed.

You may wonder whether you are streaming passwords when using shellshare. The answer is no: shellshare uses the UNIX command script, which only stores what you see on the screen, and passwords are not echoed to the screen.

How well did we do regarding the recent Carpentries’ recommendations for teaching workshops online?

  • experienced instructor and small class size ✅
  • procedures that are as close as possible to our standard practices ✅
  • mitigating unreliable internet connections (the workshop was recorded) ✅
  • mitigating the limitation of a small single screen (we suggested a window layout) ✅
  • pre-workshop support with software installation ✅
  • use of cloud instances with pre-installed software as a backup ❌
  • helpers ready to step in if the instructor loses connection ❌

That document provides many more recommendations. I think we did very well with most of them, but we can still do better!

One day we will be as prepared as this Chris at Berkeley


  1. It turns out that on Blackboard you need to click a back arrow on the top of the chats to exit from the “Everyone” chat! ↩︎

Career opportunities for RSE-like roles outside RSE groups

By Jonathan Cooper, on 23 August 2019

In the weeks running up to the RSE Conference, some colleagues and I will be providing our thoughts on the questions people have submitted for our panel discussion with senior university management about how RSEs are being supported within academia. (You can submit more questions and vote on the current questions on Sli.do.)

Question: If you have a central university RSE group do other staff working in RSE-like roles in academic departments have the same career opportunities as that group?

As research software groups grow, seemingly inevitably they start to acquire more structure and hierarchy, with more senior roles being created to help manage the group. This provides for career advancement opportunities within the group, although often with the caveat that an increase in grade requires more managerial responsibility and reduced time for technical work. The situation is bleaker for those employed directly in research groups, typically on a fixed-term postdoc-style contract where the usual academic advancement routes – dependent on a publication record – do not easily apply. How can universities ensure the same opportunities are available to all staff?

Several groups have tried to address this challenge by making their job descriptions and selection criteria available to the whole university, for instance at Sheffield and King’s Digital Lab. This promotes consistency of approach, and gives the potential to aim at senior roles with appropriate selection criteria. Sustaining posts beyond a single funding source still presents a bottleneck to supporting careers, however. Central RSE groups with a large project portfolio and successful track record can often offer permanent contracts to staff, whereas this is rarely an option for individual research groups. At UCL we are investigating the potential of establishing ‘satellite’ groups within academic departments. These would be able to offer equivalent terms and conditions to their staff, and collaborate closely with the central team. We are not there yet however!

It is also worth considering a broader perspective. For staff trapped in a succession of fixed term contracts, ‘career opportunities’ is often synonymous with getting a permanent post, ideally at senior postdoc or even up to professorial pay grade. There are however many directions of travel that could be considered. How do we allow university staff, wherever they may be based, to move flexibly between teaching, research and professional services roles (or a combination thereof!), into industry and back into academia, always learning new skills along the way? How can progression be linked to expertise in different technical areas, not just management responsibility?

This question is ultimately not limited to RSE roles, but affects the wider network of roles within academia. UCL’s career pathways initiative (aimed at professional services) and academic career framework provide hopeful first steps in supporting more flexible careers, but there is still plenty of work to be done!

Hear an expert perspective on this and a variety of other questions from a panel of senior institutional representatives covering roles in research, HR and research software at the Science and Engineering South panel on “Institutional Support for RSEs” at RSEConUK19 on the 18th September at 13:30. You can also read previous blog posts in this series by Simon Hettrick and Jeremy Cohen.


By Jonathan Cooper, on 28 September 2018

Back in early September we attended the third edition of the conference series dedicated to Research Software Engineering.

It’s like the national meetings that exist for numerous disciplines, but to talk about us: our community and career paths, ways to serve the research community better, and tools and techniques for better software, among other topics. As expected, almost the whole Research Software Development Group (and Ian Kirker from Research Computing!) attended the conference.

This year’s conference was really important for us: it started with a keynote by our very own Prof. Eleanor Robson of UCL Archaeology, who talked about Oracc, the longest-running project our group has been involved in since its creation, including a couple of demos of the tools we have created for writing translations of cuneiform texts. We had Ilektra Christidi on the organizing committee; she also co-organized and co-chaired the international session, a lively event where RSEs from around the world exchanged views and experiences from their countries’ communities, and discussed cross-country initiatives and collaborations. Tom Dowrick – our affiliated team member – gave a talk about using the Robot Operating System to create a reproducible platform for surgical device development.

As usual, there were interesting workshops and talks from researchers, RSEs and representatives of big industry players alike. We especially enjoyed the workshops on Singularity; JupyterHub + Kubernetes; lean tools for product development; and parallelizing Python applications. Talk highlights covered building computer vision systems at Microsoft Research; why making scientific software sustainable is difficult; GUIs and visualization; lessons from CASTEP on rebuilding legacy software; and an entertaining and enlightening presentation from Catherine Jones (STFC) on how to shut down services gracefully – something we’ll be putting into practice as we deprecate our local Jenkins service in favour of the national service STFC are now piloting.

There was also exciting news for the future of RSE in the UK, with the announcement of a new registered society due to be launched soon. And we are excited to have struck up an agreement with the Software Sustainability Institute to collaborate with them on analysing their international survey results – more news on that in the coming months!

Our group head Jonathan Cooper attended several sessions looking at different aspects of managing RSE groups, as well as having many stimulating discussions with other RSE group leaders. There was a helpful workshop on inclusivity and diversity in RSE recruitment, discussing everything from the wording of adverts to the structure of interviews – and indeed how we maintain a welcoming culture within our team. UCL is doing well insofar as we have 6 nationalities within the team, 40% of our senior team is female, and we have put together majority-female interview panels, but there is no room for complacency. We’ll be looking at what outreach we can do to demonstrate what a great career path RSE is for all people. A panel session with RSE group leaders highlighted that all RSE groups across the UK face much higher demand for their services than they can match with current staffing levels, and so training the next generation of RSEs will be crucial.

A funders’ perspective from Susan Morrell (EPSRC) and David Carr (Wellcome Trust) underlined again the increasingly heavy dependence of UK research on software development, and hence the need for RSEs. Wellcome’s emphasis on open research was particularly encouraging. Also of interest here was an ARCHER study showing a 3:1 return on investment for their eCSE programme, which provides RSE support to researchers using national HPC resources. In another session we thought about ways in which RSE groups can benefit the wider community: not just delivering projects which benefit the researchers involved directly, and ensuring these tools are usable by others, but contributing to underpinning open source projects on which many researchers depend.

All in all, enough technical and social activity – a good warm up as term begins and we resume our regular activities, like drop-ins, Tech Socials, and coffee mornings!

Seminar: Developing a Parallel Adaptive Method for Pseudo-Arclength Continuation

By cceajhn, on 14 August 2013

There will be a seminar in the “research programming in practice” series by Dhavide Aruliah of the University of Ontario Institute of Technology (Oshawa, ON, Canada) on Weds 28th August at 2pm in Drayton B06.

Pseudo-arclength continuation is a well-established framework for generating a curve of numerical solutions of nonlinear equations. In my talk, I will review the basic ideas underlying adaptive predictor-corrector schemes for pseudo-arclength continuation, emphasising where the bottleneck arises in the computation of rejected corrector steps. We have developed a parallel code using standard C and MPI for adapting the step-length in pseudo-arclength continuation. Our method employs several predictor-corrector sequences run concurrently on distinct processors with differing step-lengths. Our parallel framework permits intermediate results of unconverged correction sequences to seed new predictor-corrector sequences with longer step-lengths; the goal is to amortise the cost of corrector steps to make further progress along the underlying numerical curve. I shall describe the essence of the parallel code and some of the issues that arose in its implementation. The goal is to have a straightforward interface to an MPI library into which researchers can plug in their serial C continuation codes to achieve modest improvements with widely available multicore desktop machines. This is joint work with Alexander Dubitski (Amadeus R & D, Toronto) and Lennaert van Veen (UOIT).
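To make the predictor-corrector idea concrete, here is a minimal serial sketch in Python, tracing the unit circle as a toy stand-in for a real nonlinear system. This illustrates the general scheme only (fixed step-length, no adaptivity), not the parallel MPI code described in the talk:

```python
import numpy as np

def F(u):
    # Toy curve: the unit circle x^2 + y^2 = 1, standing in for a
    # nonlinear system with one free parameter.
    x, y = u
    return np.array([x**2 + y**2 - 1.0])

def J(u):
    # Jacobian of F: a 1x2 matrix.
    x, y = u
    return np.array([[2.0 * x, 2.0 * y]])

def tangent(u, prev_t=None):
    # Unit tangent: null vector of the Jacobian, oriented
    # consistently with the previous tangent to keep moving forward.
    j = J(u)[0]
    t = np.array([-j[1], j[0]])
    t /= np.linalg.norm(t)
    if prev_t is not None and np.dot(t, prev_t) < 0:
        t = -t
    return t

def continuation_step(u, t, ds, newton_iters=10, tol=1e-10):
    # Predictor: step a distance ds along the tangent.
    v = u + ds * t
    # Corrector: Newton on the augmented system F(v) = 0 together
    # with the pseudo-arclength constraint t . (v - u) = ds.
    for _ in range(newton_iters):
        r = np.concatenate([F(v), [np.dot(t, v - u) - ds]])
        if np.linalg.norm(r) < tol:
            break
        A = np.vstack([J(v), t])
        v = v - np.linalg.solve(A, r)
    return v

# Trace the curve starting from a known solution.
u = np.array([1.0, 0.0])
t = tangent(u)
points = [u]
for _ in range(20):
    u = continuation_step(u, t, ds=0.1)
    t = tangent(u, prev_t=t)
    points.append(u)
```

The bottleneck the talk refers to appears when a corrector sequence like the loop above fails to converge for a given step-length and its work is thrown away; the parallel approach runs several such sequences with different step-lengths at once so that effort is not wasted.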

By James P J Hetherington, on 8 August 2013

Another workshop which I will be at, which should be of interest to readers:

Research software engineers are the people behind research software.
They not only develop the software, they also understand the research
that it makes possible.

Software is a fundamental part of research, and research software
engineers are fundamental to good software. Despite this, the role is
not well understood in the research community. This is something the
Software Sustainability Institute is campaigning to change – starting
with a workshop for research software engineers.

We will bring research software engineers together to talk about new
tools and interesting work, to share ideas with people who do the same
work, and to discuss how we can overcome the problems that are faced by
all research software engineers – like gaining recognition and reward
for their work.

The term research software engineer is new, so many people who fulfil
the role will not describe themselves in this way. To help judge whether
you should attend the workshop, we’ve put together some questions:

1. Are you employed to develop software for researchers?
2. Are you a researcher who now spends more time developing software
than conducting research?
3. Are you employed as a postdoctoral researcher, even though you
predominantly work on software development?
4. Are you the “person who does computers” in your research group?
5. Are you not named on research papers despite playing a fundamental
part in developing the software used to create that research?
6. Do you lack the metrics needed to progress your career in research –
like papers and conference presentations – despite having made a
significant contribution with software?

If you answered yes to any of the above questions, you should attend the
workshop for research software engineers.

For more information:
Registration: http://workshopforresearchsoftwareengineers.eventbrite.co.uk/