
UKRI Centre for Doctoral Training in Foundational AI


Archive for the 'Research' Category

“Safe Trajectory Sampling in Model-based Reinforcement Learning for Robotic Systems” By Sicelukwanda Zwane

By sharon.betts, on 29 September 2023

In the exciting realm of Model-based Reinforcement Learning (MBRL), researchers are constantly pushing the boundaries of what robots can learn to achieve when given access to an internal model of the environment. One key challenge in this field is ensuring that robots can perform tasks safely and reliably, especially in situations where they lack prior data or knowledge about the environment. That’s where the work of Sicelukwanda Zwane comes into play.

Background

In MBRL, robots use small sets of data to learn a dynamics model. This model is like a crystal ball that predicts how the system will respond to a given sequence of actions. With MBRL, we can train policies on simulated trajectories sampled from the dynamics model instead of generating them by executing each action on the actual system, a process that can take an extremely long time on a physical robot and cause wear and tear.

One of the tools often used in MBRL is the Gaussian process (GP) dynamics model. GPs are fully-Bayesian models that not only model the system but also account for the uncertainty in state observations. Additionally, they are flexible and are able to learn without making strong assumptions about the underlying system dynamics [1].
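As a minimal illustration (not the paper's implementation), a one-step GP dynamics model can be fit with scikit-learn to predict the next state from the current state and action. The toy one-dimensional system and its dimensions here are assumptions made purely for the example:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy 1-D system (unknown to the model): next_state = state + 0.1 * action + noise
X = rng.uniform(-1, 1, size=(30, 2))  # columns: [state, action]
y = X[:, 0] + 0.1 * X[:, 1] + 0.01 * rng.standard_normal(30)

# GP prior over the dynamics; the RBF kernel makes no strong
# parametric assumptions about the underlying system.
gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Predictive mean and uncertainty for a new state-action pair
mean, std = gp.predict([[0.5, -0.2]], return_std=True)
```

The returned standard deviation is what makes the model "fully Bayesian" in practice: downstream planning can account for how uncertain each predicted transition is.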

The Challenge of Learning Safely

When we train robots to perform tasks, it's not enough to just predict what will happen; we need to do it safely. As with most model classes in MBRL, GPs don't naturally incorporate safety constraints, which means they may produce unsafe or infeasible trajectories. This is particularly true during the early stages of learning: when the model hasn't seen much data, it can produce unsafe and seemingly random trajectories.

For a 7 degree of freedom (DOF) manipulator robot, bad trajectories may contain self-collisions.

 

Distributional Trajectory Sampling

In standard GP dynamics models, the posterior is represented in distributional form, through its parameters: the mean vector and covariance matrix. In this form, it is difficult to reason about the safety of entire trajectories, because trajectories are generated through iterative random sampling. Furthermore, this kind of trajectory sampling is limited to cases where the intermediate state marginal distributions are Gaussian.

Pathwise Trajectory Sampling

Zwane uses an innovative alternative called “pathwise sampling” [3]. This approach draws samples from GP posteriors using an efficient method called Matheron’s rule. The result is a set of smooth, deterministic trajectories that aren’t confined to Gaussian distributions and are temporally correlated.
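A small sketch of the idea behind Matheron's rule, using a noise-free one-dimensional GP (the inputs, observations, and kernel settings here are illustrative, not from the paper): a single sample drawn from the prior is updated into a posterior sample by one deterministic correction, so the whole path is a single function rather than a chain of random draws.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

rng = np.random.default_rng(0)
X = np.array([-1.0, 0.0, 1.0])   # observed inputs
y = np.sin(2 * X)                # noise-free observations
Xs = np.linspace(-2, 2, 100)     # where we evaluate the sample path

# Draw ONE joint sample from the GP prior over train and test inputs
Xall = np.concatenate([X, Xs])
K = rbf(Xall, Xall) + 1e-8 * np.eye(len(Xall))
f_prior = rng.multivariate_normal(np.zeros(len(Xall)), K)
fX, fXs = f_prior[:len(X)], f_prior[len(X):]

# Matheron's rule: prior sample + data-dependent correction = posterior sample
Kxx_inv_resid = np.linalg.solve(rbf(X, X) + 1e-8 * np.eye(len(X)), y - fX)
f_post = fXs + rbf(Xs, X) @ Kxx_inv_resid

# Sanity check: at the observed inputs the posterior sample matches the data
f_post_train = fX + rbf(X, X) @ Kxx_inv_resid
```

Each draw of `f_post` is a smooth, deterministic function of its inputs, which is what lets the method reason about entire trajectories at once.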

Adding Safety

The beauty of pathwise sampling [3] is that it has a particle representation of the GP posterior, where individual trajectories are smooth, differentiable, and deterministic functions. This allows for the isolation of constraint-violating trajectories from safe ones. For safety, rejection sampling is performed on trajectories that violate safety constraints, leaving behind only the safe ones to train the policy. Additionally, soft constraint penalty terms are added to the reward function.
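A schematic of the safety filter, with a stand-in random-walk "trajectory" and a hypothetical ceiling constraint rather than the paper's actual GP rollouts:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trajectory(horizon=20):
    # Stand-in for a pathwise GP rollout: a smooth-ish random path
    return np.cumsum(0.1 * rng.standard_normal(horizon))

def is_safe(traj, ceiling=0.5):
    # Hypothetical constraint: the whole path must stay below a ceiling
    return np.all(traj < ceiling)

# Rejection sampling: discard constraint-violating trajectories,
# keep only the safe ones for policy training
trajectories = [sample_trajectory() for _ in range(200)]
safe = [t for t in trajectories if is_safe(t)]

# A soft-constraint penalty (here, total excursion above the ceiling)
# can additionally be subtracted from the reward
penalty = lambda t: np.maximum(t - 0.5, 0.0).sum()
```

Because pathwise trajectories are deterministic functions, the constraint check runs on the whole path in one shot, rather than state by state during sampling.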

Sim-Real Robot Experiments

To put this approach to the test, Zwane conducted experiments involving a 7-DoF robot arm in a simulated constrained reaching task, where the robot has to avoid colliding with a low ceiling. The method successfully learned a reaching policy that adhered to safety constraints, even when starting from random initial states.

In this constrained manipulation task, the robot is able to reach the goal (shown by the red sphere – bottom row) without colliding with the ceiling (blue – bottom row) using less than 100 seconds of data in simulation.

Summary

Sicelukwanda Zwane's research makes incremental advances in the safety of simulated trajectories by incorporating safety constraints while keeping the benefits of fully-Bayesian dynamics models such as GPs. This method promises to take MBRL out of simulated environments and make it more applicable to real-world settings. If you're interested in this work, we invite you to dive into the full paper, published at the recent IEEE CASE 2023 conference.

References

 

  1. M. P. Deisenroth and C. E. Rasmussen. PILCO: A Model-based and Data-efficient Approach to Policy Search. ICML, 2011.
  2. S. Kamthe and M. P. Deisenroth. Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control. AISTATS, 2018.
  3. J. T. Wilson, V. Borovitskiy, A. Terenin, P. Mostowsky, and M. P. Deisenroth. Pathwise Conditioning of Gaussian Processes. JMLR, 2021.

 

Student-Led Workshop – Distance-based Methods in Machine Learning – Review by Masha Naslidnyk

By sharon.betts, on 3 July 2023

We are delighted to announce the successful conclusion of our recent workshop on Distance-based Methods in Machine Learning. Held at the historic Bentham House on 27-28 June, the event brought together approximately 60 delegates, including leading experts and researchers from statistics and machine learning.

The workshop showcased a diverse range of speakers who shared their knowledge and insights on the theory and methodology behind machine learning approaches utilising kernel-based and Wasserstein distances. Topics covered included parameter estimation, generalised Bayes, hypothesis testing, optimal transport, optimization, and more.

The interactive sessions and engaging discussions created a vibrant learning environment, fostering networking opportunities and collaborations among participants. We extend our gratitude to the organising committee, speakers, and attendees for their valuable contributions to this successful event. Stay tuned for future updates on similar initiatives as we continue to explore the exciting possibilities offered by distance-based methods in machine learning.


Happy attendees at the Distance-based learning workshop

Conferences and Workshops – GOFCP, MLF & EDS 2022 – Recap of events by Antonin Schrab

By sharon.betts, on 16 November 2022

In September 2022 I had the amazing opportunity to participate in workshops in Rennes and in Sophia Antipolis, and in a doctoral symposium in Alicante!

In poster sessions and talks, I presented my work on Aggregated Kernel Tests, which covers three of my papers. The first is MMD Aggregated Two-Sample Test, which considers the two-sample problem: given samples from two distributions, detect whether they come from the same distribution or from different ones. The second is KSD Aggregated Goodness-of-fit Test, which considers the goodness-of-fit problem: given some samples, decide whether they come from a given model (with access to its density or score function). In the third, Efficient Aggregated Kernel Tests using Incomplete U-statistics, we propose computationally efficient tests for the two-sample, goodness-of-fit, and independence problems; the last consists of detecting dependence between the two components of paired samples. We tackle all three testing problems using kernel-based statistics, and in this setting the performance of the tests is known to depend heavily on the choice of kernel or kernel parameters (e.g. the bandwidth). We propose tests that aggregate over a collection of kernels while retaining test power; we theoretically prove the optimality of our tests under some regularity assumptions, and empirically show that they outperform other state-of-the-art kernel-based tests.
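As a rough illustration of the kind of statistic being aggregated (not the papers' actual procedure, which calibrates a per-kernel threshold with permutations and aggregates the resulting tests), here is a minimal unbiased MMD² estimate computed over a collection of Gaussian-kernel bandwidths:

```python
import numpy as np

def mmd2_unbiased(X, Y, bw):
    """Unbiased estimate of squared MMD with a Gaussian kernel of bandwidth bw."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    np.fill_diagonal(Kxx, 0.0)  # drop diagonal terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1))        # samples from P
Y = rng.standard_normal((200, 1)) + 1.0  # samples from Q (shifted mean)

# Aggregate over a collection of bandwidths instead of picking a single one
bandwidths = [0.25, 0.5, 1.0, 2.0]
stats = {bw: mmd2_unbiased(X, Y, bw) for bw in bandwidths}
```

The point of aggregation is that no single bandwidth is best for every alternative; combining tests across the collection retains power without a priori kernel selection.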

I started the month of September by participating in GOFCP 2022, the 5th Workshop on Goodness-of-Fit, Change-Point and related problems, from 2nd to 4th September in ENSAI in Rennes (France). It was extremely interesting to hear about the latest research in this very specific research field which covers exactly the topics I had been working on since the start of my PhD.

I then went to EURECOM in Sophia Antipolis (France) for MLF 2022, the ELISE Theory Workshop on Machine Learning Fundamentals, from 5th to 7th September. Talks and poster sessions covered the theory of kernel methods, hypothesis testing, partial differential equations, optimisation, Gaussian processes, explainability and AI safety.

Finally, I participated in EDS 2022, the ELLIS Doctoral Symposium 2022, hosted by ELLIS Alicante at the University of Alicante in Spain from 19th to 23rd September. It was an amazing experience to meet so many other PhD students working on diverse topics in Machine Learning. I especially enjoyed the numerous poster sessions, which allowed me to engage with other students and discuss their current research!

I am extremely grateful to Valentin Patilea, Motonobu Kanagawa and Aditya Gulati for the respective invitations, and to my CDT (UCL CDT in Foundational AI, with funding from UKRI) which allowed me to participate in those workshops and the symposium!

CDT Students shine at poster showcase event

By sharon.betts, on 4 November 2022

Tuesday 1st November was a busy day at the CDT and UCL Centre for Artificial Intelligence with our joint UKRI CDT poster showcase and AI demo event. Together with the UKRI CDT in AI-Enabled Healthcare we put on an event featuring posters, demos, AI art and robots.


Prof David Barber presenting the latest news on the CDT

The afternoon began with presentations by the CDT centre directors Prof David Barber and Prof Paul Taylor, as well as our industry sponsor Ulrich Paquet from DeepMind. In attendance were students, academics and industry partners, keen to understand what we have been doing and where our research will take us in the future.


PhD Candidate Jakob Zeitler provides a demo on screen

We had approximately 40 posters on display, with a further 19 demonstrations of AI by a variety of groups, from Vision to Natural Language Processing. Engagement with the poster presenters was high across the board, and the event was a wonderful opportunity for our students to talk with others about the work they have undertaken over the last few years.


PhD candidate Reuben Adams presents his poster to a crowd of attendees

We were honoured to have the Provost in attendance to witness just how vibrant and stimulating our centres are as part of a dynamic and successful Computer Science department.


Provost Dr Michael Spence unveils the Amedeo Modigliani painting

The UCL Centre for Artificial Intelligence has been donated a rare AI-generated 3D painting of an Amedeo Modigliani work, which started as a Master's and then PhD project for Dr Anthony Bouchard and Dr George Cann, and will be displayed at the AI Centre for all to see.

The day ended with a robot display in the Function Space, showcasing the quadruped robots that our students are working with both at the AI Centre and the soon-to-be-opened UCL East.


Two quadruped robots being demonstrated to the crowd

It was wonderful to witness all the different ways in which AI is being applied and developed to help solve some of society's greatest needs, and to have the opportunity to share the work of our students with a wider audience.

With thanks to those who attended, our students, director David Barber, AI Centre manager Sarah Bentley and the TSG team for their time, patience and support in helping to make this a hugely successful event.

Getting the Most Out of Presenting Your Research to Non-specialists, Reflections on the 2022 UCLIC PhD Showcase By Zak Morgan, PhD Student

By sharon.betts, on 4 July 2022

Zak Morgan Cumberland Lodge poster

One of the most challenging parts of starting my PhD journey so far has been adapting my written work from highly technical forms, such as academic papers, for a broader audience, such as those at multi-disciplinary research conferences. The formal definition of this skill sits in domain D of the Researcher Development Framework: D2 "Communication and Dissemination" and D3 "Engagement and Impact".

My first poster presentation was given at Cumberland Lodge, mainly to fellow students in the FAI CDT, but also to other AI-focused CDTs from around the country. The second, referenced here, was to other PhD students and academics in the UCL Interaction Centre (UCLIC), a multi-disciplinary group focused on all avenues of technology and how we interact with it, combining the fields of computer science and psychology.

One of the presentations at the showcase that I found interesting was by Leon Reicherts, who presented his paper "Do Make Me Think!". It is about using conversational user interfaces to ask the user questions, in order to enable "deeper" thinking and more thorough learning. I like to think of this as an interactive version of rubber-ducking.

I think applying a similar concept to these talks is beneficial: ideally, you inform the audience with enough detail about your project that the questions fed back to you at the end provoke this "deeper thinking". The difference in audience between the two presentations was crucial to getting the first part of this technique right.

These presentations have been very beneficial to me in generating new directions for my research, showing me how well I communicate my research, and prompting my peers to send me relevant research papers they think will be useful. I can't stress enough how much this last point has helped me keep up to date and write my literature reviews; search engines can't hold a candle to a good network of researchers!

This work was supported by the Royal Academy of Engineering Chairs in Emerging Technology Scheme (CiET1718/14)

The simplest model debiasing approach by Samuel Cohen

By sharon.betts, on 13 June 2022

Machine learning is used in most industries to automate decision processes, e.g. in banking, insurance and HR. Historical data is used to train models, and biases inherent in this data are replicated in the behaviour of the trained models. In this blog post, we look at one of the simplest and most intuitive model-centred methods for making the predictions of AI models fairer.

The data

Figure 1: Proportion of low- and high-income males and females in the US census dataset‍

We are interested in an income prediction task based on real US census data. We can see in Figure 1 that there is a 19% gap between the proportions of high-income males and females.

 

Figure 2: Proportion of low- and high-income males and females as predicted by an ML model

When we train an ML model (logistic regression) on this data and allow it to make decisions on a hold-out dataset, it allocates 18% more high-income predictions to males than to females, replicating the unbalanced patterns in the training data.

Our aim is to train a model that will provide balanced high/low-income predictions, while preserving the accuracy (the proportion of correctly classified individuals) as much as possible.

A simple debiasing solution

Figure 3: Diagram illustrating the simple debiasing solution.

Most machine learning models not only provide binary high/low-income predictions, but also assign probabilities for each option. Typically, we set the threshold for predicting high- or low-income at the 50% level, meaning that if your probability of being high-income is estimated to be over 50%, then we assign a high-income prediction.

In order to force the model to allocate balanced high- and low-income predictions, we can use separate thresholds for males and females. To find these thresholds, we first look at the overall proportion of high-income individuals in the full population: it is 24%. We then need to find a separate probability threshold for each group that leads to 24% of its members being predicted high-income.

Figure 4: the following thresholds allow both subgroups to have 24% of their members predicted as high-income

First, we train a logistic regression on the training dataset. Second, we use this model to assign each individual in the training data a probability of being high-income. We then find the probability thresholds for males and females such that 24% of each group will be predicted high-income (see Figure 4).

As can be seen in the figure, females with at least a 13% probability of having a high income are predicted high-income, while males need at least a 60% probability.
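A minimal sketch of this post-processing, using hypothetical model scores rather than the real census data: each group's threshold is the quantile of its predicted probabilities that leaves 24% of the group above it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted probabilities of high income, one array per group
p_male = rng.beta(2, 4, size=1000)      # stand-in model scores for males
p_female = rng.beta(1.2, 6, size=1000)  # stand-in model scores for females

target_rate = 0.24  # overall fraction of high-income individuals

# Pick each group's threshold so that 24% of that group falls above it
thr_male = np.quantile(p_male, 1 - target_rate)
thr_female = np.quantile(p_female, 1 - target_rate)

pred_male = p_male >= thr_male
pred_female = p_female >= thr_female
```

Because the thresholds are fit per group, both groups end up with the same proportion of high-income predictions regardless of how their score distributions differ.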

We observe in Figure 5 that, after this decision post-processing, we provide the same proportion of high-income predictions to each group, while the overall accuracy decreases by only 2%.

Figure 5: By leveraging threshold-corrected income predictions, the model predicts the same proportion of low/high income individuals in each group while losing only 2% accuracy.

Get the code to run this experiment yourself!

Surveying Generalisation in Reinforcement Learning

By Sharon C Betts, on 19 January 2022

By Robert Kirk, PhD Candidate

Reinforcement Learning (RL) could be used in a range of applications such as autonomous vehicles and robotics, but to fulfil this potential we need RL algorithms that can be used in the real world. Reality is varied, non-stationary and open-ended; to handle this, algorithms need to be robust to variation in their environments and able to transfer and adapt to unseen (but similar) environments during deployment. Generalisation in RL is all about creating methods that can tackle these difficulties, challenging a common assumption in previous RL research that the training and testing environments are identical.

However, reading RL generalisation research can be challenging, as there is confusion about the exact problem being tackled and the use of terminology. This is because generalisation refers to a class of problems within RL, rather than a specific problem. Claiming to improve “generalisation” without being more specific about the type of generalisation that is being improved is underspecified. In fact, it seems unlikely we could improve all types of generalisation with a single method, as an analogy of the No Free Lunch theorem may apply.

To address this confusion, we’ve written a survey and critical review of the field of generalisation in RL. We formally describe the class of generalisation problems and use this formalism to discuss benchmarks for generalisation as well as methods. Given the field is so young, there’s also a lot of future directions to explore, and we highlight ones we think are important.

To find out more, check out the extended blogpost here!

Simultaneous Localisation and Mapping using sensors

By Sharon C Betts, on 9 December 2021

By Jingwen Wang – PhD Candidate Cohort 1

Simultaneous Localisation and Mapping is the process of reconstructing the surrounding environment using a sensor (camera, LiDAR, radar, etc) and estimating the ego-motion of the sensor at the same time. It is widely used in many applications such as augmented reality (AR), autonomous driving and robot navigation.

Traditional SLAM algorithms can build very high-quality geometric maps of room-scale and street-scale environments, with camera trajectory estimates that drift by less than 1%. However, a purely geometric map is not enough for many applications. To enable more advanced interaction, we need semantic-level and object-level understanding of the scene. That's why we want to build a SLAM system that can produce a map of 3D objects.

Example of dense (left) and sparse (right) map reconstructed from ElasticFusion and stereo-DSO

Prior work on object-level SLAM has several limitations. It either (1) requires a pre-scanned CAD model database, and thus cannot generalise to previously unseen objects; (2) performs online dense surface reconstruction, resulting in incomplete partial reconstructions; or (3) models objects using simple geometric shapes, sacrificing detail. So the question is: can we achieve all three goals at the same time?

Issues with prior art

 

In DSP-SLAM, we solve this problem by leveraging a shape prior pre-trained on a large dataset of known shapes within a category, and formulating object reconstruction as an iterative optimisation problem: given an initial coarse estimate of the shape code and object pose, we iteratively refine the shape and pose so that they best fit the current observation.

We solve the optimisation using the Gauss-Newton method with analytical Jacobians to speed up the process, so that it can be extended to a full object-level SLAM system. We take advantage of multi-view observations, iteratively refining object poses and maintaining a globally consistent joint map of objects and points.
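As an illustration of the optimisation machinery only (a toy 2-D point-alignment problem, not DSP-SLAM's actual shape-code and pose optimisation), a Gauss-Newton loop with an analytical Jacobian looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy alignment: recover a 2-D rotation angle and translation that map
# source points onto observed points (a stand-in for pose refinement).
theta_true, t_true = 0.4, np.array([0.3, -0.2])
P = rng.standard_normal((50, 2))
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
Q = P @ R_true.T + t_true

x = np.zeros(3)  # parameters [theta, tx, ty], coarse initial estimate
for _ in range(15):
    c, s = np.cos(x[0]), np.sin(x[0])
    R = np.array([[c, -s], [s, c]])
    r = (P @ R.T + x[1:]) - Q            # residuals, shape (50, 2)
    # Analytical Jacobian of each residual w.r.t. [theta, tx, ty]
    dR = np.array([[-s, -c], [c, -s]])   # derivative of R w.r.t. theta
    J = np.zeros((100, 3))
    J[:, 0] = (P @ dR.T).ravel()
    J[:, 1:] = np.tile(np.eye(2), (50, 1))
    # Gauss-Newton step: solve the linearised normal equations
    x -= np.linalg.solve(J.T @ J, J.T @ r.ravel())
```

Supplying the Jacobian in closed form, rather than via finite differences or autodiff, is what makes each refinement step cheap enough to run inside a real-time SLAM loop.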

DSP-SLAM teaser

project page: https://jingwenwang95.github.io/dsp-slam/

code: https://github.com/JingwenWang95/DSP-SLAM

Learning in High Dimension Always Amounts to Extrapolation

By Sharon C Betts, on 24 November 2021

By Laura Ruis, PhD Candidate

Recently, Randall Balestriero, Jerome Pesenti, and Yann LeCun dropped a paper on arXiv that clarifies certain terms that are often used when people talk about generalization in machine learning. In machine learning, we often formulate a differentiable objective function for our problem that we can optimize with gradient-based methods. We tune model parameters given data such that this objective function is optimized. However, what differentiates machine learning from optimization is that we do not just want our model to optimize the objective function for the data we used to learn the parameters, called the training data, we also want it to generalize to unseen data points. Modern machine learning methods have become very good at this for lots of applications like speech recognition, machine translation, and image classification. However, some people claim (here, here, and here) that these methods are simply interpolating the training data they see during training, and that they would fail when classifying a new data point requires extrapolation. The paper by Balestriero et al. shows that this is not the case for a specific definition of interpolation and extrapolation. They come to the following conclusion:

We shouldn’t use interpolation/extrapolation in the way the terms are defined in the paper when talking about generalization, because for high dimensional data deep learning models always have to extrapolate, regardless of the dimension of the underlying data manifold. 

In this post I'll attempt to shed some light on this conclusion. It's drawn in part from the first figure in the paper, which we will reproduce from scratch; in the process, we'll encounter all the background material needed to understand the paper, and I'll go through all the code and maths required to reproduce it. Below you can see the figure I'm talking about, which without any explanation won't illuminate much yet. If you want to understand it, read on!

 

At the end of this post, we will know more about the following terms:

  • The curse of dimensionality
  • Convex hull
  • Ambient dimension
  • Intrinsic dimension / data manifold dimension
  • Interpolation / extrapolation
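To preview the paper's core observation, here is a small sketch assuming the paper's definition of interpolation (a point lying inside the convex hull of the training set). We test hull membership with a feasibility linear program and watch the interpolation rate collapse as the ambient dimension grows; the sample sizes are chosen only to keep the example fast.

```python
import numpy as np
from scipy.optimize import linprog

def in_hull(points, x):
    """Is x a convex combination of the rows of `points`? (feasibility LP)"""
    n = len(points)
    # Constraints: sum_i l_i * p_i = x  and  sum_i l_i = 1, with l_i >= 0
    A_eq = np.vstack([points.T, np.ones(n)])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.success

rng = np.random.default_rng(0)
rates = {}
for d in (2, 8):
    train = rng.standard_normal((100, d))   # "training set" in ambient dim d
    tests = rng.standard_normal((50, d))    # new points from the same distribution
    rates[d] = np.mean([in_hull(train, x) for x in tests])
# Interpolation becomes vanishingly rare as d grows: rates[8] << rates[2]
```

This is the sense in which, for high-dimensional data, models are essentially always extrapolating under this definition, which is exactly why the paper argues the interpolation/extrapolation dichotomy is a poor proxy for generalization.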

Click here to read full post.