
UKRI Centre for Doctoral Training in Foundational AI



2nd Bayes-Duality Workshop: Daniel Augusto de Souza

By Claire Hudson, on 15 December 2024

From June 12th to 21st of 2024, I had the pleasure of attending and presenting my work as a poster at the 2nd Bayes-Duality Workshop 2024, organized by the Bayes-Duality team, a Japanese-French joint research project. The workshop was hosted at the RIKEN Center for Advanced Intelligence Project (AIP) in Nihonbashi, Chūō City, Tokyo.

Nihonbashi is one of the oldest districts of Tokyo, a lively business district where finance and general office workers gather, neighbouring the Imperial Palace, where the Japanese monarch and his family live. Though I felt somewhat out of place in this non-academic environment, the two-week workshop offered invited talks, panel discussions between speakers, a showcase of work done by the Bayes-Duality team, and a poster session.

As stated in the program, the workshop focused on the development of AI that learns adaptively, robustly, and continuously, like humans. A common theme in the presentations by collaborators of the Bayes-Duality project was the exploration of mathematical connections between the training data examples and the model parameters of these machine learning systems. This connection is highly desirable because of a difference in complexity: current state-of-the-art models have a vast number of uninterpretable parameters, while the data examples can usually still be understood by human experts.

Thanks to the length of the workshop, the invited talks could cover an extensive range of topics. Such breadth is hard to convey in a post like this, and, most remarkably, none of the talks felt out of place. They ranged from expected topics, such as the tutorial on the Bayesian learning rule (one of the papers that draws together the connections between data-parameter duality and convex duality), to more general topics in uncertainty quantification, such as Eugene Ndiaye's tutorial and presentation on conformal prediction, as well as continual learning and the identifiability of parameters in neural network models.

The poster session included works mentioned in the invited talks and others from students like me. I chose to present my progress on "Interpretable deep Gaussian processes for geospatial tasks"; in this project, I analyse the interpretability of three commonly used deep Gaussian process architectures, try to understand what practitioners really mean by "interpretable", and suggest an alternative to the commonly used metric. I felt this was the right work to present to this workshop's audience, given their familiarity with Bayesian deep learning and their interest in understanding the parameters of these models. As the only student from UCL, I was happy to showcase our work and connect with researchers from institutions all over the world, with attendees from the US, Asia, and Europe.

Student presentation – Alex Hawkins-Hooker at ISMB

By sharon.betts, on 4 October 2023

In July of 2023, our Cohort 2 student Alex Hawkins-Hooker presented his work at the Machine Learning in Computational and Systems Biology Track at ISMB, which is one of the leading computational biology conferences.
The full paper describing this work, "Getting personal with epigenetics: towards individual-specific epigenomic imputation with machine learning", has since been published in Nature Communications: https://www.nature.com/articles/s41467-023-40211-2.
The work was started before Alex came to UCL but completed during his PhD, and was carried out jointly with collaborators at the Max Planck Institute for Intelligent Systems in Tübingen and the University of Dundee.
If you are interested in reading more publications by our outstanding students, do check out our publications page on our website.

“Safe Trajectory Sampling in Model-based Reinforcement Learning for Robotic Systems” By Sicelukwanda Zwane

By sharon.betts, on 29 September 2023

In the exciting realm of Model-based Reinforcement Learning (MBRL), researchers are constantly pushing the boundaries of what robots can learn to achieve when given access to an internal model of the environment. One key challenge in this field is ensuring that robots can perform tasks safely and reliably, especially in situations where they lack prior data or knowledge about the environment. That’s where the work of Sicelukwanda Zwane comes into play.

Background

In MBRL, robots use small sets of data to learn a dynamics model. This model acts like a crystal ball, predicting how the system will respond to a given sequence of actions. With MBRL, we can train policies on simulated trajectories sampled from the dynamics model instead of generating them by executing each action on the actual system, a process that can take extremely long periods of time on a physical robot and possibly cause wear and tear.

One of the tools often used in MBRL is the Gaussian process (GP) dynamics model. GPs are fully Bayesian models that not only model the system dynamics but also account for the uncertainty in state observations. Additionally, they are flexible and able to learn without making strong assumptions about the underlying system dynamics [1].
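To make this concrete, here is a minimal sketch of a GP dynamics model in plain numpy: exact GP regression from (state, action) inputs to one output dimension of the next state. This is an illustration only, not the paper's implementation; the class name and interface are invented for this post, and a real setup would use one GP (or output dimension) per state variable and learned hyperparameters.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between rows of A (N, D) and B (M, D)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

class GPDynamicsModel:
    """Exact GP regression from (state, action) pairs to a next-state value."""

    def __init__(self, noise=1e-2):
        self.noise = noise  # observation noise variance

    def fit(self, X, y):
        # X: (N, state_dim + action_dim) inputs, y: (N,) one output dimension
        self.X, self.y = X, y
        K = rbf_kernel(X, X) + self.noise * np.eye(len(X))
        self.K_inv = np.linalg.inv(K)
        return self

    def predict(self, Xs):
        # Posterior mean and marginal variance at test inputs Xs (M, D)
        Ks = rbf_kernel(Xs, self.X)
        mean = Ks @ self.K_inv @ self.y
        var = np.diag(rbf_kernel(Xs, Xs) - Ks @ self.K_inv @ Ks.T)
        return mean, var
```

The predictive variance is what makes GPs attractive here: far from the training data it grows back toward the prior, signalling that the model does not know how the system behaves there.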

The Challenge of Learning Safely

When we train robots to perform tasks, it is not enough to just predict what will happen; we need to do it safely. As with most model classes used in MBRL, GPs don't naturally incorporate safety constraints, which means they may produce unsafe or infeasible trajectories. This is particularly true during the early stages of learning: when the model hasn't seen much data, it can produce unsafe and seemingly random trajectories.

For a 7 degree-of-freedom (DoF) manipulator robot, bad trajectories may contain self-collisions.


Distributional Trajectory Sampling

In standard GP dynamics models, the posterior is represented in distributional form, via its parameters: the mean vector and covariance matrix. In this form, it is difficult to reason about the safety of entire trajectories, because trajectories are generated through iterative random sampling. Furthermore, this kind of trajectory sampling is limited to cases where the intermediate state marginal distributions are Gaussian.
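The iterative sampling scheme described above can be sketched in a few lines. The `predict` interface below is hypothetical, standing in for any one-step GP posterior; the point is that each step draws a fresh Gaussian sample, so the trajectory is only ever known one marginal at a time, which is what makes whole-trajectory safety reasoning awkward.

```python
import numpy as np

def rollout_distributional(predict, s0, actions, rng):
    """Sample a trajectory one Gaussian step at a time.

    predict(s, a) returns the posterior mean and variance of the next
    state (hypothetical one-step GP interface, for illustration).
    Each step draws an independent sample from the current marginal.
    """
    states = [s0]
    for a in actions:
        mean, var = predict(states[-1], a)
        states.append(rng.normal(mean, np.sqrt(var)))
    return np.array(states)
```

Because every call re-randomises, two rollouts from the same start state generally diverge, and there is no single smooth function one could inspect or differentiate to check a constraint along the whole horizon.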

Pathwise Trajectory Sampling

Zwane uses an innovative alternative called "pathwise sampling" [3]. This approach draws samples from GP posteriors using an efficient method known as Matheron's rule. The result is a set of smooth, deterministic trajectory functions that are not confined to Gaussian marginals and are temporally correlated.

Adding Safety

The beauty of pathwise sampling [3] is that it gives a particle representation of the GP posterior in which individual trajectories are smooth, differentiable, deterministic functions. This makes it possible to separate constraint-violating trajectories from safe ones: rejection sampling discards trajectories that violate the safety constraints, leaving only the safe ones to train the policy. Additionally, soft-constraint penalty terms are added to the reward function.
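Both safety mechanisms are simple to state in code. The sketch below shows a generic version of each, with invented function names and a placeholder constraint; the paper's actual constraints (e.g. joint limits, self-collision checks) would replace the `is_safe` predicate.

```python
import numpy as np

def filter_safe(trajectories, is_safe):
    """Rejection step: keep only trajectories that are safe at every state.

    trajectories: array of shape (num_samples, horizon, state_dim);
    is_safe: vectorised predicate on states (e.g. "below the ceiling").
    """
    mask = np.array([bool(is_safe(traj).all()) for traj in trajectories])
    return trajectories[mask]

def penalised_reward(reward, states, is_safe, penalty=10.0):
    """Soft-constraint step: subtract a penalty per violating state."""
    violations = int((~is_safe(states)).sum())
    return reward - penalty * violations
```

Rejection keeps the policy's training data strictly feasible, while the penalty term still discourages near-violations in the trajectories that survive; the two are complementary rather than redundant.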

Sim-Real Robot Experiments

To put this approach to the test, Zwane conducted experiments with a 7-DoF robot arm on a simulated constrained reaching task in which the robot must avoid colliding with a low ceiling. The method successfully learned a reaching policy that adhered to the safety constraints, even when starting from random initial states.

In this constrained manipulation task, the robot is able to reach the goal (shown by the red sphere – bottom row) without colliding with the ceiling (blue – bottom row) using less than 100 seconds of data in simulation.

Summary

Sicelukwanda Zwane’s research makes incremental advances in the safety of simulated trajectories by incorporating safety constraints while keeping the benefits of fully Bayesian dynamics models such as GPs. This method promises to take MBRL out of simulated environments and make it more applicable to real-world settings. If you’re interested in this work, we invite you to dive into the full paper, published at the recent IEEE CASE 2023 conference.

References


  1. M. P. Deisenroth and C. E. Rasmussen. PILCO: A Model-based and Data-efficient Approach to Policy Search. ICML, 2011.
  2. S. Kamthe and M. P. Deisenroth. Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control. AISTATS, 2018.
  3. J. T. Wilson, V. Borovitskiy, A. Terenin, P. Mostowsky, and M. P. Deisenroth. Pathwise Conditioning of Gaussian Processes. JMLR, 2021.


Welcome to our blog!

By Sharon C Betts, on 18 November 2021

Welcome to the blog for the UKRI Centre for Doctoral Training in Foundational Artificial Intelligence.

Our aim with this blog is to inform the wider world about our research, ourselves, and our ambitions in helping to create new algorithms and foundational practices in artificial intelligence, in support of the UK National Strategy in artificial intelligence.

Our CDT is one of 16 UKRI-funded CDTs focusing on artificial intelligence and building on the UK’s history of excellence in machine learning.

The UKRI CDT in Foundational AI sits within UCL’s Centre for Artificial Intelligence in the heart of London and helps bring together some of the best minds in the fields of machine learning, natural language processing, robotics, deep learning, and reinforcement learning (and so much more!).

We look forward to letting you know more about us and what we are doing to help forward the research in artificial intelligence and create new frontiers in research.