2nd Bayes-Duality Workshop: Daniel Augusto de Souza

By Claire Hudson, on 15 December 2024

From June 12th to 21st, 2024, I had the pleasure of attending and presenting a poster at the 2nd Bayes-Duality Workshop, organized by the Bayes Duality team, a Japan-France joint research project. The workshop was hosted at RIKEN's Centre for Advanced Intelligence Project (AIP) in Nihonbashi, Chūō City, Tokyo.

Nihonbashi is one of the oldest districts of Tokyo: a lively business district where finance and office workers gather, neighbouring the Imperial Palace, where the Japanese monarch and his family live. Despite the somewhat non-academic surroundings, the two-week workshop offered invited talks, panel discussions between speakers, showcases of work by the Bayes Duality team, and a poster session.

As stated in the program, the workshop focused on the development of AI that learns adaptively, robustly, and continuously, like humans. A common theme in the presentations by Bayes Duality collaborators was exploring the mathematical connections between the training data examples and the model parameters of these machine learning systems. This connection is highly desirable because of a stark difference in complexity: current state-of-the-art models have a vast number of uninterpretable parameters, while data examples can usually still be understood by human experts.
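To make this data-parameter connection concrete, here is one classical example of my own choosing, not taken from any particular talk: in kernel methods, the representer theorem guarantees that the learned predictor can be written entirely in terms of the training examples,

$$f(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i),$$

so each coefficient $\alpha_i$ ties the model's behaviour back to a single, human-inspectable training point $x_i$ rather than to an opaque parameter vector.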

Thanks to the length of the workshop, the invited talks could cover an extensive range of topics. Such breadth is hard to capture in a post like this and, remarkably, none of the talks felt out of place. They ranged from expected topics, such as the tutorial on the Bayesian learning rule, one of the works that drew together data-parameter duality and convex duality, to more general topics in uncertainty quantification, such as Eugene Ndiaye's tutorial and presentation on conformal prediction, along with continual learning and the identifiability of parameters in neural network models.
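For readers unfamiliar with conformal prediction, the sketch below shows split conformal prediction for regression, the simplest variant; this is my own illustration under standard assumptions, not code from the workshop:

```python
import numpy as np

def split_conformal_halfwidth(residuals, alpha=0.1):
    """Half-width of a split conformal prediction interval.

    residuals: absolute errors |y_i - f(x_i)| on a held-out calibration set.
    alpha: miscoverage level; intervals cover y with probability >= 1 - alpha.
    """
    n = len(residuals)
    # Finite-sample-corrected quantile level, clipped to 1.0 for small n.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(residuals, level, method="higher")

# Usage: fit any model f on a training split, then calibrate on fresh data.
rng = np.random.default_rng(0)
x_cal = rng.uniform(-3, 3, size=500)
y_cal = np.sin(x_cal) + rng.normal(scale=0.2, size=500)
f = np.sin  # stand-in for a fitted regression model
q = split_conformal_halfwidth(np.abs(y_cal - f(x_cal)), alpha=0.1)
# For a new input x, the interval [f(x) - q, f(x) + q] covers the true y
# with roughly 90% probability, regardless of the model used for f.
```

The appeal, and the reason it fits a workshop on robust uncertainty quantification, is that this coverage guarantee holds for any underlying model, as long as the calibration data are exchangeable with the test data.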

The poster session included works mentioned in the invited talks as well as contributions from students like me. I chose to present my progress on “Interpretable deep Gaussian processes for geospatial tasks”; in this project, I analyse the interpretability of three commonly used deep Gaussian process architectures, try to understand what practitioners really mean by “interpretable”, and suggest a different metric from the one commonly used. I felt this was the right work to present to this workshop's audience, given their familiarity with Bayesian deep learning and their interest in understanding the parameters of these models. As the only student from UCL, I was happy to showcase our work and connect with researchers from institutions all over the world, with attendees from the US, Asia, and Europe.
