UKRI Centre for Doctoral Training in Foundational AI

Archive for the 'workshop' Category

Workshop Report: Mathematical Imaging and Surface Processing at the MFO: Romy Williamson

By Claire Hudson, on 26 February 2025

1 Introduction
The Mathematisches Forschungsinstitut Oberwolfach — or MFO — is a mathematical institute located deep in the hills of the Black Forest, far away from the distractions of civilisation. It was built by the Nazis in 1944, in a deliberately secluded location, so as to be an unlikely target for Allied bombing. This is ideal when you want to concentrate peacefully on maths.
I attended the workshop on Mathematical Imaging and Surface Processing, from 2nd to 7th February 2025. This was organised by Mirela Ben Chen, an inspiring figure in the Geometry Processing community, along with Antonin Chambolle and Benedikt Wirth. Throughout the week, we heard a variety of talks from fields including optimal transport, inverse problems, surface representation, rendering, video generation, fluid simulation and more. We heard from researchers whose work is strongly rooted in classical theory, as well as many researchers who are either using, or investigating the properties of, machine learning techniques such as diffusion models.

Presenting Spherical Neural Surfaces.

2 Contribution
My personal contribution was to present my recently-accepted paper, Neural Geometry Processing via Spherical Neural Surfaces. This paper fitted in very well with the Surface Processing side of the workshop, and I noticed links with several of the other talks, particularly:
• Xavier Pennec (Flag Spaces and Geometric Statistics) — this had relevance to the part of my project where I find eigenfunctions of the Laplace-Beltrami operator.
• Nicholas Sharp (SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes) — Nick’s discussion of various surface representations led very nicely into my presentation of an alternative neural representation.
Please see the project webpage for more information.

Group photo, in front of Schwarzwald trees

3 Interesting People and Talks
These are the talks that stuck in my mind or inspired me the most. Listening to these talks has helped me to figure out what I like the most in terms of research topics and presentation styles, and to take cues from this to steer my own research direction and presentation style.
• Nicholas Sharp (SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes): excellent presentation style. He is also very engaging and enthusiastic in conversation. I was excited that he talked about halfedge meshes, and he mentioned properties of orbits of subalgebras of halfedge meshes, which I had figured out myself last summer, not realising it was an established thing. I am motivated and inspired to see someone who is very skilled and knowledgeable at classical geometry processing techniques, combining these thoughtfully (not blindly) with neural networks, to create algorithms that are more robust than previous neural techniques, with improved performance over classical methods.
• Albert Chern (Fluid Dynamics with Sub-Riemannian Geometry): I am amazed at his ability to produce such incredible results with such an aesthetic method (no ugly performance tricks and no magic neural net, etc.). I plan to learn some Riemannian Geometry so I can understand more. I played a geometric game, Blokus Trigon, with Albert Chern and others.(1) Albert Chern also likes the Nicomachus Identity (related to the year 2025) and we have both independently tried to find a 4-dimensional visual proof of it, so far without success.
(1) Why is it triangle, and pentagon, not trigon and pentangle?
• Florine Hartwig (Optimal Motion from Shape Change): I was really excited to see this talk, because Niloy and I had seen the original talk eighteen months ago at the Obergurgl Geometry Processing Workshop in Innsbruck, Austria. The original paper provided an elegant framework to predict the global motion of a deformable body in space, given its shape change. The follow-up paper explored how to optimally deform a shape so that its global motion matches a target motion as closely as possible.
They were able to do the optimisation quite elegantly within the framework. I enjoyed talking to Florine. We had some things in common, such as a background in pure maths, and rowing. Her work also relates to Riemannian geometry and I want to understand more.
• Robert Beinert (A Geometric Optimal Transport Framework for 3D Shape Interpolation): I liked this talk very much because I have spent some time in my research thinking about surface correspondence and shape interpolation, from the perspective of neural surfaces, but I had never considered it from an optimal transport point of view. I really liked the method and I was impressed that it worked at all, but I am not entirely convinced about its framing/applications, because it has no semantic knowledge of the shapes, so it can easily go wrong when the shapes have different-enough proportions. I spent some time talking to Robert and Simon Schwarz during the afternoon excursion, and they explained to me the meaning of Habilitation in Germany.
• Mark Gillespie (harmonic functions rendering): quite an interesting talk. It was a ‘walk on spheres’ type of method for rendering implicitly defined surfaces, but the assumption is that the function is harmonic, not necessarily an SDF. This fits into the category of ‘questioning a common assumption in the field’. I’m not totally sure how often this is practically useful, but it’s a nice problem setup.
• Nicole Feng (heat method geodesics): this is a cool paper. It constructs geodesic distance robustly, by ‘diffusing’ oriented normals for a short time and solving a Poisson equation. I also noticed she dealt with it quite well when some audience members distracted from the talk a little by being pedantic about what it even means to have a signed distance in some odd cases. She joked about it slightly and moved on; it didn’t put her off that they didn’t like one of the examples.
• Zorah Lahner (nuclear fusion) This was a really good example of a talk that everyone found interesting and engaging even though it didn’t have any actual results. I think that is a difficult kind of talk to give and it leaves the speaker vulnerable to a lot of tricky questions. Maybe people liked it partly because the lack of answers made it a good discussion point.
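The heat-method pipeline mentioned in Nicole Feng's talk (diffuse for a short time, keep only the direction of the gradient, then recover distance with a Poisson solve) is easy to sketch in one dimension, where every step has a closed form. This is the classic unsigned heat method rather than the signed variant from the talk, and all names and parameters below are illustrative:

```python
import numpy as np

# Toy 1D sketch of the heat method for geodesic distance.
# On a real mesh each step uses discrete Laplace operators; here the
# closed forms make the structure of the algorithm easy to see.
N, src, t = 101, 50, 2.0
x = np.arange(N, dtype=float)

# Step 1: diffuse heat from a point source for a short time t.
# Heat from a delta source is a narrow Gaussian in closed form.
u = np.exp(-(x - src) ** 2 / (4.0 * t))

# Step 2: keep only the direction of the negated gradient, which points
# away from the source (in 1D, normalising a gradient is just its sign).
X = -np.sign(np.gradient(u))

# Step 3: integrate X back into a distance function with phi(src) = 0.
# On a mesh this step becomes the Poisson solve div(grad phi) = div(X).
phi = np.zeros(N)
phi[src + 1:] = np.cumsum(X[src + 1:])          # walk right of the source
phi[:src] = np.cumsum(-X[:src][::-1])[::-1]     # walk left of the source

# phi now equals the geodesic distance |x - src| on the line.
```

The same three steps carry over to curved surfaces, which is where the method earns its robustness.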

4 Benefit
This workshop has provided me with great academic benefit. I have been exposed to new topics and I now have a better idea where I need to do further reading to improve my background knowledge. These areas include Riemannian geometry, optimal transport, and diffusion. Just as importantly, I have worked on soft skills such as presenting and networking.
The MFO was a very calm place to think. I enjoyed looking at the maths books in the library and playing the instruments in the music room. I need to be calm in order to think clearly and be creative, so I appreciated the effort the MFO has made to create such a conducive environment.

Standing next to the Boy Surface, an immersion of the Real Projective Plane into R³. The particular immersion depicted by the sculpture also minimises the Willmore functional, which measures elastic energy.

Conjoined Stellated Icosahedra.

PhD Researchers on the move: Journey to Vancouver to attend NeurIPS

By Claire Hudson, on 31 January 2025

Last month, CDT students travelled to Vancouver to present their work at NeurIPS, one of the largest AI conferences. The schedule was packed, with 6 conference papers and 3 workshop papers presented. The CDT is proud to sponsor and support our PhD students as they present their research at the conference, showcasing their hard work and academic excellence on an international stage!
Day 1, Tuesday: No papers were presented, but the expo, tutorials, and careers fair kept everyone occupied.

Day 2, Wednesday:

William Bankes
William and his supervisor presented their work on robust data downsampling. Naive approaches to training on a subset of data can cause problems when classes in the dataset are imbalanced, with rare classes becoming even rarer. As a direct result, the model’s ability to predict on these rare classes decreases. To address this, they proposed an algorithm called REDUCR, which downsamples data in a manner that preserves the performance of minority classes within the dataset. They show REDUCR works across a range of text and image problems, achieving state-of-the-art results. The work is available here.

Jake Cunningham also presented his work on Reparameterized Multi-Resolution Convolutions for Long Sequence Modelling.

Day 3, Thursday
Yuchen Zhu 

Yuchen’s joint paper with Jialin Yu and Ricardo Silva from UCL Statistical Sciences, Structured Learning of Compositional Sequential Models, was presented as part of the main proceedings of NeurIPS 2024. They proposed an explicit model for expressing how the effect of sequential interventions can be isolated into modules, clarifying previously unclear data conditions that allow for the identification of their combined effect at different units and time steps. The paper is here. Additionally, together with collaborators from MPI Tuebingen, Yuchen presented a paper, Unsupervised Causal Abstraction, at the Causal Representation Learning workshop. Due to the lack of interpretability of current large blackbox models, they propose a methodology for causally abstracting a large model to a smaller and more interpretable model. In particular, unlike existing methods, their method does not require supervision signals from the smaller model. The paper can be found here.

David Chanin

David presented a paper along with co-author Daniel Tan, another UCL PhD Student. They find that Contrastive Activation Addition (CAA) steering has mixed results in terms of robustness and reliability. Steering vectors tend to generalise out of distribution when they work in distribution. However, steerability is highly variable across different inputs: depending on the concept, spurious biases can substantially contribute to how effective steering is for each input, presenting a challenge for the widespread use of steering vectors. While CAA is effective on some tasks, other behaviours turn out to be unsteerable. As a result, it is difficult to ensure they will be effective on a given task of interest, limiting their reliability as a general alignment intervention. The paper is available here.

Day 4, Friday
Reuben Adams

Reuben presented his paper (co-authored with his supervisors) on extending a classic theorem in the PAC-Bayes literature to account for arbitrary outcomes, rather than simply correct or incorrect classifications. Their work provides theoretical insight into the generalisation behaviour of neural networks and the different kinds of errors they can make. Their framework can cover not just Type I and Type II errors, but any kind of error that may occur in multi-class classification. You can find the paper here.

Daniel Augusto
Daniel presented a co-authored paper, in collaboration with the Getúlio Vargas Foundation, for the main conference track. Their work proposes a new solution to streaming variational Bayesian inference using GFlowNets as a foundation for their methodology. This was the first work that allows high-quality variational inference for discrete parameters without requiring the storage of the whole dataset. They believe this work will be useful for applications in genetics, through the inference of phylogenetic trees, for preference learning, and in other big-data contexts. Their paper can be read here.

Day 5, Saturday 

Oscar Key
Oscar, along with co-authors from Graphcore, presented a poster at the workshop on Efficient Natural Language and Speech Processing. Their work considers the top-k operation, which finds the largest k items in a list, and investigates how it can be computed as quickly as possible on the parallel hardware commonly used to run AI applications. Top-k is found in many AI algorithms, so it’s useful to make it fast; for example, a large language model might use it to select the most important parts of a long prompt. Their full paper is available here.
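Since top-k only needs the k largest items, it can avoid a full sort entirely. Here is a minimal single-threaded sketch using NumPy's partial selection; this is purely illustrative, as the paper's contribution concerns much faster formulations for parallel accelerator hardware:

```python
import numpy as np

def top_k(values: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k largest entries, in descending order of value."""
    # argpartition moves the k largest entries into the last k slots in
    # O(n) average time, in arbitrary order; we then sort only those k.
    idx = np.argpartition(values, -k)[-k:]
    return idx[np.argsort(values[idx])[::-1]]

scores = np.array([0.1, 0.7, 0.3, 0.9, 0.5])
print(top_k(scores, 2))  # -> [3 1], the indices of 0.9 and 0.7
```

A language model doing prompt pruning would call something like this with per-token importance scores and keep only the returned positions.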

 

Varsha Ramineni 

Varsha presented her research at the Workshop on Algorithmic Fairness through the Lens of Metrics and Evaluation (AFME 2024). This work addresses the challenge of evaluating classifier fairness when complete datasets, including protected attributes, are inaccessible. They propose a novel approach that leverages separate overlapping datasets, such as internal datasets lacking demographic information and external sources like census data, to construct synthetic test data with all necessary variables. Experiments demonstrate that their approach produces synthetic data with high fidelity and offers reliable fairness evaluation where real data is limited. Varsha says that attending her first NeurIPS was an incredible experience; presenting her work and engaging in meaningful discussions throughout the conference was deeply rewarding, providing invaluable feedback and ideas as the work extends. Do reach out to her if you’d like to learn more!

2nd Bayes-Duality Workshop: Daniel Augusto de Souza

By Claire Hudson, on 15 December 2024

From June 12th to 21st, 2024, I had the pleasure of attending and presenting my work as a poster at the 2nd Bayes-Duality Workshop, organized by the Bayes Duality team, a Japanese-French joint research project. The workshop was hosted at the Centre for Advanced Intelligence Project (AIP) of RIKEN in Nihonbashi, Chūō City, Tokyo.

Nihonbashi is one of the oldest districts of Tokyo, a lively business district where finance and general office workers gather, neighbouring the imperial palace where the Japanese monarch and his family live. Though I felt somewhat out of place in this non-academic environment, I settled into the two-week workshop, which contained invited talks, panels between speakers, a showcase of works done by the Bayes Duality team, and a poster session.

As stated in the program, the workshop focused on the development of AI that learns adaptively, robustly, and continuously, like humans. A common theme in the presentations by collaborators of the Bayes Duality is to explore the mathematical connections between the training data examples and the model parameters of these machine learning systems. This connection is incredibly desirable due to the following difference in complexity: current state-of-the-art models have a vast number of uninterpretable parameters, while the data examples can usually still be understood by human experts.

Due to the length of the workshop, the invited talks could cover an extensive range of topics. Such breadth is hard to describe in a post like this and, most incredibly, none of the talks felt out of place. They ranged from expected topics, such as the tutorial on the Bayesian learning rule (one of the papers that drew together the connections between data-parameter duality and convex duality), to more general topics in uncertainty quantification, such as Eugene Ndiaye’s tutorial and presentation on conformal prediction, as well as continual learning and identifiability of parameters in neural network models.

The poster session included works mentioned in the invited talks and others from students like me. I chose to present my progress on “Interpretable deep Gaussian processes for geospatial tasks”: in this project I analyse the issue of interpretability in three commonly used architectures of deep Gaussian processes, try to understand what practitioners really mean by “interpretable”, and suggest a different metric from the one commonly used. I felt this was the right work to present to the audience of this workshop due to their familiarity with Bayesian deep learning and interest in understanding the parameters of these models. As the only student from UCL, I was happy to display our work and connect with researchers from institutions all over the world, with attendees from the US, Asia, and Europe.

CDT Foundational Artificial Intelligence Showcase: London. 22-24 July

By Claire Hudson, on 27 August 2024

This year, CDT students, academics and speakers along with staff and students from the prestigious Erasmus Mundus Joint Master’s Programme in AI gathered at the AI Centre for a journey into research, innovation and collaboration at the annual CDT Showcase.
The event kicked off with a session focusing on “The future of AI: Forming your own opinion on what’s coming, when it’s coming, and what we should or shouldn’t do about it”. Here we explored AI Bias, AI and Warfare, and AI Regulation – topics which sparked some lively debates and fostered a spirit of critical thinking amongst attendees.

After lunch, we heard from Dr Anthony Bourachad with his talk titled “ART: AI’s final frontier”, in which Dr Bourachad presented AI’s foray into the world of art and the Pandora’s box of questions this opens. His talk delved into the philosophical debates about what it means to create, the legal intricacies of ownership and moral rights, and the use of AI as a tool to analyze historical art.
This led to the next session in which we had the opportunity to view participants’ AI and Art entries during a mini museum experience. On display were over 40 entries from current CDT students and students from the Erasmus Mundus Joint Master’s Programme. All artwork had to be original and created by the submitting artist, and there were many impressive submissions, each telling a story and highlighting a wealth of creativity and innovation from the artists.
More on the winners later…
To conclude the first day, attendees were treated to a vibrant and engaging social event at Immersive Gamebox. This immersive activity provided a welcome break from the day’s sessions and created many memorable moments whilst fostering relationships between participants. Truly a wonderful way to close the first day of the Showcase and a chance to solidify connections made during the conference sessions.
Day two started with a morning of informative presentations from CDT students in which we heard more about their research, with topics ranging from “A Human-Centric Assessment of the Usefulness of Attribution Methods in Computer Vision” to “Latent Attention for Linear Time Transformers” to a talk on the “Theory of generative modelling – rethinking generative modelling as optimization in the space of measures”. The range of topics provided a reminder of the diverse and exciting research being conducted by students, and demonstrated why centres such as the FAI CDT are crucial for fostering interdisciplinary research in this ever-changing landscape.
One of the highlights of the showcase was the afternoon’s visit to the offices of Conception X.
Conception X is the UK’s leading PhD deeptech venture programme, helping PhD students launch deeptech startups based on their research. There are two tracks available: “Project X”, for PhD students interested in developing business skills through training designed for STEM researchers, and “Startup X”, aimed at PhD students ready to build startups.
During our visit, we enjoyed a welcome introduction from Dr Riam Kanso, Chief Executive Officer, who spoke about how Conception X is leading the way in enabling scientists to create companies from their research. This was followed by presentations from entrepreneurs who have been successful in launching their companies with the support of Conception X, and concluded with a host of questions from students all seemingly keen to find out more about the Conception X programme and how they too might launch their entrepreneurial journey.
Day three started with a visit to the Intelligent Robotics Lab at UCL East in which the group enjoyed a fast-paced morning with Professor Igor Gaponov.

The lab is a world-leading research centre of excellence, dedicated to autonomous robotics, specializing in robots that can make decisions in the real world and act on them. The lab covers areas from mechatronics and control to robot vision and learning, so our group were delighted to hear more about the fascinating research that is emerging, and would like to thank Professor Gaponov for providing such a wonderful opportunity.

The final afternoon was filled with keynote talks on a range of AI-related topics. First up was Avanade’s Emerging Technology R&D Engineering lead, Fergus Kidd, with his talk titled “The road to General Artificial Intelligence”. Next up was Professor Niloy Mitra and his talk on “What are Good Representations for 3D-aware Generative Models”, and we concluded with a presentation from Sophia Banno, Assistant Professor in Robotics and Artificial Intelligence at UCL, with her talk looking at the future of AI and Robotics in Surgical Interventions!
All of these talks emphasized the importance of sustained innovation and collaboration in this rapidly evolving world and provided an intriguing end to the formal presentations of the CDT Showcase.

The final session was an opportunity to view and discuss a variety of posters that students had produced which represented their research. Poster sessions are always a great opportunity for researchers to share their findings in a visual format and encourage observers to delve deeper into specific areas of interest. It was inspiring to witness this session buzzing with an energy that underscores the collaborative spirit that defines the CDT showcase experience.

To close, our sponsor G-Research presented prizes for the AI and Art competition and the best poster award to the following recipients:
AI & ART
Judging was based on three key criteria: (i) Description: a compelling description and the ability to explain the concept; (ii) Novelty: originality of the idea; and (iii) Aesthetics.
1st:
Romy Williamson
the convergence of perception
2nd:
Reuben Adams
Nook
3rd:
Pedro José Ferreira Moreira
UCL Summer School
4th:
Kai Biegun
In With The New
5th:
Roberta Chissich
Fores Escape
POSTER SESSION
1st:
Adrian Gheorghiu & Pedro Moreira
Joint 2nd:
Lorenz Wolf,
Mirgahney Mohamed & Jake Cunningham
4th:
Sierra Bonilla
5th:
Bernardo Perrone De Menezes Bulcao Ribeiro & Roberta Chissich

We would like to take this opportunity to thank G-Research for their generous sponsorship of the AI & Art competition and Best Poster award.

Looking ahead, the connections made and ideas exchanged during these three days will continue to develop, shaping the future of AI. The Foundational Artificial Intelligence CDT Annual Conference is a platform for researchers and academics to showcase their research and innovation, and this event proved to be a melting pot of ideas, insights, and networking opportunities.

We look forward to hosting the event again next year!

Student-Led Workshop – Distance-based Methods in Machine Learning – Review by Masha Naslidnyk

By sharon.betts, on 3 July 2023

We are delighted to announce the successful conclusion of our recent workshop on Distance-based Methods in Machine Learning. Held at the historic Bentham House on the 27th-28th of June, the event brought together approximately 60 delegates, including leading experts and researchers from statistics and machine learning.

The workshop showcased a diverse range of speakers who shared their knowledge and insights on the theory and methodology behind machine learning approaches utilising kernel-based and Wasserstein distances. Topics covered included parameter estimation, generalised Bayes, hypothesis testing, optimal transport, optimization, and more.

The interactive sessions and engaging discussions created a vibrant learning environment, fostering networking opportunities and collaborations among participants. We extend our gratitude to the organising committee, speakers, and attendees for their valuable contributions to this successful event. Stay tuned for future updates on similar initiatives as we continue to explore the exciting possibilities offered by distance-based methods in machine learning.

Happy attendees at the Distance-based learning workshop