UKRI Centre for Doctoral Training in Foundational AI


Archive for the 'Conferences' Category

PhD Researchers on the move: Journey to Vancouver to attend NeurIPS

By Claire Hudson, on 31 January 2025

Last month, CDT students travelled to Vancouver to present their work at NeurIPS, one of the largest AI conferences. The schedule was packed, with 6 conference papers and 3 workshop papers presented. The CDT is proud to sponsor and support our PhD students as they present their research at the conference, showcasing their hard work and academic excellence on an international stage!
Day 1, Tuesday: No papers were presented, but the expo, tutorials, and careers fair kept everyone occupied.

Day 2, Wednesday:

William Bankes

William and his supervisor presented their work on robust data downsampling. Naive approaches to training on a subset of the data can cause problems when classes in the dataset are imbalanced, with rare classes becoming even rarer. A direct result of this is that the model’s ability to predict on these classes decreases. To address this, they proposed an algorithm called REDUCR, which downsamples data in a manner that preserves the performance of minority classes within the dataset. They show that REDUCR works across a range of text and image problems, achieving state-of-the-art results. The work is available here.
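To make the problem concrete, here is a minimal, hypothetical sketch (it is not the REDUCR algorithm, whose selection rule is described in the linked paper): under naive uniform downsampling only a handful of rare-class examples survive, whereas a class-aware scheme can enforce a per-class floor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: class 1 is rare (roughly 5% of 10,000 examples).
labels = rng.choice([0, 1], size=10_000, p=[0.95, 0.05])

def uniform_downsample(n, keep_frac):
    """Naive approach: keep a uniform random subset of all examples."""
    return rng.choice(n, size=int(keep_frac * n), replace=False)

def class_aware_downsample(labels, keep_frac, min_per_class):
    """Class-aware approach: guarantee a per-class floor of examples,
    then fill the remaining budget uniformly at random."""
    budget = int(keep_frac * len(labels))
    kept = []
    for c in np.unique(labels):
        class_idx = np.flatnonzero(labels == c)
        kept.append(rng.choice(class_idx,
                               size=min(min_per_class, len(class_idx)),
                               replace=False))
    kept = np.concatenate(kept)
    rest = np.setdiff1d(np.arange(len(labels)), kept)
    extra = rng.choice(rest, size=max(budget - len(kept), 0), replace=False)
    return np.concatenate([kept, extra])

naive = uniform_downsample(len(labels), keep_frac=0.1)
aware = class_aware_downsample(labels, keep_frac=0.1, min_per_class=300)
print("rare-class examples kept (naive):      ", int((labels[naive] == 1).sum()))
print("rare-class examples kept (class-aware):", int((labels[aware] == 1).sum()))
```

With a 10% budget, the naive subset keeps only around 50 rare-class examples, while the class-aware variant keeps at least 300; this kind of worst-class degradation is what REDUCR is designed to avoid.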

Jake Cunningham also presented his work on Reparameterized Multi-Resolution Convolutions for Long Sequence Modelling.

Day 3, Thursday
Yuchen Zhu 

Yuchen’s joint paper with Jialin Yu and Ricardo Silva from UCL Statistical Sciences, Structured Learning of Compositional Sequential Models, was presented as part of the main proceedings of NeurIPS 2024. They proposed an explicit model for expressing how the effect of sequential interventions can be isolated into modules, clarifying previously unclear data conditions that allow for the identification of their combined effect at different units and time steps. The paper is here. Additionally, together with collaborators from MPI Tübingen, Yuchen presented a paper, Unsupervised Causal Abstraction, at the Causal Representation Learning workshop. Because current large black-box models lack interpretability, they propose a methodology for causally abstracting a large model into a smaller, more interpretable one. In particular, unlike existing methods, their method does not require supervision signals from the smaller model. The paper can be found here.

David Chanin

David presented a paper with co-author Daniel Tan, another UCL PhD student. They find that Contrastive Activation Addition (CAA) steering has mixed results in terms of robustness and reliability. Steering vectors tend to generalise out of distribution when they work in distribution. However, steerability is highly variable across different inputs: depending on the concept, spurious biases can substantially contribute to how effective steering is for each input, presenting a challenge for the widespread use of steering vectors. While CAA is effective on some tasks, other behaviours turn out to be unsteerable. As a result, it is difficult to ensure steering vectors will be effective on a given task of interest, limiting their reliability as a general alignment intervention. The paper is available here.
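For readers unfamiliar with the technique, a CAA-style steering vector is built as the difference of mean residual-stream activations over contrastive prompt pairs and is then added to the hidden state at inference time. The sketch below only illustrates that construction with placeholder arrays: pos_acts and neg_acts are hypothetical, and the hook machinery needed to extract activations from (and patch them back into) a real model is omitted.

```python
import numpy as np

# Placeholder residual-stream activations at one layer, shape
# (num_prompts, hidden_dim), for contrastive prompt pairs that do /
# do not exhibit the target behaviour. In practice these come from
# forward hooks on a specific model layer.
pos_acts = np.random.randn(64, 4096)   # prompts exhibiting the behaviour
neg_acts = np.random.randn(64, 4096)   # matched prompts without it

# CAA-style steering vector: difference of the mean activations.
steering_vec = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def steer(hidden_state: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the (scaled) steering vector to a hidden state at inference
    time; with a real model this would be applied inside a forward hook."""
    return hidden_state + alpha * steering_vec
```

The mixed results described above concern exactly this intervention: the same steering vector can shift behaviour reliably on some inputs and concepts while having little, or even a spurious, effect on others.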

Day 4, Friday
Reuben Adams

Reuben presented his paper (co-authored with his supervisors) on extending a classic theorem in the PAC-Bayes literature to account for arbitrary outcomes, rather than simply correct or incorrect classifications. Their work provides theoretical insight into the generalisation behaviour of neural networks and the different kinds of errors they can make. Their framework covers not just Type I and Type II errors, but any kind of error that may occur in multi-class classification. You can find the paper here.
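For context, one standard statement of the kind of classic bound being generalised (the post does not say precisely which version is extended) is the PAC-Bayes-kl inequality for the binary correct/incorrect loss: for a prior P, any posterior Q over classifiers, sample size n, and confidence level 1 − δ,

```latex
\mathrm{kl}\!\left(\hat{L}(Q)\,\middle\|\,L(Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
```

where \hat{L}(Q) and L(Q) are the empirical and true error rates of the Gibbs classifier and kl is the KL divergence between Bernoulli distributions. Roughly speaking, the extension described above replaces the single error rate by a vector of probabilities over several outcome types (the different error kinds in multi-class classification) and the Bernoulli kl by its categorical counterpart.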


Daniel Augusto

Daniel presented a co-authored paper, written in collaboration with the Getúlio Vargas Foundation, in the main conference track. Their work proposes a new solution to streaming variational Bayesian inference, using GFlowNets as the foundation of their methodology. This is the first work that allows high-quality variational inference for discrete parameters without requiring storage of the whole dataset. They believe this work will be useful for applications in genetics, through the inference of phylogenetic trees, for preference learning, and in other big-data contexts. Their paper can be read here.


Day 5, Saturday 

Oscar Key

Oscar, along with co-authors from Graphcore, presented a poster at the workshop on Efficient Natural Language and Speech Processing. Their work considers the top-k operation, which finds the largest k items in a list, and investigates how it can be computed as quickly as possible on the parallel hardware commonly used to run AI applications. Top-k appears in many AI algorithms, so it is useful to make it fast; for example, a large language model might use it to select the most important parts of a long prompt. Their full paper is available here.
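As a flavour of why top-k parallelises well, here is a hedged sketch of one common two-stage strategy (not necessarily the algorithm in Oscar’s paper): compute a local top-k on independent chunks of the input, which can run concurrently, then reduce over the surviving candidates.

```python
import numpy as np

def chunked_topk(x: np.ndarray, k: int, num_chunks: int = 8) -> np.ndarray:
    """Two-stage top-k: local top-k per chunk (embarrassingly parallel),
    then a global top-k over the concatenated per-chunk winners.
    Exact, because any global top-k element must be in the top-k of its chunk."""
    chunks = np.array_split(x, num_chunks)
    local = [c[np.argpartition(c, -min(k, len(c)))[-min(k, len(c)):]]
             for c in chunks]                       # stage 1: per-chunk winners
    cand = np.concatenate(local)
    top = cand[np.argpartition(cand, -k)[-k:]]      # stage 2: global reduction
    return np.sort(top)[::-1]

values = np.random.default_rng(0).standard_normal(1_000_000)
print(chunked_topk(values, k=5))
```

On accelerators the appeal is that stage 1 touches disjoint memory regions, while stage 2 only has to operate on num_chunks * k candidates.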


Varsha Ramineni 

Varsha presented her research at the Workshop on Algorithmic Fairness through the Lens of Metrics and Evaluation (AFME 2024). This work addresses the challenge of evaluating classifier fairness when complete datasets, including protected attributes, are inaccessible. They propose a novel approach that leverages separate overlapping datasets, such as internal datasets lacking demographic information and external sources like census data, to construct synthetic test data containing all the necessary variables. Experiments demonstrate that the approach produces synthetic data with high fidelity and offers reliable fairness evaluation where real data is limited. Varsha says that attending her first NeurIPS was an incredible experience: presenting her work and engaging in meaningful discussions throughout the conference was deeply rewarding, providing invaluable feedback and ideas as the work develops. Do reach out to her if you’d like to learn more!


2nd Bayes-Duality Workshop: Daniel Augusto de Souza

By Claire Hudson, on 15 December 2024

From the 12th to the 21st of June 2024, I had the pleasure of attending and presenting a poster at the 2nd Bayes-Duality Workshop 2024, organized by the Bayes-Duality team, a Japan-French joint research project. The workshop was hosted at the Centre for Advanced Intelligence Project (AIP) of RIKEN in Nihonbashi, Chūō City, Tokyo.

Nihonbashi is one of the oldest districts of Tokyo, a lively business district where finance and general office workers gather, neighbouring the imperial palace, where the Japanese monarch and his family live. Though I felt somewhat out of place in this non-academic environment, the two-week workshop itself contained invited talks, panels between speakers, a showcase of work done by the Bayes-Duality team, and a poster session.

As stated in the program, the workshop focused on the development of AI that learns adaptively, robustly, and continuously, like humans. A common theme in the presentations by collaborators of the Bayes-Duality project was exploring the mathematical connections between the training data examples and the model parameters of these machine learning systems. This connection is highly desirable because of a difference in complexity: current state-of-the-art models have a vast number of uninterpretable parameters, while the data examples can usually still be understood by human experts.

Due to the length of the workshop, the invited talks could cover an extensive range of topics. Such breadth is hard to describe in a post like this and, remarkably, none of the talks felt out of place. They ranged from the expected topics, such as the tutorial on the Bayesian learning rule, one of the papers that drew together the connections between data-parameter duality and convex duality, to more general topics in uncertainty quantification, such as Eugene Ndiaye’s tutorial and presentation on conformal prediction, continual learning, and the identifiability of parameters in neural network models.

The poster session included works mentioned in the invited talks as well as others from students like me. I chose to present my progress on “Interpretable deep Gaussian processes for geospatial tasks”; in this project I analyse the interpretability of three commonly used deep Gaussian process architectures, try to understand what practitioners really mean by “interpretable”, and suggest a different metric from the one commonly used. I felt this was the right work to present to the audience of this workshop, given their familiarity with Bayesian deep learning and interest in understanding the parameters of these models. As the only student from UCL, I was happy to display our work and connect with researchers from institutions all over the world, with attendees from the US, Asia, and Europe.

Celebrating the Winning Entries: Highlights from the AI & Art Competition

By Claire Hudson, on 13 September 2024

The AI & Art competition we ran as part of the CDT Showcase event brought together a fantastic array of talent and creativity, with participants impressing us with their outstanding submissions. We were thrilled to see some innovative approaches and unique perspectives reflected in each entry and are excited to highlight the winning entries that stood out among the rest.

1st Place: Romy Williamson – The convergence of perception
This piece shows a series of stone busts arranged in a figure. The busts blend smoothly between a perfect sphere, Max Planck, and Igea – the Greek Goddess of Health.
In order to blend smoothly between the busts, I converted the meshes into Spherical Neural Surfaces (read my paper or listen to my talk to find out more) and I optimised a smooth neural map between the two domain spheres, minimising the conformal distortion energy using a variant of the First Fundamental Form.
Romy’s comment: the convergence of perception (as named by ChatGPT).
I used our novel shape representation (Spherical Neural Surfaces) to represent the heads of Max Planck and the goddess Igea (converted from meshes), and performed a geometric optimization to find a nice correspondence (diffeomorphism), which then allowed me to interpolate to get the in-between heads.
This is my paper about Spherical Neural Surfaces: https://arxiv.org/abs/2407.07755. The geometric optimization part is similar to Neural Surface Maps (https://geometry.cs.ucl.ac.uk/projects/2021/neuralmaps/).

2nd Place: Reuben Adams – Nook
The colours in this photo have been subtly changed to encode an audio file of a crackling fireplace, which in turn has been imperceptibly altered to encode a text file of Hardy’s poem The Darkling Thrush. The work telescopes into one image a dreary and wet walk through the Peak District, warming by the fire, and thoughts of an old friend.


3rd Place: Pedro José Ferreira Moreira – UCL Summer School
Welcome to ‘UCL Summer School,’ an exciting comic book adventure that follows a young student on their thrilling journey at University College London Summer School!

Imagine being able to create a whole comic book without knowing how to draw – thanks to AI, that’s exactly what happened here! From packing bags and boarding a plane to sightseeing around London and attending cool AI seminars, this comic captures every moment with vibrant, dynamic art.
What AI Can Do: AI makes it possible to turn your wildest ideas into reality, even if you can’t draw a stick figure. It helps craft detailed and expressive comic panels that perfectly match the story in your head. Plus, AI is like a super-fast sidekick, helping to create everything in no time!
The Not-So-Great Parts: Sometimes, AI might miss the mark on capturing those deep, personal emotions or might not get the scene just right without some help. It’s great, but it’s not a mind reader – yet!
The Future Is Bright: Imagine a world where AI tools are even more creative, intuitive, and just plain fun to use. We’re talking about easier ways to blend human creativity with AI’s power, making art that’s truly one-of-a-kind.

In ‘UCL Summer School’ you’ll see how AI can turn anyone into a comic book creator, expressing thoughts and stories in a vibrant way that’s never been easier. This comic is all about having fun, exploring new tech and realizing that with a little help from AI, the sky’s the limit for your creativity!
Pedro’s comments: The motivation behind this comic book art is simple: to show that creativity shouldn’t be limited by technical skills. With the help of AI, anyone can turn their ideas into reality, no matter their experience. Even if you’re “not good at drawing,” you can bring your imagination to life. Sure, the technology isn’t perfect (extra fingers popping up in the art can be a funny surprise), but it’s more than enough to convey emotion and tell captivating stories.

4th Place: Kai Biegun – In With The New
This piece aims to convey a juxtaposition of retro analogue photography and state-of-the-art AI image generation. Four film photos were taken on various film stocks with vintage analogue cameras, and descriptions of those images were used to generate four corresponding photos with the Adobe Firefly image generation suite. I have always felt that the grainy, textured look of film photographs gives them a certain quality that makes looking at them feel like you’re looking at a snapshot from a memory. This is in stark contrast to the saturated, ultra-smooth, somewhat cartoonish look of AI-generated photos. I believe this speaks to the fact that, although we are moving towards a world where digital and AI-generated media are the norm, there is still a place for the analogue to provide a window into real moments, memories, and experiences.
Kai’s comments: The piece is a study of the differences between images captured with analogue cameras and images generated by AI, whereby the analogue photographs were recreated by generative AI by prompting it with a text description of each image. It aims to highlight not just the superficial differences in colour, texture, and subject, but also the difference in feeling one gets from knowing how each image was captured, and question whether that in itself contributes to the artistic merit of the images.

5th Place: Roberta Chissich – Forest Escape
Materials Used: Blender 4.1, ANT Landscape Addon, Node Wrangler Addon, Cycles Render Engine, Sapling Tree Gen Addon, Poly Haven Textures.

The Interactive Forest Environment is a meticulously crafted 3D scene designed to immerse viewers in a realistic natural landscape. This piece leverages advanced procedural techniques and tools within Blender, reflecting the growing intersection of AI and art in the digital age.

Blender’s geometry nodes and procedural generation tools were extensively used to create the ground and vegetation layouts. These nodes enable the creation of complex, natural-looking terrains and distributions with minimal manual intervention, resulting in highly detailed and varied environments without the need to model each element by hand. The use of procedural shaders and texture blending techniques in Blender mimics AI-assisted methods to combine ground textures from Poly Haven seamlessly, ensuring enhanced detail and natural transitions.

To optimize rendering, the Cycles Render Engine utilizes NVIDIA’s AI-accelerated denoising technology. OptiX reduces noise in rendered images, significantly speeding up the rendering process while maintaining high-quality visuals. This integration of AI technology helps in producing clean, detailed renders with fewer samples, making the workflow more efficient.

This artwork is inspired by the calming and restorative qualities of nature. It aims to transport viewers to a serene forest environment, providing a momentary escape from the hustle and bustle of everyday life, capturing the essence of nature’s tranquility.
Roberta’s comments: This animated river scene, created in Blender, showcases the power of combining human creativity with advanced tools. By using OptiX rendering, the video achieves a higher level of visual fidelity, capturing the intricate details of light and water. The use of procedural scattering has simplified the placement of grass, leaves, and trees, making the natural landscape come to life effortlessly.
My motivation for this piece comes from the belief that art and technology are not in opposition, but are powerful allies; AI-enhanced tools can aid artists in their creative process. This artwork embodies the idea that we can use these innovations to elevate our creative expression. It’s not about replacing human artistry, it’s about how these tools can help us amplify our imagination, making the impossible possible, and turning complex visions into reality. Together, we can craft a future where human spirit and technological prowess unite to create beauty.

Thank you to everyone who participated. Each entry brought something special to the event and helped create a vibrant and memorable experience for all involved!

CDT Foundational Artificial Intelligence Showcase: London, 22-24 July

By Claire Hudson, on 27 August 2024

This year, CDT students, academics and speakers along with staff and students from the prestigious Erasmus Mundus Joint Master’s Programme in AI gathered at the AI Centre for a journey into research, innovation and collaboration at the annual CDT Showcase.
The event kicked off with a session focusing on “The future of AI: Forming your own opinion on what’s coming, when it’s coming, and what we should or shouldn’t do about it”. Here we explored AI Bias, AI and Warfare, and AI Regulation – topics which sparked some lively debates and fostered a spirit of critical thinking amongst attendees.

After lunch, we heard from Dr Anthony Bourachad with his talk titled “ART: AI’s final frontier”, in which Dr Bourachad presented AI’s foray into the world of art and the Pandora’s box of questions this opens. His talk delved into the philosophical debates about what it means to create, the legal intricacies of ownership and moral rights, and the use of AI as a tool to analyze historical art.
This led to the next session in which we had the opportunity to view participants’ AI and Art entries during a mini museum experience. On display were over 40 entries from current CDT students and students from the Erasmus Mundus Joint Master’s Programme. All artwork had to be original and created by the submitting artist, and there were many impressive submissions, each telling a story and highlighting a wealth of creativity and innovation from the artists.
More on the winners later…
To conclude the first day, attendees were treated to a vibrant and engaging social event at Immersive Gamebox. This immersive activity provided a welcome break from the day’s sessions and created many memorable moments whilst fostering relationships between participants. Truly a wonderful way to close the first day of the Showcase and a chance to solidify connections made during the conference sessions.
Day two started with a morning of informative presentations from CDT students in which we heard more about their research, with topics ranging from “A Human-Centric Assessment of the Usefulness of Attribution Methods in Computer Vision” to “Latent Attention for Linear Time Transformers” to a talk on the “Theory of generative modelling – rethinking generative modelling as optimization in the space of measures”. The range of topics was a reminder of the diverse and exciting research being conducted by students, and demonstrates why centres such as the FAI CDT are crucial for fostering interdisciplinary research in this ever-changing landscape.
One of the highlights of the showcase was the afternoon’s visit to the offices of Conception X.
Conception X is the UK’s leading PhD deeptech venture programme and helps PhD students launch deeptech startups based on their research. There are two tracks available: “Project X”, which is for PhD students interested in developing business skills through training designed for STEM researchers, and “Startup X”, which is aimed at PhD students ready to build startups.
During our visit, we enjoyed a welcome introduction from Dr Riam Kanso, Chief Executive Officer, who spoke about how Conception X is leading the way in enabling scientists to create companies from their research. This was followed by presentations from entrepreneurs who have successfully launched their companies with the support of Conception X, and concluded with a host of questions from students, all seemingly keen to find out more about the Conception X programme and how they too might launch their entrepreneurial journey.
Day three started with a visit to the Intelligent Robotics Lab at UCL East in which the group enjoyed a fast-paced morning with Professor Igor Gaponov.

The lab is a world-leading research centre of excellence dedicated to autonomous robotics, specializing in robots that can make decisions in the real world and act on them. The lab covers areas from mechatronics and control to robot vision and learning, so our group were delighted to hear more about the fascinating research that is emerging, and would like to thank Professor Gaponov for providing such a wonderful opportunity.

The final afternoon was filled with keynote talks on a range of AI-related topics. First up was Avanade’s Emerging Technology R&D Engineering lead, Fergus Kidd, with his talk titled “The road to General Artificial Intelligence”. Next was Professor Niloy Mitra with his talk on “What are Good Representations for 3D-aware Generative Models”, and we concluded with a presentation from Sophia Banno, Assistant Professor in Robotics and Artificial Intelligence at UCL, looking at the future of AI and robotics in surgical interventions!
All of these talks emphasized the importance of sustained innovation and collaboration in this rapidly evolving world and provided an intriguing end to the formal presentations of the CDT Showcase.

The final session was an opportunity to view and discuss a variety of posters that students had produced which represented their research. Poster sessions are always a great opportunity for researchers to share their findings in a visual format and encourage observers to delve deeper into specific areas of interest. It was inspiring to witness this session buzzing with an energy that underscores the collaborative spirit that defines the CDT showcase experience.

To close, our sponsor G-Research presented prizes for the AI and Art competition and the best poster award to the following recipients:
AI & ART
Judging was based on three key criteria: (i) Description: a convincing and compelling description and an ability to explain the concept; (ii) Novelty: originality of the idea; and (iii) Aesthetics.
1st:
Romy Williamson
the convergence of perception
2nd:
Reuben Adams
Nook
3rd:
Pedro José Ferreira Moreira
UCL Summer School
4th:
Kai Biegun
In With The New
5th:
Roberta Chissich
Forest Escape
POSTER SESSION
1st:
Adrian Gheorghiu & Pedro Moreira
Joint 2nd:
Lorenz Wolf
Mirgahney Mohamed & Jake Cunningham
4th:
Sierra Bonilla
5th:
Bernardo Perrone De Menezes Bulcao Ribeiro & Roberta Chissich

We would like to take this opportunity to thank G-Research for their generous sponsorship of the AI & Art competition and the Best Poster award.

Looking ahead, the connections made and ideas exchanged during these three days will continue to develop, shaping the future of AI. The Foundational Artificial Intelligence CDT Annual Conference is a platform for researchers and academics to showcase their research and innovation, and this event proved to be a melting pot of ideas, insights, and networking opportunities.

We look forward to hosting the event again next year!

Student presentation – Alex Hawkins-Hooker at ISMB

By sharon.betts, on 4 October 2023

In July 2023, our Cohort 2 student Alex Hawkins-Hooker presented his work in the Machine Learning in Computational and Systems Biology track at ISMB, one of the leading computational biology conferences.
The full paper describing this work, ‘Getting personal with epigenetics: towards individual-specific epigenomic imputation with machine learning’, has since been published in Nature Communications: https://www.nature.com/articles/s41467-023-40211-2.
The work was started before Alex came to UCL, but completed during his PhD, so it was done jointly with collaborators at the Max Planck Institute for Intelligent Systems in Tübingen and the University of Dundee.
If you are interested in reading more publications by our outstanding students, do check out our publications page on our website.

“Safe Trajectory Sampling in Model-based Reinforcement Learning for Robotic Systems” By Sicelukwanda Zwane

By sharon.betts, on 29 September 2023

In the exciting realm of Model-based Reinforcement Learning (MBRL), researchers are constantly pushing the boundaries of what robots can learn to achieve when given access to an internal model of the environment. One key challenge in this field is ensuring that robots can perform tasks safely and reliably, especially in situations where they lack prior data or knowledge about the environment. That’s where the work of Sicelukwanda Zwane comes into play.

Background

In MBRL, robots use small sets of data to learn a dynamics model. This model is like a crystal ball that predicts how the system will respond to a given sequence of different actions. With MBRL, we can train policies from simulated trajectories sampled from the dynamics model instead of first generating them by executing each action on the actual system, a process that can take extremely long periods of time on a physical robot and possibly cause wear and tear.

One of the tools often used in MBRL is the Gaussian process (GP) dynamics model. GPs are fully-Bayesian models that not only model the system but also account for the uncertainty in state observations. Additionally, they are flexible and are able to learn without making strong assumptions about the underlying system dynamics [1].
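As a rough illustration of this recipe (it is not Zwane’s implementation), the sketch below fits a GP to one-step state changes of a made-up 1-D system and then simulates policy rollouts from the learned model rather than the real robot. Note that it samples each step from the marginal predictive distribution, which is exactly the “distributional” style of trajectory sampling discussed later.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D system (hypothetical): learn the increment x_{t+1} - x_t from (x_t, u_t).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))               # columns: state, action
y = 0.1 * np.sin(3 * X[:, 0]) + 0.05 * X[:, 1] + 0.01 * rng.standard_normal(50)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True).fit(X, y)

def rollout(x0, policy, horizon=20):
    """Simulate a trajectory from the learned model instead of the robot:
    at each step, sample a state increment from the GP's predictive
    distribution at the current (state, action) pair."""
    x, traj = x0, [x0]
    for _ in range(horizon):
        u = policy(x)
        delta = gp.sample_y(np.array([[x, u]]), n_samples=1, random_state=None)[0, 0]
        x = x + delta
        traj.append(x)
    return np.array(traj)

print(rollout(0.0, policy=lambda x: -0.5 * x))
```

A policy can then be improved against many such simulated rollouts before any action is executed on hardware.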

The Challenge of Learning Safely

When we train robots to perform tasks, it’s not enough to just predict what will happen; we need to do it safely. As with most model classes in MBRL, GPs don’t naturally incorporate safety constraints, which means they may produce unsafe or unfeasible trajectories. This is particularly true during the early stages of learning: when the model hasn’t seen much data, it can produce unsafe and seemingly random trajectories.

For a 7 degree of freedom (DOF) manipulator robot, bad trajectories may contain self-collisions.


Distributional Trajectory Sampling

In standard GP dynamics models, the posterior is represented in distributional form, through its parameters: the mean vector and covariance matrix. In this form, it is difficult to reason about the safety of entire trajectories, because trajectories are generated through iterative random sampling. Furthermore, this kind of trajectory sampling is limited to cases where the intermediate state marginal distributions are Gaussian.
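In symbols, the iterative scheme draws each next state from the GP’s (assumed Gaussian) marginal predictive distribution at the current state-action pair,

```latex
x_{t+1} \;\sim\; \mathcal{N}\!\big(\mu(x_t, u_t),\, \Sigma(x_t, u_t)\big),
\qquad t = 0, 1, \dots, T-1,
```

so a trajectory only exists as a chain of step-by-step random draws, which makes it awkward to reason about properties of the whole path, such as safety constraints.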

Pathwise Trajectory Sampling

Zwane uses an innovative alternative called “pathwise sampling” [3]. This approach draws samples from GP posteriors using an efficient method called Matheron’s rule. The result is a set of smooth, deterministic trajectories that aren’t confined to Gaussian distributions and are temporally correlated.
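Concretely, pathwise conditioning via Matheron’s rule [3] writes a posterior sample as a prior sample plus a data-dependent correction: with a prior draw f ~ GP(0, k) and a noise draw ε ~ N(0, σ²I),

```latex
(f \mid \mathbf{y})(x_\ast) \;=\; f(x_\ast) \;+\; k(x_\ast, X)\,
  \big(K_{XX} + \sigma^{2} I\big)^{-1}
  \big(\mathbf{y} - f(X) - \boldsymbol{\varepsilon}\big),
```

so each sample is a single deterministic function of the input that can be evaluated, differentiated, and checked along an entire rollout.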

Adding Safety

The beauty of pathwise sampling [3] is that it has a particle representation of the GP posterior, where individual trajectories are smooth, differentiable, and deterministic functions. This allows for the isolation of constraint-violating trajectories from safe ones. For safety, rejection sampling is performed on trajectories that violate safety constraints, leaving behind only the safe ones to train the policy. Additionally, soft constraint penalty terms are added to the reward function.
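A minimal sketch of that filtering step, with a made-up “ceiling” constraint standing in for the real collision checks (the trajectory generator and the policy update are assumed to exist elsewhere):

```python
import numpy as np

def is_safe(traj: np.ndarray, ceiling: float = 0.8) -> bool:
    """Hypothetical safety check: no state along the trajectory exceeds the
    ceiling height (a stand-in for self-collision / workspace constraints)."""
    return bool(np.all(traj < ceiling))

def penalised_return(traj: np.ndarray, rewards: np.ndarray,
                     ceiling: float = 0.8, penalty: float = 10.0) -> float:
    """Return with a soft constraint penalty proportional to the violation."""
    violation = np.clip(traj - ceiling, 0.0, None).sum()
    return float(rewards.sum() - penalty * violation)

# Placeholder for pathwise GP rollouts; in practice these come from the model.
rng = np.random.default_rng(0)
trajectories = [rng.normal(0.4, 0.2, size=20) for _ in range(100)]

safe = [t for t in trajectories if is_safe(t)]   # rejection step
print(f"kept {len(safe)} / {len(trajectories)} trajectories for policy training")
```

Only the surviving trajectories (together with the soft penalty term) feed into the policy update, which is what keeps the learned behaviour away from the constraint boundary.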

Sim-Real Robot Experiments

To put this approach to the test, Zwane conducted experiments involving a 7-DoF robot arm in a simulated constrained reaching task, where the robot has to avoid colliding with a low ceiling. The method successfully learned a reaching policy that adhered to safety constraints, even when starting from random initial states.

In this constrained manipulation task, the robot is able to reach the goal (shown by the red sphere – bottom row) without colliding with the ceiling (blue – bottom row) using less than 100 seconds of data in simulation.

Summary

Sicelukwanda Zwane’s research makes incremental advances on the safety of simulated trajectories by incorporating safety constraints while keeping the benefits of using fully-Bayesian dynamics models such as GPs. This method promises to take MBRL out of simulated environments and make it more applicable to real-world settings. If you’re interested in this work, we invite you to dive into the full paper, published at the recent IEEE CASE 2023 conference.

References


  1. M. P. Deisenroth and C. E. Rasmussen. PILCO: A Model-based and Data-efficient Approach to Policy Search. ICML, 2011.
  2. S. Kamthe and M. P. Deisenroth. Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control. AISTATS, 2018.
  3. J. T. Wilson, V. Borovitskiy, A. Terenin, P. Mostowsky, and M. P. Deisenroth. Pathwise Conditioning of Gaussian Processes. JMLR, 2021.


CDT Collaboration – Inter-CDT Conference at The Bristol Hotel with ART-AI and Interactive AI CDTs, 7-8 Nov 2022

By sharon.betts, on 29 November 2022

On 7th and 8th November 2022, three of the UKRI CDTs in Artificial Intelligence hosted an Inter-CDT conference for our students and industry partners at The Bristol Hotel. The UKRI CDT in Foundational AI worked alongside our sister CDTs at the University of Bath (ART-AI) and the University of Bristol (Interactive AI) to produce a two-day event that covered AI from deep tech entrepreneurship to AI ethics and defence.

Turnout from all three CDTs was excellent, and it was a wonderful opportunity for students across the three institutions to meet and collaborate with one another, sharing their knowledge and research on AI, both theoretical and applied.

UCL were delighted to host two panel sessions, the first on deep tech entrepreneurship with Dr. Riam Kanso from Conception X, Dr. Stacy-Ann Sinclair from CodeREG and Dr. Thomas Stone from Kintsugi (ad)Ventures. Hosted by our CDT Director, Prof David Barber, this interactive panel saw our specialists discuss the pathways into startups and entrepreneurship, and the perils, pitfalls and positives that follow! It was wonderful to hear from industry experts about their personal journeys to successful business ventures, and great to have such an engaged and enquiring audience, keen to ask numerous questions and gain further insight into future possibilities.

Our second panel closed the event and was a student-led initiative discussing large scale datasets and massive computational modelling in AI.

For a more detailed review of the event we highly recommend you read the review by ART-AI on their website.

We were delighted to celebrate our student Dennis Hadjivelichkov’s second place in the poster session that took place at the MShed in Bristol as well as enjoy the fine food and fabulous company of our CDT peers.

With thanks to ART-AI and Interactive AI CDTs for their co-hosting and co-organising skills. It was a delight to be able to share time and work with our sister CDTs and we hope to collaborate again in the not too distant future.

Conferences and Workshops – GOFCP, MLF & EDS 2022 – Recap of events by Antonin Schrab

By sharon.betts, on 16 November 2022

In September 2022 I had the amazing opportunity to participate in workshops in Rennes and in Sophia Antipolis, and in a doctoral symposium in Alicante!

In poster sessions and talks, I presented my work on aggregated kernel tests, which covers three of my papers. The first is MMD Aggregated Two-Sample Test, which considers the two-sample problem: one has access to samples from two distributions and is interested in detecting whether they come from the same distribution or from different ones. The second is KSD Aggregated Goodness-of-fit Test, in which we consider the goodness-of-fit problem: one is given some samples and asked whether these come from a given model (with access to its density or score function). In the third, Efficient Aggregated Kernel Tests using Incomplete U-statistics, we propose computationally efficient tests for the two-sample, goodness-of-fit, and independence problems; this last problem consists of detecting dependence between the two components of paired samples. We tackle these three testing problems using kernel-based statistics, and in this setting the performance of the tests is known to depend heavily on the choice of kernel or kernel parameters (e.g. the bandwidth parameter). We propose tests which aggregate over a collection of kernels while retaining test power; we theoretically prove the optimality of our tests under some regularity assumptions, and empirically show that our aggregated tests outperform other state-of-the-art kernel-based tests.
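For readers who have not met these statistics before, the sketch below computes a (biased) estimate of the squared MMD with a Gaussian kernel at several bandwidths; the aggregation step itself, which calibrates a threshold per bandwidth (e.g. by permutation) and applies a multiple-testing correction so that the overall level is controlled, is left to the papers.

```python
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, bandwidth: float) -> np.ndarray:
    """Gaussian kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 bw^2))."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_biased(x: np.ndarray, y: np.ndarray, bandwidth: float) -> float:
    """Biased (V-statistic) estimate of MMD^2 between samples x and y."""
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return float(kxx + kyy - 2 * kxy)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 1))
y = rng.normal(0.5, 1.0, size=(200, 1))   # mean-shifted alternative

# An aggregated test evaluates the statistic over a whole collection of
# bandwidths rather than committing to a single one in advance.
for bw in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"bandwidth {bw:>4}: MMD^2 estimate = {mmd2_biased(x, y, bw):.4f}")
```

The point of aggregation is visible even here: which bandwidth separates the two samples best depends on the (unknown) difference between the distributions, so the tests combine a collection of kernels instead of guessing one.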

I started the month of September by participating in GOFCP 2022, the 5th Workshop on Goodness-of-Fit, Change-Point and related problems, from 2nd to 4th September in ENSAI in Rennes (France). It was extremely interesting to hear about the latest research in this very specific research field which covers exactly the topics I had been working on since the start of my PhD.

I then went to EURECOM in Sophia Antipolis (France) for MLF 2022, the ELISE Theory Workshop on Machine Learning Fundamentals, from 5th to 7th September. Talks and poster sessions covered the theory of kernel methods, hypothesis testing, partial differential equations, optimisation, Gaussian processes, explainability and AI safety.

Finally, I participated in EDS 2022, the ELLIS Doctoral Symposium 2022, hosted by ELLIS Alicante at the University of Alicante in Spain from 19th to 23rd September. It was an amazing experience to meet so many other PhD students working on diverse topics in machine learning. I especially enjoyed the numerous poster sessions, which allowed me to engage with other students and discuss their current research!

I am extremely grateful to Valentin Patilea, Motonobu Kanagawa and Aditya Gulati for the respective invitations, and to my CDT (UCL CDT in Foundational AI, with funding from UKRI) which made it possible for me to participate in these workshops and the symposium!