
UKRI Centre for Doctoral Training in Foundational AI

Archive for July, 2025

Nengo Neuromorphic AI Summer School 2025: Jianwei Liu

By Claire Hudson, on 18 July 2025

I recently had the pleasure of attending the 2025 Nengo Neuromorphic AI Summer School, held at the University of Waterloo in Ontario, Canada. The program was expertly organized by Prof. Chris Eliasmith, Dr. Terry Stewart, and Dr. Michael Furlong.

Neuromorphic AI Fundamentals
The first 3 days of the summer school were dedicated to lectures on the fundamental concepts of neuromorphic AI, delivered by world-leading experts from the Centre for Theoretical Neuroscience. This foundational knowledge was crucial for tackling the hands-on research projects that followed.

A central topic was the Spiking Neural Network (SNN), which differs significantly from traditional Artificial Neural Networks (ANNs). While ANNs process continuous numerical signals synchronized by a global clock, SNNs communicate using discrete, time-encoded events called “spikes”—a closer approximation to biological neural activity.
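As a rough illustration of that event-driven style (a toy sketch of my own, not code from the summer school), a leaky integrate-and-fire neuron integrates its input current, leaks toward rest, and emits a discrete spike whenever the membrane voltage crosses a threshold:

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron; parameters are illustrative.
def lif_spikes(current, dt=1e-3, tau=0.02, v_th=1.0):
    v, spikes = 0.0, []
    for i in current:
        v += dt * (i - v) / tau    # leaky integration of the input current
        if v >= v_th:              # threshold crossing emits a spike event
            spikes.append(1)
            v = 0.0                # membrane voltage resets after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant input of 1.5 drives regular spiking over one simulated second.
print(lif_spikes(np.full(1000, 1.5)).sum(), "spikes")
```

Downstream neurons receive only these sparse events, which is where the energy savings on neuromorphic hardware come from.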

SNNs offer significant advantages in terms of energy efficiency, particularly when deployed on dedicated neuromorphic hardware such as Intel Loihi, SpiNNaker, or Braindrop. As a researcher at the intersection of AI and robotics, I find this efficiency especially promising for applications on resource-constrained systems like mobile robots. This could pave the way for low-power, on-device, open-ended learning systems.

The Nengo framework is a powerful, flexible tool for designing and deploying large-scale spiking neural models, and it serves as a bridge between high-level computational goals and low-level neural dynamics. Built on the Neural Engineering Framework (NEF) developed by Prof. Eliasmith and colleagues, Nengo lets users define a desired function and then solves for the synaptic weights that a population of spiking neurons, typically leaky integrate-and-fire (LIF) neurons with random tuning curves, needs in order to approximate that function (a minimal sketch appears below). Nengo also supports multiple backends, enabling users to run models on standard CPUs and GPUs or deploy them directly to neuromorphic chips for real-world, low-power applications.

Lectures throughout the week also introduced more advanced topics such as the Legendre Memory Unit (LMU), the Semantic Pointer Architecture (SPA), adaptive spiking neural controllers, and neuromorphic SLAM, among others.
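To make that workflow concrete, here is a minimal Nengo sketch (my own illustrative example, not course material; the function, population size, and probe settings are arbitrary). It asks a population of LIF neurons to approximate f(x) = x², and Nengo solves for the decoding weights behind the scenes:

```python
import numpy as np
import nengo

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # time-varying input
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)   # LIF neurons by default
    out = nengo.Node(size_in=1)                         # readout of the decoded value
    nengo.Connection(stim, ens)
    # Nengo solves for decoding weights so the spiking population's
    # output approximates x**2 of the value it represents.
    nengo.Connection(ens, out, function=lambda x: x ** 2)
    probe = nengo.Probe(out, synapse=0.01)              # low-pass filtered output

with nengo.Simulator(model) as sim:
    sim.run(1.0)                                        # simulate one second
```

The same model can then be pointed at a different backend (for example, the NengoLoihi backend) to run on neuromorphic hardware with little or no change to the model code.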

Summer school research project
The remainder of the summer school was dedicated to short research projects. For my project, I proposed and developed a spiking neural controller for quadruped robot locomotion, implemented using Nengo and MuJoCo. The project integrated components such as SNNs, the LMU, spiking Central Pattern Generators (sCPGs), and imitation learning techniques to build a biologically inspired locomotion controller.
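The full controller is beyond the scope of a blog post, but the core sCPG ingredient follows a standard NEF recipe: implement an oscillator by connecting a neural population back to itself. The sketch below is an illustrative reconstruction of that ingredient, not my project code; the time constant, frequency, and population size are assumptions.

```python
import numpy as np
import nengo

tau = 0.1          # recurrent synaptic time constant (assumed)
omega = 2 * np.pi  # oscillation frequency in rad/s (assumed)

model = nengo.Network()
with model:
    # Brief kick to push the oscillator off the fixed point at the origin.
    kick = nengo.Node(lambda t: [1.0, 0.0] if t < 0.1 else [0.0, 0.0])
    cpg = nengo.Ensemble(n_neurons=300, dimensions=2)
    nengo.Connection(kick, cpg)
    # NEF recipe: desired dynamics dx/dt = A x become the recurrent map
    # f(x) = tau * A x + x when routed through a synapse with constant tau.
    nengo.Connection(
        cpg, cpg,
        function=lambda x: [x[0] - tau * omega * x[1],
                            x[1] + tau * omega * x[0]],
        synapse=tau,
    )
    probe = nengo.Probe(cpg, synapse=0.01)  # smoothed two-dimensional rhythm

with nengo.Simulator(model) as sim:
    sim.run(2.0)
```

In a locomotion controller, the two components of this rhythm can be phase-shifted and mapped to joint targets for each leg.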

The program concluded with a showcase event, where participants presented their projects in a technical session and a public demonstration.

To my great delight, my project “SNN for Legged Locomotion Control” was awarded “The Recurrent Award for Best Use of Recurrent Neural Networks” during the closing banquet and award ceremony.

Reflections and Future Work
The Nengo Summer School was a truly transformative experience. The combination of expert-led theoretical sessions, hands-on tutorials, and intensive mini-projects provided a deep and practical understanding of neuromorphic AI. The experience also helped me establish valuable connections for future collaborations, particularly for continuing my research on neuromorphic AI and robotics at UCL.

I highly recommend the Nengo Summer School to anyone with an interest in neuromorphic computing or biologically inspired AI. It’s a rare and enriching opportunity to engage deeply with a cutting-edge field alongside leading researchers and fellow enthusiasts.

Workshop on Advances in Post-Bayesian Methods: Masha Naslidnyk

By Claire Hudson, on 9 July 2025

On 15 May 2025, a stream of researchers and students wound their way into the Denys Holland Lecture Theatre at UCL, drawn by a shared curiosity: how do we learn reliably when our models are imperfect? This two-day gathering, the inaugural Workshop on Advances in Post-Bayesian Methods, organised by Dr. Jeremias Knoblauch, Yann McLatchie, and Matías Altamirano (UCL), explored advances beyond the confines of classical Bayesian inference.

Traditional Bayesian methods hinge on having the “right” likelihood and a fully specified prior, then performing a precise update when data arrive. But what happens when those assumptions crumble? In fields from cosmology to epidemiology, models are often approximate, priors are chosen more out of convenience than conviction, and exact computation is out of reach. The answer, as highlighted by the organisers, lies in a broader view of Bayes, one that replaces the rigid likelihood with flexible loss functions or divergences, yielding posteriors that behave more like tools in an optimizer’s kit than tenets of statistical doctrine. Over two days in May, six themes emerged:

  1. Reweighting for Robustness
    A number of talks explored how reweighting the data can help account for model misspecification. Ruchira Ray presented statistical guarantees for data-driven tempering, while Prof. Martyn Plummer discussed Bayesian estimating equations that make the resulting inferences invariant to the learning rate (a minimal sketch of a tempered, loss-based posterior appears after this list).
  2. Real-World Impact and Scientific Applications
    Speakers like Devina Mohan and Kai Lehman grounded the discussion in high-impact domains. From galaxy classification to cosmological modeling, these talks showed how post-Bayesian methods are being applied where models are inevitably approximate and uncertainty is essential.
  3. Variational Inference at the Forefront
    Variational methods continued to evolve beyond classical forms. Dr. Kamélia Daudel, Dr. Diana Cai, and Dr. Badr-Eddine Cherief-Abdellatif presented advances in black-box inference and importance weighting, illustrating how variational approaches are expanding to handle more structure, complexity, and real-world constraints.
  4. PAC-Bayesian Perspectives on Generalization
    PAC-Bayes theory offered a unifying language for understanding how well models generalize. Talks by Prof. Benjamin Guedj and Ioar Casado-Telletxea examined comparator bounds and marginal likelihoods through a PAC-Bayesian lens—providing rigorous guarantees even in adversarial or data-limited regimes.
  5. Predictive Bayesian Thinking
    Prof. Sonia Petrone and others emphasized a shift toward prediction-focused Bayesian inference, where the goal is not merely to estimate parameters, but to make useful, calibrated forecasts. This view reframes classical Bayesianism into a pragmatic framework centered on learning what matters.
  6. Gradient Flows and Computational Tools
    Finally, computation was treated not as an afterthought but as a core conceptual tool. Dr. Sam Power and Dr. Zheyang Shen discussed using gradient flows and kernel methods to structure inference, showcasing how modern optimization techniques are reshaping the Bayesian workflow itself (see the second sketch after this list).
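To make the loss-based view concrete, here is a minimal sketch of a generalised (Gibbs) posterior computed on a one-dimensional grid. Everything specific in it is an illustrative assumption rather than material from a talk: the robust absolute-error loss, the standard normal prior, and the learning-rate (tempering) parameter lr.

```python
import numpy as np

# Generalised (Gibbs) posterior on a grid: the likelihood is replaced by
# exp(-lr * total_loss). Loss, prior, and lr are illustrative assumptions.
data = np.array([0.1, -0.2, 0.3, 5.0])      # toy data with one outlier
theta = np.linspace(-2.0, 2.0, 401)         # 1-D parameter grid
lr = 1.0                                    # learning rate / tempering parameter

log_prior = -0.5 * theta**2                 # standard normal prior, up to a constant
loss = np.abs(data[:, None] - theta[None, :]).sum(axis=0)  # robust absolute-error loss
log_post = log_prior - lr * loss            # generalised-Bayes update
post = np.exp(log_post - log_post.max())    # stabilise before exponentiating
post /= post.sum() * (theta[1] - theta[0])  # normalise on the grid

print("Gibbs-posterior mode:", theta[post.argmax()])
```

Because the absolute-error loss grows only linearly, the single outlier at 5.0 shifts the posterior mode far less than it would under a Gaussian likelihood, and dialling lr down tempers the data's influence further.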
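And as a toy example of a kernelised gradient flow, the following sketch implements Stein variational gradient descent (SVGD) for a standard normal target. This is a textbook construction chosen purely for illustration, not a method from a specific talk; the bandwidth, step size, and particle count are arbitrary.

```python
import numpy as np

# Stein variational gradient descent (SVGD), 1-D toy version: particles
# follow a kernelised gradient flow toward a standard normal target.
rng = np.random.default_rng(0)
x = rng.uniform(-6.0, -2.0, size=50)   # particles start far from the target
h, step = 0.5, 0.05                    # RBF bandwidth and step size (assumed)

def grad_log_p(x):
    return -x                          # score of the standard normal target

for _ in range(500):
    diff = x[:, None] - x[None, :]             # diff[j, i] = x_j - x_i
    k = np.exp(-diff**2 / (2 * h**2))          # RBF kernel matrix k(x_j, x_i)
    grad_k = -diff / h**2 * k                  # gradient of k in its first argument
    # SVGD update: attraction along the score plus kernel repulsion.
    phi = (k * grad_log_p(x)[:, None] + grad_k).mean(axis=0)
    x = x + step * phi

print(f"particle mean {x.mean():.2f}, std {x.std():.2f}")
```

The first term drags particles toward high-density regions of the target, while the repulsive kernel-gradient term keeps them spread apart, so the particle cloud approximates the whole posterior rather than collapsing to its mode.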