Workshop on Advances in Post-Bayesian Methods: Masha Naslidnyk
By Claire Hudson, on 9 July 2025
On 15 May 2025, a stream of researchers and students wound their way into the Denys Holland Lecture Theatre at UCL, drawn by a shared curiosity: how do we learn reliably when our models are imperfect? This two-day gathering, the inaugural Workshop on Advances in Post-Bayesian Methods, organised by Dr. Jeremias Knoblauch, Yann McLatchie, and Matías Altamirano (UCL), explored advances beyond the confines of classical Bayesian inference.
Traditional Bayesian methods hinge on having the “right” likelihood and a fully specified prior, then performing a precise update when data arrive. But what happens when those assumptions crumble? In fields from cosmology to epidemiology, models are often approximate, priors are chosen more out of convenience than conviction, and exact computation is out of reach. The answer, as highlighted by the organisers, lies in a broader view of Bayes: one that replaces the rigid likelihood with flexible loss functions or divergences, yielding posteriors that behave more like tools in an optimizer’s kit than tenets of statistical doctrine.
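To make that idea concrete, here is a minimal sketch, not taken from any of the talks, of a generalised-Bayes (Gibbs) posterior on a one-dimensional grid: the usual log-likelihood is swapped for an arbitrary loss, scaled by a learning rate, so a bounded loss such as Huber's yields a posterior far less sensitive to outliers. The prior, losses, learning rate, and contaminated dataset below are all invented for illustration.

```python
import numpy as np

# Minimal sketch of a generalised-Bayes (Gibbs) posterior on a 1-D grid.
# Everything here (prior, losses, learning rate, data) is invented for illustration.

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 95),    # well-specified observations
                       rng.normal(8.0, 1.0, 5)])    # a handful of outliers

theta = np.linspace(-3.0, 10.0, 1001)               # grid over a location parameter
log_prior = -0.5 * theta**2                         # N(0, 1) prior, up to a constant

def gibbs_posterior(loss, eta):
    """Posterior weights proportional to prior * exp(-eta * total loss)."""
    total_loss = np.array([loss(t, data).sum() for t in theta])
    log_post = log_prior - eta * total_loss
    w = np.exp(log_post - log_post.max())
    return w / w.sum()                              # normalised weights on the grid

squared = lambda t, x: 0.5 * (x - t) ** 2           # recovers standard Bayes (Gaussian likelihood)
huber = lambda t, x: np.where(np.abs(x - t) < 1.0,  # bounded-influence loss -> robust posterior
                              0.5 * (x - t) ** 2,
                              np.abs(x - t) - 0.5)

classical = gibbs_posterior(squared, eta=1.0)       # eta is the "learning rate" / temperature
robust = gibbs_posterior(huber, eta=1.0)

print("posterior mean, squared loss:", (theta * classical).sum())
print("posterior mean, Huber loss:  ", (theta * robust).sum())
```

In this toy setup the squared-loss posterior mean is dragged towards the outliers, while the Huber-loss posterior stays close to the true centre; the learning rate eta plays exactly the role of the tempering constant that several of the talks revisit.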
Over two days in May, six themes emerged:
- Reweighting for Robustness
A number of talks explored how reweighting the data can help account for model misspecification. Ruchira Ray presented statistical guarantees for data-driven tempering, while Prof. Martyn Plummer discussed Bayesian estimating equations that make inferences invariant to the learning rate.
- Real-World Impact and Scientific Applications
Speakers like Devina Mohan and Kai Lehman grounded the discussion in high-impact domains. From galaxy classification to cosmological modeling, these talks showed how post-Bayesian methods are being applied where models are inevitably approximate and uncertainty is essential.
- Variational Inference at the Forefront
Variational methods continued to evolve beyond classical forms. Dr. Kamélia Daudel, Dr. Diana Cai, and Dr. Badr-Eddine Cherief-Abdellatif presented advances in black-box inference and importance weighting, illustrating how variational approaches are expanding to handle more structure, complexity, and real-world constraints.
- PAC-Bayesian Perspectives on Generalization
PAC-Bayes theory offered a unifying language for understanding how well models generalize. Talks by Prof. Benjamin Guedj and Ioar Casado-Telletxea examined comparator bounds and marginal likelihoods through a PAC-Bayesian lens, providing rigorous guarantees even in adversarial or data-limited regimes.
- Predictive Bayesian Thinking
Prof. Sonia Petrone and others emphasized a shift toward prediction-focused Bayesian inference, where the goal is not merely to estimate parameters, but to make useful, calibrated forecasts. This view reframes classical Bayesianism into a pragmatic framework centered on learning what matters.
- Gradient Flows and Computational Tools
Finally, computation was treated not as an afterthought but as a core conceptual tool. Dr. Sam Power and Dr. Zheyang Shen discussed using gradient flows and kernel methods to structure inference, showcasing how modern optimization techniques are reshaping the Bayesian workflow itself.
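To give a flavour of what treating inference as a gradient flow can look like in practice, here is a small, self-contained sketch of Stein variational gradient descent, one kernel-based discretisation of such a flow. It is not code from the talks: the Gaussian target, particle count, bandwidth heuristic, and step size are assumptions chosen purely for illustration.

```python
import numpy as np

# Sketch of Stein variational gradient descent (SVGD): a kernelised gradient
# flow that moves a set of particles towards a target distribution. The target,
# particle count and step size are illustrative choices, not taken from the talks.

rng = np.random.default_rng(1)

def grad_log_target(x):
    """Score of a standard 2-D Gaussian target: grad log p(x) = -x."""
    return -x

def rbf_kernel(x):
    """RBF kernel matrix and its gradients, with a median-distance bandwidth."""
    diff = x[:, None, :] - x[None, :, :]              # diff[i, j] = x_i - x_j
    sq = (diff ** 2).sum(-1)
    h = np.median(sq) / np.log(len(x) + 1) + 1e-8     # median heuristic for the bandwidth
    k = np.exp(-sq / h)
    grad_k = 2.0 * diff * k[:, :, None] / h           # d k(x_j, x_i) / d x_j, indexed [i, j]
    return k, grad_k

particles = rng.normal(5.0, 1.0, size=(50, 2))        # start the particles far from the target
step = 0.5

for _ in range(1000):
    k, grad_k = rbf_kernel(particles)
    # SVGD update: kernel-smoothed score (attraction) plus a repulsion term.
    phi = (k @ grad_log_target(particles) + grad_k.sum(axis=1)) / len(particles)
    particles += step * phi

print("particle mean:", particles.mean(axis=0))       # drifts towards the target mean (0, 0)
print("particle std: ", particles.std(axis=0))        # spread settles near the target's unit scale
```

The update combines two ingredients: a kernel-smoothed score term that pulls particles towards high-probability regions, and a repulsion term that keeps them spread out, so the ensemble approximates the whole posterior rather than collapsing onto its mode.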