Split personalities in Bayesian Neural Networks

Abstract

The true posterior distribution of a Bayesian neural network is massively multimodal. Whilst most of these modes are functionally equivalent, we demonstrate that there remains a level of real multimodality that manifests in even the simplest neural network setups. It is only by fully marginalising over all posterior modes, using appropriate Bayesian sampling tools, that we can capture the split personalities of the network. The ability of a network trained in this manner to reason between multiple candidate solutions dramatically improves the generalisability of the model, a feature we contend is not consistently captured by alternative approaches to the training of Bayesian neural networks. We provide a concise minimal example of this, which can provide lessons and a future path forward for correctly utilising the explainability and interpretability of Bayesian neural networks.
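For concreteness, the "full marginalisation" the abstract refers to is the standard Bayesian posterior predictive: predictions average over all parameter settings weighted by the posterior, rather than committing to any single mode. In the expression below, $\theta$ denotes the network weights and $\mathcal{D}$ the training data; the Monte Carlo form on the right is how any sampling-based tool approximates the integral.

$$
p(y \mid x, \mathcal{D}) = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta \;\approx\; \frac{1}{N} \sum_{i=1}^{N} p(y \mid x, \theta_i), \qquad \theta_i \sim p(\theta \mid \mathcal{D})
$$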

Lightning summary:

Starting from the simplest problem, a noisy XOR classifier, and the simplest Bayesian neural network,

We show that a posterior emerges that is genuinely multimodal in parameter space,

Which in turn induces a fully marginalised solution with better generalisability on the problem,

Capturing this poses a strong challenge for almost all Bayesian inference methods over neural architectures (a minimal sketch of the setup follows below).
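To make the summary concrete, here is a minimal, self-contained sketch of the setup it describes: a noisy XOR dataset, a two-hidden-unit Bayesian neural network, posterior samples drawn with a deliberately simple random-walk Metropolis sampler pooled over several chains, and predictions formed by averaging over those samples. The sampler, network size, prior, and every hyperparameter below are illustrative assumptions of mine, not the paper's configuration; the "appropriate Bayesian sampling tools" of the abstract would be something more capable of traversing all posterior modes reliably.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy XOR data: four Gaussian clusters at the corners of a square,
# labelled by the XOR of the corner signs.
def make_xor(n=200, noise=0.3):
    centres = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
    idx = rng.integers(0, 4, size=n)
    X = centres[idx] + noise * rng.standard_normal((n, 2))
    y = (centres[idx, 0] * centres[idx, 1] < 0).astype(float)
    return X, y

# Smallest network that can express XOR: 2 inputs -> 2 tanh units -> 1 logit.
# theta packs all 9 parameters: W1 (2x2), b1 (2), w2 (2), b2 (scalar).
def logits(theta, X):
    W1, b1 = theta[:4].reshape(2, 2), theta[4:6]
    w2, b2 = theta[6:8], theta[8]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def log_posterior(theta, X, y, prior_sigma=3.0):
    l = logits(theta, X)
    loglik = np.sum(y * l - np.logaddexp(0.0, l))        # Bernoulli likelihood
    logprior = -0.5 * np.sum(theta**2) / prior_sigma**2  # Gaussian prior
    return loglik + logprior

# Random-walk Metropolis, pooled over several chains started from different
# prior draws, so the pooled samples can cover more than one posterior mode.
def sample_posterior(X, y, n_chains=8, n_steps=20_000, step=0.15, thin=50):
    pooled = []
    for _ in range(n_chains):
        theta = 3.0 * rng.standard_normal(9)  # rough draw from the prior
        lp = log_posterior(theta, X, y)
        for t in range(n_steps):
            prop = theta + step * rng.standard_normal(9)
            lp_prop = log_posterior(prop, X, y)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            if t % thin == 0:
                pooled.append(theta.copy())
    return np.array(pooled)

X, y = make_xor()
samples = sample_posterior(X, y)

# Posterior predictive at the four corners: average the per-sample class
# probabilities, i.e. marginalise over all modes rather than picking one fit.
corners = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
per_sample = 1.0 / (1.0 + np.exp(-np.array([logits(t, corners) for t in samples])))
print("marginal P(class 1) at corners:", per_sample.mean(axis=0).round(2))
```

Individual posterior samples can disagree sharply on how to carve up the plane; it is only the pooled average in the last step that reflects the network's "split personalities" and, per the paper's argument, generalises better than any single mode.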

Tags: bayes, neural network, machine learning
David Yallup
Research Associate

I am a researcher in Bayesian Machine Learning, specialising in applications in fundamental physics.