
Computational complexity of Bayesian inference

As already stated several times, Bayesian inference is in general computationally very expensive. The major problems of probabilistic inference have been proven NP-hard or worse. This is the case, for instance, for exact inference in singly connected Bayesian nets [Wu and Butz, 2005] and in multiply connected Bayesian nets [Cooper, 1990], for approximate inference [Dagum and Luby, 1993], for optimal triangulation [Jordan and Weiss, 2002], and for optimal ordering [Arnborg et al., 1987]. Darwiche [2009] offers a comprehensive review of these questions.

The only way out is to simplify the problem. There are two approaches.

The first one is problem independent and consists of using generic approximation algorithms based on heuristics and sampling. Some of these are described in Chapter 14 of the “Bayesian Programming” book. Markov chain Monte Carlo (MCMC) algorithms are by far the most successful in this category (a minimal example is sketched below). However, until proven otherwise, NP-hard problems remain NP-hard, which implies that there will necessarily be instances on which these algorithms fail.
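To make this concrete, here is a minimal Metropolis-Hastings sketch in Python (a hypothetical example, not taken from the book): it approximates the posterior over the mean of a Gaussian with known unit variance under a standard-normal prior, using a symmetric random-walk proposal.

```python
import random
import math

# Hypothetical example (not from the book): infer the mean theta of a
# Gaussian with known variance 1, under a standard-normal prior on theta.

data = [1.2, 0.7, 1.5, 0.9, 1.1]  # made-up observations

def log_posterior(theta):
    lp = -0.5 * theta ** 2                              # log prior N(0, 1)
    lp += sum(-0.5 * (x - theta) ** 2 for x in data)    # log likelihood
    return lp

def metropolis(n_samples, step=0.5):
    theta = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, step)      # symmetric proposal
        # accept with probability min(1, p(proposal) / p(theta))
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis(20_000)[5_000:]   # drop burn-in
print(sum(samples) / len(samples))     # posterior mean estimate, ~0.9
```

For this conjugate model the exact posterior mean is 0.9 for these observations, so the estimate can be checked directly; on genuinely hard inference problems no such check is available.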

The second one is problem dependent and consists of dividing the problem into a combination of simpler ones by introducing additional conditional independence assumptions. This is the central purpose of the decomposition step in the Bayesian programming approach, and the efficacy of this process has been demonstrated numerous times in this book. However, any new conditional independence hypothesis is paid for by a certain loss of information and transforms the initial problem into a different one. The very difficult challenge for the programmer is to find the conditional independence hypotheses that make sense and lead to acceptable results. The sketch below illustrates the computational gain such a decomposition buys.
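As a toy illustration (hypothetical variables and made-up probability tables, not from the book), consider a binary chain A → B → C. Assuming P(C | A, B) = P(C | B) lets sums be pushed inside the product, so the marginal P(C) is computed stage by stage instead of by enumerating the full joint:

```python
from itertools import product

# Made-up conditional probability tables for a binary chain A -> B -> C.
p_a = {0: 0.6, 1: 0.4}                                          # P(A)
p_b_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}    # P(B | A), keyed (b, a)
p_c_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}    # P(C | B), keyed (c, b)

def p_c_brute(c):
    # Brute force: enumerate the full joint; the number of terms
    # grows as 2^(n-1) with n variables in the chain.
    return sum(p_a[a] * p_b_a[(b, a)] * p_c_b[(c, b)]
               for a, b in product((0, 1), repeat=2))

def p_c_eliminated(c):
    # Decomposed: eliminate A first, then B; constant work per stage,
    # i.e. linear in the number of variables for a chain.
    p_b = {b: sum(p_a[a] * p_b_a[(b, a)] for a in (0, 1)) for b in (0, 1)}
    return sum(p_b[b] * p_c_b[(c, b)] for b in (0, 1))

print(p_c_brute(1), p_c_eliminated(1))   # identical results: 0.35 0.35
```

Both computations return the same marginal, but the decomposed one touches exponentially fewer terms as the chain grows; the price, as noted above, is whatever information the independence assumption discards.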
