In this work we mainly restricted ourselves to the use of discrete variables.
The main reason for this is that we believe continuous variables to be a pure fiction on the computer, and at worst quite a dangerous one. For instance, real numbers encoded by the types “float,” “double,” or “long double” are all represented using discrete values on 32, 64, or 80 bits. Unfortunately, the discretization of continuous values may lead to numerous problems, whether that discretization uses 2 bits or 80 bits. For instance, two different roots of a polynomial function may become indistinguishable because they take the same value once discretized. Well-posed mathematical methods can thus lead to ill-behaved computer algorithms due to the effects of discretization.
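This collapse of distinct real values is easy to exhibit; here is a minimal Python sketch (our illustration, not from the original text) of two roots that differ by less than the resolution of 64-bit floats near 1.0 and so become strictly equal once encoded:

```python
# Two mathematically distinct roots of (x - a)(x - b) that collapse to the
# same 64-bit float: 1e-20 is far below the spacing of float64 values near
# 1.0 (machine epsilon is about 2.2e-16), so the sum rounds back to 1.0.
a = 1.0
b = 1.0 + 1e-20
print(a == b)  # True: the two roots are indistinguishable once discretized
```

The same collapse happens at any bit width; only the threshold at which two values merge changes.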
We think that “discretization” is a very important and difficult modeling choice that deserves careful thinking and cannot be blindly left to computer float encoding.
For instance, if you have a signal with a high dynamic range, whether in energy (light for optical sensors, sound for auditory ones) or in range (distance for laser devices), a logarithmic discretization is often wise: it preserves accuracy for small values while still covering the whole range of the signal.
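Logarithmically spaced bins can be computed in a few lines of Python; this is a sketch under our own conventions (the names `log_bins` and `discretize` are hypothetical, chosen for this illustration):

```python
import bisect
import math

def log_bins(lo, hi, n):
    """Return n + 1 logarithmically spaced bin edges between lo and hi (lo > 0)."""
    step = (math.log(hi) - math.log(lo)) / n
    return [math.exp(math.log(lo) + i * step) for i in range(n + 1)]

def discretize(x, edges):
    """Return the index of the bin containing x, clamped to the valid range."""
    i = bisect.bisect_right(edges, x) - 1
    return max(0, min(i, len(edges) - 2))

# 6 bins spanning 5 orders of magnitude: narrow bins for small values,
# wide bins for large ones, yet the whole range is covered.
edges = log_bins(0.01, 1000.0, 6)
print([round(e, 2) for e in edges])
```

Each bin covers the same ratio between its edges rather than the same difference, which is what preserves relative accuracy for small values.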
Another example is given by dynamic environments (changing rapidly over time), where a coarse discretization (with few values) is often a clever choice. Indeed, there is a trade-off between the precision of the discretization and the computing time required for inference. In some cases, lowering the accuracy in order to speed up computation may lead to better results, by improving reactivity to rapidly changing environments.
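The cost side of this trade-off is easy to quantify in the simplest case: exact inference by enumeration over a joint distribution of k variables with N values each must visit all N**k joint states, so halving the resolution of every variable divides the work by 2**k. A back-of-the-envelope sketch (our illustration, with arbitrary example values of N and k):

```python
# Joint-state count for k discrete variables with N values each: exact
# inference by enumeration touches every one of the N**k joint states,
# so a coarser discretization (smaller N) shrinks the work exponentially in k.
k = 4
for N in (8, 16, 32):
    print(f"N={N:2d}  joint states={N**k:>10,d}")
```

Doubling the resolution of each of the four variables here multiplies the state count by 16, which is why a rough discretization can buy the reactivity a fast-changing environment demands.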
In Bayesian programming, the first and most difficult modeling choice is the selection of the relevant variables. To be more complete and exact, we should say that this first modeling choice is the selection of the relevant variables and their encoding as discrete variables. This encoding choice is a very important component of the model, yet it is too often neglected because it is a delicate question.
Furthermore, considering continuous variables obliges us to use measure theory as defined by Émile Borel and Henri-Léon Lebesgue and leads to Andrey Kolmogorov’s axiomatization of probability. Of course this approach to probability is of primary importance and has countless applications. However, it is a very different concept of and viewpoint on probability from the epistemological position adopted in Bayesian Programming, where probability is considered as an alternative to and an extension of logic (see Jaynes for an extended discussion and comparison between his approach and Kolmogorov’s).