## Archive for auxiliary variables

## Rao-Blackwellisation in the MCMC era

Posted in Books, Statistics, University life with tags auxiliary variables, birthday, C.R. Rao, conditioning, David Blackwell, demarginalisation, International Statistical Review, MCMC, Monte Carlo Statistical Methods, Rao-Blackwell theorem, Rao-Blackwellisation on January 6, 2021 by xi'an

**A** few months ago, as indicated on this blog, I was contacted by the ISR editors to write a piece on Rao-Blackwellisation for a special issue celebrating Calyampudi Radhakrishna Rao's 100th birthday. Gareth Roberts and I came up with this survey, now on arXiv, discussing different aspects of Monte Carlo and Markov chain Monte Carlo that pertain to Rao-Blackwellisation, one way or another. As I discussed the topic with several friends over the Fall, it appeared that the difficulty was more in setting the boundaries than in finding connections: in a way, anything involving conditioning, demarginalisation, or auxiliary variates is a form of Rao-Blackwellisation. When re-reading the 1990 JASA paper by Gelfand and Smith, where I first saw the link between the Rao-Blackwell theorem and simulation, I realised my memory of it had drifted from the original, since the authors proposed there an approximation of the marginal based on replicas rather than on the original Markov chain, much closer to Tanner and Wong (1987) than I remembered. It is only later that the true notion took shape. *[Since the current version is still a draft, any comment or suggestion would be most welcomed!]*

## Why do we draw parameters to draw from a marginal distribution that does not contain the parameters?

Posted in Statistics with tags accept-reject algorithm, Animal Farm, auxiliary variables, cross validated, importance sampling, marginalisation, multiple importance methods, probability basics on November 3, 2019 by xi'an

**A** revealing question on X validated about a simulation concept students (and others) have trouble grasping: using auxiliary variates to simulate from a marginal distribution. Since these auxiliary variables are later dismissed, they appear to students to be of no use at all, even after exposure to the accept-reject algorithm, or to multiple importance sampling, where a realisation of a random variable can be associated with a whole series of densities in an importance weight, all of them being valid (but some more equal than others!).
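As a minimal illustration of the point (my own sketch, not from the X validated discussion): in accept-reject, the auxiliary uniform only decides acceptance and is then thrown away, yet the accepted draws are exact simulations from the marginal. Here for a Beta(2,2) target under a uniform proposal:

```python
import numpy as np

rng = np.random.default_rng(2)

# accept-reject for Beta(2,2) with density 6x(1-x) and a U(0,1) proposal:
# the auxiliary uniform u is only used in the acceptance test and then
# discarded, yet the kept x's are exact draws from the marginal Beta(2,2)
def beta22_sample(n):
    M = 1.5                      # sup of 6x(1-x) over [0,1]
    out = []
    while len(out) < n:
        x = rng.uniform()        # proposal draw
        u = rng.uniform()        # auxiliary variate, dismissed afterwards
        if u * M <= 6 * x * (1 - x):
            out.append(x)
    return np.array(out)

xs = beta22_sample(100_000)
print(xs.mean())   # Beta(2,2) mean is 1/2
```

The joint pair (x,u) lives under the graph of the density, and projecting the accepted pairs onto x alone is precisely the marginalisation the question was about.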

## adaptive exchange

Posted in Books, Statistics, University life with tags adaptive MCMC methods, auxiliary variables, bias, doubly intractable problems, evolutionary Monte Carlo, JASA, Markov chain Monte Carlo algorithm, Monte Carlo Statistical Methods, normalising constant, perfect sampling, simulated annealing on October 27, 2016 by xi'an

**I**n the March 2016 issue of JASA that currently sits on my desk, there is a paper by Liang, Jin, Song and Liu on the adaptive exchange algorithm, which aims at handling posteriors for sampling distributions with intractable normalising constants. The concept behind the algorithm is the exchange principle initiated by Jesper Møller and co-authors in 2006, where an auxiliary pseudo-observation is simulated so that the missing constants vanish from the Metropolis-Hastings ratio. (The name *exchange algorithm* was introduced in a subsequent paper by Iain Murray, Zoubin Ghahramani and David MacKay, also in 2006.)

The crux of the method is to run an iteration as follows [where y denotes the observation]:

- propose a new value θ’ of the parameter from a proposal q(θ’|θ);
- generate a pseudo-observation z~ƒ(z|θ’);
- accept with probability

min{ 1, [π(θ’) q(θ|θ’) ƒ(y|θ’) ƒ(z|θ)] / [π(θ) q(θ’|θ) ƒ(y|θ) ƒ(z|θ’)] }

which has the appeal of cancelling all normalising constants. And the repeal of requiring an *exact* simulation from the very distribution with the missing constant, ƒ(.|θ), which means that in practice a *finite* number of MCMC steps will be used, *biasing* the outcome. The algorithm is unusual in that it replaces the exact proposal q(θ’|θ) with an unbiased random version q(θ’|θ)ƒ(z|θ’), z being just an augmentation of the proposal. (The current JASA paper by Liang et al. seems to confuse *augment* and *argument*, see p.378.)
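The exchange step can be sketched on a toy model (my own illustration: an exponential likelihood whose normalising constant 1/θ we pretend is intractable, with a flat prior and a log-scale random-walk proposal, all assumptions of mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# pretend-intractable model: f(y|theta) = theta * exp(-theta*y), of which
# we only evaluate the unnormalised part g(y|theta) = exp(-theta*y)
def log_g(y, theta):
    return -theta * np.sum(y)

def exchange_mcmc(y, n_iter=5000, theta0=1.0, step=0.3):
    theta = theta0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        # random walk on the log scale keeps theta' positive
        theta_prop = theta * np.exp(step * rng.normal())
        # exact pseudo-observation z ~ f(.|theta'), same size as the data
        z = rng.exponential(scale=1.0 / theta_prop, size=len(y))
        # exchange ratio: the constants Z(theta), Z(theta') cancel, leaving
        # only unnormalised g terms (flat prior; log(theta'/theta) is the
        # proposal ratio for the log-scale random walk)
        log_alpha = (log_g(y, theta_prop) + log_g(z, theta)
                     - log_g(y, theta) - log_g(z, theta_prop)
                     + np.log(theta_prop) - np.log(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta = theta_prop
        chain[t] = theta
    return chain

y = rng.exponential(scale=0.5, size=100)   # data with true theta = 2
chain = exchange_mcmc(y)
print(chain[2000:].mean())                 # posterior mean near 2
```

The pseudo-observation z plays its only role inside the acceptance ratio, which is exactly why exact simulation from ƒ(.|θ’) matters: substituting a finite MCMC run for it, as discussed above, biases the outcome.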

To avoid the difficulty of simulating from ƒ(.|θ), the authors draw pseudo-observations from sampling distributions with a *finite* number m of parameter values, under the [unrealistic] assumption (A⁰) that this collection of values provides an almost complete cover of the posterior support. One of the tricks lies in an auxiliary [time-heterogeneous] chain of pseudo-observations generated by single Metropolis steps from one of these m fixed targets. These pseudo-observations are then used in the main (or *target*) chain to define the above exchange probability. The auxiliary chain is Markov but time-heterogeneous, since the probabilities of accepting a move evolve with time according to a simulated annealing schedule, which produces a convergent estimate of the m normalising constants. The main chain is not Markov in that it depends on the whole history of the auxiliary chain [see Step 5, p.380]; even jointly, the collection of both chains is not Markov. The paper prefers to consider the process as an adaptive Markov chain. I did not check the rather intricate details, so I cannot judge the validity of the overall algorithm; I simply note that one condition (A², p.383) is incredibly strong, in that it assumes the Markov transition kernel to be uniformly Doeblin on any compact set of the calibration parameters. However, the major difficulty with this approach seems to lie in its delicate calibration: from providing a reference set of m parameter values scanning the posterior support, to picking transition kernels on both the parameter and the sample spaces, to properly cooling the annealing schedule [always a fun part!], there seems to be [from my armchair expert's perspective, of course!] a wide range of opportunities for missing the target or running into zero acceptance problems.
Both examples analysed in the paper, the auto-logistic and the auto-normal models, are actually of limited complexity in that they depend on a few parameters, 2 and 4 resp., and enjoy sufficient statistics, of dimensions 2 and 4 as well. Hence simulating (pseudo-)realisations of those sufficient statistics should be less challenging than the original approach replicating an entire vector of thousands of dimensions.

## common derivation for Metropolis–Hastings and other MCMC algorithms

Posted in Books, pictures, Statistics, Travel, University life with tags auxiliary variables, directional sampling, Gibbs sampling, Hamiltonian Monte Carlo, Metropolis-Hastings algorithms, Metropolis-within-Gibbs algorithm, NUTS, pseudo-marginal MCMC, recursive proposals, RJMCMC, slice sampling, Sydney, UNSW on July 25, 2016 by xi'an

**K**hoa Tran and Robert Kohn from UNSW just arXived a paper on a comprehensive derivation of a large range of MCMC algorithms, beyond Metropolis-Hastings. The idea is to decompose the MCMC move into

- a random completion of the current value θ into V;
- a deterministic move T from (θ,V) to (ξ,W), where only ξ matters.

If this sounds like a new version of Peter Green’s completion at the core of his 1995 RJMCMC algorithm, it is because it is indeed essentially the same notion. The resort to this completion allows for a standard form of the Metropolis-Hastings algorithm, which leads to the correct stationary distribution if T is self-inverse. This representation covers Metropolis-Hastings algorithms, Gibbs sampling, Metropolis-within-Gibbs and auxiliary variables methods, slice sampling, recursive proposals, directional sampling, Langevin and Hamiltonian Monte Carlo, NUTS sampling, pseudo-marginal Metropolis-Hastings algorithms, and pseudo-marginal Hamiltonian Monte Carlo, as discussed by the authors. Given this representation of the Markov chain through a random transform, I wonder if Peter Glynn’s trick mentioned in the previous post on retrospective Monte Carlo applies in this generic setting (as it could considerably improve convergence…)
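For instance, plain random-walk Metropolis fits this template (my own sketch, not the authors' notation): complete θ with V~N(0,s²) and take the deterministic map T(θ,V)=(θ+V,−V), which is self-inverse since applying T twice returns (θ,V).

```python
import numpy as np

rng = np.random.default_rng(1)

# target: standard normal, log-density up to an additive constant
def log_pi(x):
    return -0.5 * x**2

# random-walk Metropolis recast as completion + self-inverse transform:
#   complete theta with V ~ N(0, s^2), then move T(theta, V) = (theta+V, -V);
# T o T is the identity, so the usual MH ratio on the joint applies
def mh_via_transform(n_iter=20_000, s=1.0):
    theta = 0.0
    out = np.empty(n_iter)
    for t in range(n_iter):
        v = s * rng.normal()
        xi, w = theta + v, -v          # deterministic self-inverse move
        # joint ratio pi(xi)phi(w) / pi(theta)phi(v); since phi(-v)=phi(v),
        # the completion densities cancel and the usual RW ratio remains
        if np.log(rng.uniform()) < log_pi(xi) - log_pi(theta):
            theta = xi
        out[t] = theta
    return out

chain = mh_via_transform()
print(chain.mean(), chain.var())       # both should be near 0 and 1
```

The same skeleton covers the other samplers in the paper's list by changing the completion law for V and the deterministic map T, e.g. a leapfrog integrator followed by momentum flip for Hamiltonian Monte Carlo.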

## recent advances in Monte Carlo Methods

Posted in R, Statistics, Travel, University life with tags ABC, auxiliary variables, England, Imperial College London, London, MCMC, Monte Carlo Statistical Methods, particle methods, Read paper, simulation, target environment, warwick university on February 8, 2012 by xi'an

**N**ext Thursday *(Feb. 16)*, at the RSS, there will be a special half-day meeting (*afternoon, starting at 13:30*) on Recent Advances in Monte Carlo Methods organised by the General Application Section. The speakers are

- Richard Everitt, University of Oxford, *Missing data, and what to do about it*
- Anthony Lee, Warwick University, *Auxiliary variables and many-core computation*
- Nicolas Kantas, Imperial College London, *Particle methods for computing optimal control inputs*
- Nick Whiteley, Bristol University, *Stability properties of some particle filters*
- Simon Maskell, QinetiQ & Imperial College London, *Using a Probabilistic Hypothesis Density filter to confirm tracks in a multi-target environment*

*(Note this is not a Read Paper meeting, so there is no paper nor discussion!)*