Run Post-correction for Approximate MCMC using ψ-APF
Description

The function post_correct updates previously obtained approximate MCMC output with post-correction weights, leading to an asymptotically exact weighted posterior. It returns the updated MCMC output in which the components weights, posterior, alpha, alphahat, and Vt are updated (depending on the original output type).
Usage

post_correct(
  model,
  mcmc_output,
  particles,
  threads = 1L,
  is_type = "is2",
  seed = sample(.Machine$integer.max, size = 1)
)
Arguments

model: Model object used in the original approximate run_mcmc call.

mcmc_output: An output from run_mcmc, i.e. the approximate MCMC output to be corrected.

particles: Number of particles for the ψ-APF.

threads: Number of parallel threads. Default is 1.

is_type: Type of IS-correction. Default is "is2".

seed: Seed for the random number generator. By default, a random seed is used.
Value

The original MCMC output with the components weights, posterior, alpha, alphahat, and Vt updated (depending on the original output type).
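As a quick diagnostic of how much the correction weights vary, Kish's effective sample size can be computed from the weights with base R. The values below are hypothetical; with actual bssm output the weights component of the corrected object would be used instead (e.g. out_is2$weights).

```r
# Hypothetical post-correction weights; in practice these would be
# extracted from the corrected MCMC output (e.g. out_is2$weights)
w <- c(0.5, 1.5, 1.0, 1.0)

# Kish's effective sample size: sum(w)^2 / sum(w^2).
# Close to length(w) means the weights are nearly uniform and the
# approximation was good; much smaller values signal high weight variance.
ess <- sum(w)^2 / sum(w^2)
ess   # about 3.56 out of 4 draws
```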
References

Doucet, A., Pitt, M. K., Deligiannidis, G., and Kohn, R. (2015). Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. Biometrika, 102(2), 295-313. https://doi.org/10.1093/biomet/asu075

Vihola, M., Helske, J., and Franks, J. (2020). Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo. Scandinavian Journal of Statistics, 1-38. https://doi.org/10.1111/sjos.12492
Examples

## Not run:
set.seed(1)
n <- 300
x1 <- sin((2 * pi / 12) * 1:n)
x2 <- cos((2 * pi / 12) * 1:n)
alpha <- numeric(n)
alpha[1] <- 0
rho <- 0.7
sigma <- 2
mu <- 1
for(i in 2:n) {
  alpha[i] <- rnorm(1, mu * (1 - rho) + rho * alpha[i-1], sigma)
}
u <- rpois(n, 50)
y <- rbinom(n, size = u, plogis(0.5 * x1 + x2 + alpha))
ts.plot(y / u)

model <- ar1_ng(y, distribution = "binomial",
  rho = uniform(0.5, -1, 1), sigma = gamma(1, 2, 0.001),
  mu = normal(0, 0, 10),
  xreg = cbind(x1, x2), beta = normal(c(0, 0), 0, 5),
  u = u)

out_approx <- run_mcmc(model, mcmc_type = "approx",
  local_approx = FALSE, iter = 50000)

out_is2 <- post_correct(model, out_approx, particles = 30,
  threads = 2)
out_is2$time

summary(out_approx, return_se = TRUE)
summary(out_is2, return_se = TRUE)

# latent state
library("dplyr")
library("ggplot2")
state_approx <- as.data.frame(out_approx, variable = "states") %>%
  group_by(time) %>%
  summarise(mean = mean(value))

state_exact <- as.data.frame(out_is2, variable = "states") %>%
  group_by(time) %>%
  summarise(mean = weighted.mean(value, weight))

dplyr::bind_rows(approx = state_approx,
  exact = state_exact, .id = "method") %>%
  filter(time > 200) %>%
  ggplot(aes(time, mean, colour = method)) +
  geom_line() +
  theme_bw()

# posterior means
p_approx <- predict(out_approx, model, type = "mean",
  nsim = 1000, future = FALSE) %>%
  group_by(time) %>%
  summarise(mean = mean(value))

p_exact <- predict(out_is2, model, type = "mean",
  nsim = 1000, future = FALSE) %>%
  group_by(time) %>%
  summarise(mean = mean(value))

dplyr::bind_rows(approx = p_approx, exact = p_exact, .id = "method") %>%
  filter(time > 200) %>%
  ggplot(aes(time, mean, colour = method)) +
  geom_line() +
  theme_bw()

## End(Not run)