run_mcmc.ssm_sde

Bayesian Inference of SDE


Description

Methods for posterior inference of states and parameters.

Usage

## S3 method for class 'ssm_sde'
run_mcmc(
  model,
  iter,
  particles,
  output_type = "full",
  mcmc_type = "is2",
  L_c,
  L_f,
  burnin = floor(iter/2),
  thin = 1,
  gamma = 2/3,
  target_acceptance = 0.234,
  S,
  end_adaptive_phase = FALSE,
  threads = 1,
  seed = sample(.Machine$integer.max, size = 1),
  ...
)
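
A minimal sketch of a typical call, assuming model is an existing ssm_sde object (building one requires compiled C++ model functions; see the ssm_sde constructor in bssm). All arguments below are documented on this page:

out <- run_mcmc(model,
  iter = 20000,       # total iterations; first 10000 discarded as burn-in
  particles = 30,     # number of state samples per iteration
  mcmc_type = "is2",  # default jump chain IS weighting
  L_c = 2, L_f = 5,   # coarse and fine discretization levels (2^2 and 2^5)
  seed = 1
)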

Arguments

model

Model of class ssm_sde.

iter

Number of MCMC iterations.

particles

Number of state samples per MCMC iteration.

output_type

Either "full" (default, returns posterior samples of states alpha and hyperparameters theta), "theta" (for marginal posterior of theta), or "summary" (return the mean and variance estimates of the states and posterior samples of theta). In case of "summary", means and covariances are computed using the full output of particle filter instead of sampling one of these as in case of output_type = "full". If particles = 0, this is argument ignored and set to "theta".

mcmc_type

The MCMC algorithm to use. Possible choices are "pm" for pseudo-marginal MCMC, "da" for the delayed acceptance version of pseudo-marginal MCMC, or one of the three importance sampling type weighting schemes: "is3" for simple importance sampling (the weight is computed independently for each MCMC iteration), "is2" for jump chain importance sampling type weighting (the default), or "is1" for importance sampling type weighting where the number of particles used for the weight computation is proportional to the length of the jump chain block.
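
For the IS-type schemes, posterior summaries should account for the stored weights. A hedged sketch: theta, counts, and weights are assumed component names for the stored samples, jump chain counts, and IS weights of the output object:

w <- out$counts * out$weights                 # combined weight per stored sample
post_mean <- colSums(out$theta * w) / sum(w)  # weighted posterior mean of theta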

L_c, L_f

Integer values defining the discretization levels for the first and second stages (the number of discretization points is defined as 2^L). For PM methods, the maximum of these is used.
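
For instance, a delayed acceptance run using a coarse first stage and a finer second stage (a sketch, reusing the assumed model object from above):

out_da <- run_mcmc(model, iter = 20000, particles = 30,
  mcmc_type = "da",
  L_c = 2, L_f = 5)  # coarse mesh: 2^2 = 4 points, fine mesh: 2^5 = 32 points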

burnin

Length of the burn-in period, which is disregarded from the results. Defaults to iter / 2.

thin

Thinning rate. Defaults to 1. Increase for large models in order to save memory. For IS-corrected methods, a larger value can also be statistically more effective. Note: with output_type = "summary", thinning does not affect the computation of the summary statistics in the case of pseudo-marginal methods.

gamma

Tuning parameter for the adaptation of the RAM algorithm. Must be between 0 and 1 (not checked).

target_acceptance

Target acceptance ratio for the RAM algorithm. Defaults to 0.234. For DA-MCMC, this corresponds to the first-stage acceptance rate, i.e., the total acceptance rate will be smaller.

S

Initial value for the lower triangular matrix of the RAM algorithm, so that the covariance matrix of the Gaussian proposal distribution is SS'. Note that for some parameters (currently the standard deviation and dispersion parameters of bsm_ng models), sampling is done on a transformed scale, with internal_theta = log(theta).
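
A sketch of supplying an initial S, assuming theta contains three hyperparameters (the dimension of S must match the length of theta):

S <- diag(0.1, 3)  # proposal covariance starts at S %*% t(S) = 0.01 * diag(3)
out <- run_mcmc(model, iter = 20000, particles = 30,
  L_c = 2, L_f = 5, S = S)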

end_adaptive_phase

If TRUE, S is held fixed after the burnin period. Default is FALSE.

threads

Number of threads for state simulation.

seed

Seed for the random number generator.

...

Ignored.

References

Vihola, M., Helske, J., and Franks, J. (2020). Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo. Scandinavian Journal of Statistics, 1-38. https://doi.org/10.1111/sjos.12492


bssm: Bayesian Inference of Non-Linear and Non-Gaussian State Space Models

Version: 1.1.4
License: GPL (>= 2)
Authors: Jouni Helske [aut, cre] (<https://orcid.org/0000-0001-7130-793X>), Matti Vihola [aut] (<https://orcid.org/0000-0002-8041-7222>)
Initial release: 2021-04-13
