Marginal Likelihood for Simple MPT
Computes the marginal likelihood for simple (fixed-effects, nonhierarchical) MPT models.
marginalMPT(
  eqnfile,
  data,
  restrictions,
  alpha = 1,
  beta = 1,
  dataset = 1,
  method = "importance",
  posterior = 500,
  mix = 0.05,
  scale = 0.9,
  samples = 10000,
  batches = 10,
  show = TRUE,
  cores = 1
)
eqnfile: The (relative or full) path to the file that specifies the MPT model (standard .eqn syntax). Note that category labels must start with a letter (different from multiTree) and must match the column names of data.
data: The (relative or full) path to the .csv file with the data (comma-separated; category labels in the first row). Alternatively: a data frame or matrix (rows = individuals, columns = individual category frequencies, category labels as column names).
restrictions: Specifies which parameters should be (a) constant (e.g., list("g=.5")) or (b) constrained to equal other parameters.
alpha: First shape parameter(s) of the beta prior distribution for the MPT parameters θ_s (can be a named vector to use a different prior for each MPT parameter).
beta: Second shape parameter(s) of the beta prior distribution (see alpha).
dataset: Index of the data set for which the Bayes factors should be computed.
method: Either "importance" (the default) or "prior" (see Details).
posterior: Number of posterior samples used to approximate the importance-sampling densities (i.e., beta distributions).
mix: Mixture proportion of the uniform distribution in the importance-sampling density.
scale: Factor by which the posterior-beta approximations are downscaled to obtain a fatter-tailed importance-sampling density.
samples: Total number of samples drawn from the parameter space.
batches: Number of batches; used to compute a standard error of the estimate.
show: Whether to show progress.
cores: Number of CPUs used.
Currently, this is only implemented for a single data set!
If method = "prior", a brute-force Monte Carlo method is used: parameters are sampled directly from the prior, the likelihood is evaluated for these samples, and the results are averaged (fast, but inefficient).
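This brute-force estimator can be sketched in a few lines. The following Python sketch illustrates only the idea (it is not TreeBUGS/marginalMPT code) on a toy binomial model with a uniform Beta(1, 1) prior, for which the exact marginal likelihood is known to be 1/(n + 1):

```python
# Sketch of method = "prior": brute-force Monte Carlo where parameters are
# sampled from the prior and the likelihood is averaged over those samples.
# Toy model (an assumption for illustration, not the 2HTM): k successes in
# n trials with a uniform Beta(1, 1) prior on the success probability theta.
import math
import random

def prior_marginal_likelihood(k, n, samples=100_000, seed=1):
    rng = random.Random(seed)
    binom = math.comb(n, k)            # binomial coefficient of the likelihood
    total = 0.0
    for _ in range(samples):
        theta = rng.betavariate(1, 1)  # draw theta from the prior
        total += binom * theta**k * (1 - theta)**(n - k)  # likelihood at theta
    return total / samples             # Monte Carlo average

# Analytic check: with a uniform prior, the marginal likelihood is 1 / (n + 1).
est = prior_marginal_likelihood(k=7, n=10)
exact = 1 / 11
```

The estimator is unbiased but inefficient: most prior draws land where the likelihood is small, which is why the importance sampler below is the default.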
Alternatively, if method = "importance", an importance sampler is used: the posterior distributions of the MPT parameters are approximated by independent beta distributions, and each parameter θ_s is then sampled from the importance density
mix*U(0,1) + (1-mix)*Beta(scale*a_s, scale*b_s),
where a_s and b_s are the shape parameters of the beta approximation to the posterior of θ_s.
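The importance sampler can be sketched similarly. The Python sketch below (again a toy binomial illustration, not TreeBUGS code) draws from the defensive mixture above, reweights the likelihood by prior/proposal, and averages in batches so that the spread of the batch means yields a standard error, mirroring the samples and batches arguments:

```python
# Sketch of method = "importance" for a single parameter: theta is drawn from
#     mix * U(0,1) + (1 - mix) * Beta(scale * a, scale * b),
# and likelihood(theta) * prior(theta) / proposal(theta) is averaged per batch.
# Toy model (an assumption for illustration): k successes in n trials with a
# uniform prior, so the exact marginal likelihood is 1 / (n + 1).
import math
import random

def beta_pdf(x, a, b):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_norm)

def importance_marginal_likelihood(k, n, a, b, mix=0.05, scale=0.9,
                                   samples=10_000, batches=10, seed=1):
    rng = random.Random(seed)
    binom = math.comb(n, k)
    per_batch_n = samples // batches
    per_batch = []
    for _ in range(batches):
        total = 0.0
        for _ in range(per_batch_n):
            if rng.random() < mix:                 # uniform mixture component
                theta = rng.random()
            else:                                  # downscaled beta component
                theta = rng.betavariate(scale * a, scale * b)
            proposal = mix + (1 - mix) * beta_pdf(theta, scale * a, scale * b)
            likelihood = binom * theta**k * (1 - theta)**(n - k)
            total += likelihood / proposal         # prior density U(0,1) is 1
        per_batch.append(total / per_batch_n)
    est = sum(per_batch) / batches                 # overall estimate
    se = (sum((p - est) ** 2 for p in per_batch)
          / (batches - 1) / batches) ** 0.5        # SE from batch means
    return est, se

# Beta(8, 4) is the exact posterior for 7 successes in 10 trials under a
# uniform prior, standing in for the package's posterior-beta approximation.
est, se = importance_marginal_likelihood(k=7, n=10, a=8, b=4)
```

Because the proposal tracks the posterior, the weights are nearly constant and the estimate is far more precise than prior sampling at the same number of samples; the uniform component (mix) guards against underestimated tails.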
# 2-High-Threshold Model
eqn <- "## 2HTM ##
Target  Hit   d
Target  Hit   (1-d)*g
Target  Miss  (1-d)*(1-g)
Lure    FA    (1-d)*g
Lure    CR    (1-d)*(1-g)
Lure    CR    d"
data <- c(Hit = 46, Miss = 14, FA = 14, CR = 46)

# weakly informative prior for guessing
aa <- c(d = 1, g = 2)
bb <- c(d = 1, g = 2)
curve(dbeta(x, aa["g"], bb["g"]))

# compute marginal likelihood
htm <- marginalMPT(eqn, data,
                   alpha = aa, beta = bb,
                   posterior = 200, samples = 1000)

# second model: g=.50
htm.g50 <- marginalMPT(eqn, data, list("g=.5"),
                       alpha = aa, beta = bb,
                       posterior = 200, samples = 1000)

# Bayes factor
# (per batch to get estimation error)
bf <- htm.g50$p.per.batch / htm$p.per.batch
mean(bf)                  # BF
sd(bf) / sqrt(length(bf)) # standard error of BF estimate