Two different variance estimators for the Horvitz-Thompson estimator
This function estimates the variance of the Horvitz-Thompson estimator. Two different variance estimators are computed: the original one, due to Horvitz and Thompson, and the one due to Sen (1953) and Yates and Grundy (1953). Both approaches yield unbiased variance estimators under fixed-size sampling designs.
VarSYGHT(y, N, n, p)
y: Vector containing the values of the characteristic of interest for every unit in the population.
N: Population size.
n: Sample size.
p: Vector containing the selection probabilities of a fixed-size, without-replacement sampling design. The values of this vector must sum to one (see the check below).
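For a fixed-size design without replacement there are choose(N, n) possible samples, so p must have one entry per sample and its entries must sum to one. The base-R check below is only an illustration (it reuses the design of Example 1) and is not part of the function's documented interface.

N <- 5
n <- 2
p <- c(0.13, 0.2, 0.15, 0.1, 0.15, 0.04, 0.02, 0.06, 0.07, 0.08)
length(p) == choose(N, n)   # TRUE: one selection probability per possible sample
all.equal(sum(p), 1)        # TRUE: the probabilities define a sampling design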
The function returns two variance estimators for every possible sample within a fixed-size sampling support. The first estimator is due to Horvitz and Thompson and is given by the following expression:
\widehat{Var}_1(\hat{t}_{y,\pi}) = \sum_{k \in S}\sum_{l \in S}\frac{\Delta_{kl}}{\pi_{kl}}\frac{y_k}{\pi_k}\frac{y_l}{\pi_l}
Here the sums run over the units of the selected sample S, \pi_k and \pi_{kl} denote the first- and second-order inclusion probabilities, and \Delta_{kl} = \pi_{kl} - \pi_k\pi_l.
The second estimator is due to Sen (1953) and Yates and Grundy (1953). It is given by the following expression:
\widehat{Var}_2(\hat{t}_{y,\pi}) = -\frac{1}{2}\sum_{k \in S}\sum_{l \in S}\frac{\Delta_{kl}}{\pi_{kl}}\left(\frac{y_k}{\pi_k} - \frac{y_l}{\pi_l}\right)^2
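As a concrete illustration, the following minimal R sketch evaluates both expressions from scratch for a single sample of size two. The y-values and the inclusion probabilities (pik, pikl) below are illustrative assumptions and are not produced by VarSYGHT.

# Illustrative (assumed) inputs for one observed sample of size two
ys   <- c(32, 46)                         # y-values of the sampled units
pik  <- c(0.45, 0.40)                     # first-order inclusion probabilities pi_k
pikl <- matrix(c(0.45, 0.15,
                 0.15, 0.40), nrow = 2)   # second-order inclusion probabilities pi_kl
Delta <- pikl - outer(pik, pik)           # Delta_kl = pi_kl - pi_k * pi_l
e <- ys / pik                             # expanded values y_k / pi_k
# Horvitz-Thompson variance estimator (first expression)
Var1 <- sum((Delta / pikl) * outer(e, e))
# Sen-Yates-Grundy variance estimator (second expression)
Var2 <- -0.5 * sum((Delta / pikl) * outer(e, e, "-")^2)
Var1; Var2

Note that the diagonal terms contribute nothing to the second expression, since (y_k/\pi_k - y_k/\pi_k)^2 = 0.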
This function returns a data frame with one row for every possible sample within the fixed-size sampling support, together with its corresponding variance estimates.
Hugo Andres Gutierrez Rojas <hagutierrezro at gmail.com>
Särndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling. Springer.
Gutiérrez, H. A. (2009), Estrategias de muestreo: Diseño de encuestas y estimación de parámetros. Editorial Universidad Santo Tomás.
# Example 1
# Without replacement sampling
# Vector U contains the labels of a population of size N=5
U <- c("Yves", "Ken", "Erik", "Sharon", "Leslie")
# Vectors y1 and y2 are the values of the variables of interest
y1 <- c(32, 34, 46, 89, 35)
y2 <- c(1, 1, 1, 0, 0)
# The population size is N=5
N <- length(U)
# The sample size is n=2
n <- 2
# p is the probability of selection of every possible sample
p <- c(0.13, 0.2, 0.15, 0.1, 0.15, 0.04, 0.02, 0.06, 0.07, 0.08)

# Calculates the estimated variances for the HT estimator
VarSYGHT(y1, N, n, p)
VarSYGHT(y2, N, n, p)

# Unbiasedness holds in the estimator of the total
sum(y1)
sum(VarSYGHT(y1, N, n, p)$p * VarSYGHT(y1, N, n, p)$Est.HT)
sum(y2)
sum(VarSYGHT(y2, N, n, p)$p * VarSYGHT(y2, N, n, p)$Est.HT)

# Unbiasedness also holds in the two variances
VarHT(y1, N, n, p)
sum(VarSYGHT(y1, N, n, p)$p * VarSYGHT(y1, N, n, p)$Est.Var1)
sum(VarSYGHT(y1, N, n, p)$p * VarSYGHT(y1, N, n, p)$Est.Var2)
VarHT(y2, N, n, p)
sum(VarSYGHT(y2, N, n, p)$p * VarSYGHT(y2, N, n, p)$Est.Var1)
sum(VarSYGHT(y2, N, n, p)$p * VarSYGHT(y2, N, n, p)$Est.Var2)

# Example 2: negative variance estimates
x <- c(2.5, 2.0, 1.1, 0.5)
N <- 4
n <- 2
p <- c(0.31, 0.20, 0.14, 0.03, 0.01, 0.31)

VarSYGHT(x, N, n, p)

# Unbiasedness holds in the estimator of the total
sum(x)
sum(VarSYGHT(x, N, n, p)$p * VarSYGHT(x, N, n, p)$Est.HT)

# Unbiasedness also holds in the two variances
VarHT(x, N, n, p)
sum(VarSYGHT(x, N, n, p)$p * VarSYGHT(x, N, n, p)$Est.Var1)
sum(VarSYGHT(x, N, n, p)$p * VarSYGHT(x, N, n, p)$Est.Var2)