Over informative processes, the naive estimator of learning, the difference between post- and pre-process scores, underestimates actual learning. A heuristic account of why the naive estimator is negatively biased goes as follows: people know as much or more after being exposed to an informative process than before it. And the less people know, the larger the number of items they don't know, and hence the greater the opportunity to guess.
Guessing, even when random, can only increase the proportion correct. Thus, the bias due to guessing in naive measures of knowledge is always positive. And because people know less before the process than after, there is, on average, more positive bias in pre-process scores than in post-process scores. It follows that subtracting pre-process scores from post-process scores yields an attenuated estimate of actual learning. For a more complete treatment of the issue, read this paper by Ken Cor and Gaurav Sood.
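To see the attenuation concretely, here is a minimal simulation sketch (illustrative only, not part of the package): respondents who don't know an item guess randomly among 4 options, and the naive pre/post difference falls short of true learning.

# Illustrative simulation: naive estimator is attenuated by guessing
set.seed(31)
n <- 1e5
know_pre <- rbinom(n, 1, .2)                   # 20% know the item pre-process
learns   <- (1 - know_pre) * rbinom(n, 1, .375) # 30% of all respondents learn (.8 * .375)
know_pst <- know_pre + learns
obs_pre  <- ifelse(know_pre == 1, 1, rbinom(n, 1, .25)) # non-knowers guess; .25 lucky
obs_pst  <- ifelse(know_pst == 1, 1, rbinom(n, 1, .25))
mean(know_pst - know_pre)   # true learning: ~.30
mean(obs_pst - obs_pre)     # naive estimate: ~.225 = .30 * (1 - .25)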
We provide a few different ways to adjust estimates of learning for guessing. For now, we limit our attention to cases where the same battery of knowledge questions has been asked in both the pre- and the post-process waves, and to cases where closed-ended questions have been asked. (Guessing is not a serious issue on open-ended items. For evidence, see DK Means DK by Robert Luskin and John Bullock.) More generally, the package implements the methods for adjusting learning for guessing discussed in this paper.
The estimand is the proportion of people who learned a particular piece of information over the course of an informative process.
Measurement of knowledge is fundamentally reactive: we must probe to learn. But probing is not without its problems. For one, people who don't know the answer try to triangulate based on cues in the question itself. For another, people are remarkably averse to confessing their ignorance. So on a closed-ended question, lots of people who don't know the right answer guess. Here are some pertinent issues that relate to how we analyze the data:
Dealing with Missing Data
If you assume that the data are missing completely at random, you can simply ignore them. Generally, however, respondents tend to skip items they don't know, so missing responses on knowledge questions typically indicate ignorance. (Of course, it is important to investigate other potential reasons behind missing data, and we encourage researchers to take all precautions.) In our treatment, however, for simplicity's sake, we treat missing responses as indicators of ignorance.
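Concretely, treating missing responses as ignorance amounts to recoding NA as 0 before computing scores. A minimal base-R sketch (the package function nona(), used in the examples below, appears to implement this recoding):

# Recode missing responses as incorrect (illustrative)
resp <- c(1, NA, 0, 1, NA)
resp[is.na(resp)] <- 0
resp  # 1 0 0 1 0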
Dealing with Don’t Knows
We now know a little bit about Don't Know responses. One general strategy is to treat Don't Know responses as ignorance. But research suggests that, on average, there is approximately 3% hidden knowledge behind Don't Know responses. See DK Means DK by Robert Luskin and John Bullock. Thus one can also choose to replace Don't Know responses with .03.
Related Knowledge
People either know a particular piece of information or they don't. On an open-ended question, they may struggle to recall it, but those kinds of concerns don't apply to closed-ended questions, where the task is simply to identify the correct answer. What does on occasion happen on closed-ended questions is that people have related cognitions. For instance, asked to identify the prime minister of France, respondents sometimes know that one of the options is the king of Sudan and may guess randomly among the remaining options. But that isn't the same as knowing the prime minister of France.
The standard correction for guessing assumes that people guess randomly and that people either know or don't know. (We also assume that people aren't misinformed.) Under these assumptions, the total number of incorrect answers can be used to estimate the total number of items on which people guessed. For instance, assume a multiple-choice question with 4 options and data from 100 respondents, with 70 incorrect answers and 30 correct. Incorrect answers can come only from guessing, and a random guess is incorrect 3 out of 4 times, so we can triangulate the total number of guesses: 70*(4/3), or about 93.3. Of those guesses, about a quarter, roughly 23.3, were lucky and landed in the correct pile. That leaves 30 - 23.3, or about 6.7, respondents who actually knew the answer, so the proportion of people who know the piece of information is roughly .067. Do this for the pre and the post wave and you have an estimate of learning adjusted for guessing using the standard correction.
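The same arithmetic in R (a standalone sketch of the standard correction, not a call to the package):

# Standard correction arithmetic (illustrative)
n_options <- 4
n_right   <- 30
n_wrong   <- 70
n_resp    <- 100
n_guesses <- n_wrong * n_options / (n_options - 1)  # ~93.3 total guesses
n_lucky   <- n_guesses / n_options                  # ~23.3 lucky guesses
(n_right - n_lucky) / n_resp                        # ~.067 actually know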
To get the current CRAN version:
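install.packages("guess")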
To get the current development version from GitHub:
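Assuming the development repository lives at soodoku/guess:

# install.packages("devtools")
devtools::install_github("soodoku/guess")  # assumed repo path; adjust if it differs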
To adjust estimates of learning using the standard correction for guessing, use stnd_cor. The function takes pre-test and post-test data frames containing responses to the items on the pre- and the post-test, and a lucky vector that contains, for each item, the probability of getting it correct when guessing randomly. Under the standard guessing correction, that probability is taken to be the inverse of the total number of options.
Structure of the Input Data: code all Don't Know responses as 'd'.

# Load library
library(guess)
# Generate some data without DK
pre_test <- data.frame(item1 = c(1, 0, 0, 1, 0), item2 = c(1, NA, 0, 1, 0))
pst_test <- pre_test + cbind(c(0, 1, 1, 0, 0), c(0, 1, 0, 0, 1))
lucky <- rep(.25, 2)
# Unadjusted Effect
# Treating missing responses as ignorance
colMeans(nona(pst_test) - nona(pre_test))

# Assuming responses are missing completely at random (MCAR)
colMeans(pst_test - pre_test, na.rm = TRUE)
# Adjusted Effect
stnd_cor(pre_test, pst_test, lucky)
# transmat cross-tabulates pre-test and post-test responses into transition patterns
# Without Don't Know
pre_test_var <- c(1, 0, 0, 1, 0, 1, 0)
pst_test_var <- c(1, 0, 1, 1, 0, 1, 1)
print(transmat(pre_test_var, pst_test_var))
# With Don't Know
pre_test_var <- c(1, 0, NA, 1, "d", "d", 0, 1, 0)
pst_test_var <- c(1, 0, 1, "d", 1, 0, 1, 1, "d")
print(transmat(pre_test_var, pst_test_var))
# Load simulated data (if the package is installed, use the system.file() call instead)
# load(system.file("data/alldat.rda", package = "guess"))
load("../data/alldat.rda")
# nitems
nitems <- length(alldat)/400
# Vectors of Names
t1 <- paste0("guess.t1", 1:nitems)
t2 <- paste0("guess.t2", 1:nitems)
transmatrix <- multi_transmat(alldat[, t1], alldat[, t2])
# Latent class analysis (LCA) based correction
res <- lca_cor(transmatrix)
round(res$param.lca[,1:4], 3)
round(res$est.learning[1:4], 3)
# LCA Correction with DK
# load(system.file("data/alldat_dk.rda", package = "guess"))
load("../data/alldat_dk.rda")
transmatrix <- multi_transmat(alldat_dk[, t1], alldat_dk[, t2], force9 = TRUE)
res_dk <- lca_cor(transmatrix)
round(res_dk$param.lca[,1:4], 3)
round(res_dk$est.learning[1:4], 3)
Account for the propensity to guess and for the item-level probability that a guess would be lucky.
pre_test_var <- data.frame(pre=c(1, 0, 1, 1, 0, "d", "d", 0, 1, NA, 0, 1, 1, 1, 1, 0, 0, 'd', 0, 0))
pst_test_var <- data.frame(pst=c(1, 0, NA, 1, "d", 1, 0, 1, 1, "d", 1, 1, 1, 0, 1, 1, 0, 1, 1, 0))
# Group indicator
grp <- c(rep(1, 10), rep(0, 10))
# Unadjusted (gamma = 0)
group_adj(pre_test_var, pst_test_var, gamma = 0, dk = 0)$learn

# Adjusted for guessing (gamma = .25)
group_adj(pre_test_var, pst_test_var, gamma = .25, dk = 0)$learn

# Compare with the standard correction
stnd_cor(pre_test_var, pst_test_var, lucky = .25)$learn
# Group-wise learning, raw and adjusted for guessing
grp0_raw <- group_adj(subset(pre_test_var, grp == 0), subset(pst_test_var, grp == 0), gamma = 0, dk = 0)$learn
grp1_raw <- group_adj(subset(pre_test_var, grp == 1), subset(pst_test_var, grp == 1), gamma = 0, dk = 0)$learn
grp0_adj <- group_adj(subset(pre_test_var, grp == 0), subset(pst_test_var, grp == 0), gamma = .25, dk = 0)$learn
grp1_adj <- group_adj(subset(pre_test_var, grp == 1), subset(pst_test_var, grp == 1), gamma = .25, dk = 0)$learn
# Difference in learning between the two groups, raw vs. adjusted
grp0_raw - grp1_raw
grp0_adj - grp1_adj
# Analytic s.e. for the raw (unadjusted) estimator
# Generate some data without DK
pre_test <- data.frame(item1 = c(1, 0, 0, 1, 0), item2 = c(1, NA, 0, 1, 0))
pst_test <- pre_test + cbind(c(0, 1, 1, 0, 0), c(0, 1, 0, 0, 1))
diff <- pst_test - pre_test
# s.e. of the mean difference; the denominator counts only non-missing responses
stnd_err <- sapply(diff, function(x) sqrt(var(x, na.rm = TRUE) / sum(!is.na(x))))
# Bootstrapped s.e.
# LCA model
lca_stnd_err <- lca_se(alldat[, t1], alldat[, t2], 10)
sapply(lca_stnd_err, function(x) round(head(x, 1), 3))
lca_dk_stnd_err <- lca_se(alldat_dk[, t1], alldat_dk[, t2], 10)
sapply(lca_dk_stnd_err, function(x) round(head(x, 1), 3))