Measurement Error and Treatment-Effect Estimate Bias for Non-Experimental Data

Location: 1024 Flanner Hall

Roberto Penaloza, Ph.D.
Research Scientist
University of Notre Dame
CREO

"Measurement Error and Treatment-Effect Estimate Bias for Non-Experimental Data".

 

Abstract: Social sciences can rarely set up experiments to measure the effect of “treatments”. They generally rely on observational data in which individuals have already been “assigned” to groups in a non-random manner. In education, the treated individuals are the students, and the treatments are the different school settings and teachers. Since the implementation of accountability systems, some school and teacher effects have been measured through complex “Value-Added” (VA) models, which seem to have had their own “morale-sinking” effect among teachers. These models are not clearly understood, and despite years of use and research there is no consensus about the “right” model; most importantly, the possibility remains that they produce biased estimates. Most research has studied such bias in the typical “omitted-variable” setting, and the VA models used have been justified by making untenable assumptions. In this study, we propose a different, impartial, and simple but powerful framework that does not need such assumptions to justify the estimation approach. We assume away omitted-variable bias in order to focus on the biasing effect of the measurement error contained in all test scores, which is not ignorable but has been neglected in the literature. With our framework, we clarify several aspects of treatment-effect estimation that are still not clearly understood and that will help in choosing the “right” model, such as whether a levels or a gains model is better, and whether lagged scores or other covariates must be used. Importantly, we clearly explain the mechanics of the bias in the estimation models, and, through simulation, we show the actual size of the biases. This differs from the current literature, which shows bias through correlations and variances of estimates. We also show that models that try to handle the measurement error explicitly, through a latent-variable or errors-in-variables approach, perform poorly. Although we apply the proposed framework to measurement error, it can also be used to better understand the mechanics of the usual omitted-variable bias and to confirm prior findings obtained with other frameworks.
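
To make the bias mechanics concrete, the following Python simulation is a minimal sketch in the spirit of the abstract; the model specifications, parameter values, and error variances are illustrative assumptions, not the study's actual setup. Students are non-randomly assigned based on true prior achievement, observed test scores carry measurement error, and a gains model is compared with a levels model that conditions on the noisy lagged score.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Latent prior achievement; assignment is non-random:
# higher-achieving students are more likely to be treated.
theta0 = rng.normal(0.0, 1.0, n)
treat = (theta0 + rng.normal(0.0, 1.0, n) > 0).astype(float)

# Latent current achievement with a known treatment effect.
tau = 0.30
theta1 = 0.8 * theta0 + tau * treat + rng.normal(0.0, 0.5, n)

# Observed test scores = latent achievement + measurement error.
y0 = theta0 + rng.normal(0.0, 0.7, n)  # lagged score
y1 = theta1 + rng.normal(0.0, 0.7, n)  # current score

def ols(X, y):
    # Ordinary least squares via numpy's least-squares solver.
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# (a) Gains model: regress score gains on treatment alone.
b_gains = ols(np.column_stack([ones, treat]), y1 - y0)

# (b) Levels model conditioning on the noisy observed lagged score.
b_noisy = ols(np.column_stack([ones, treat, y0]), y1)

# (c) Infeasible benchmark: condition on error-free prior achievement.
b_clean = ols(np.column_stack([ones, treat, theta0]), y1)

print(f"true treatment effect:                  {tau:.3f}")
print(f"gains model:                            {b_gains[1]:.3f}")
print(f"levels model, noisy lagged score:       {b_noisy[1]:.3f}")
print(f"levels model, true prior (infeasible):  {b_clean[1]:.3f}")

Under this illustrative data-generating process, the levels model with the noisy lagged score overstates the treatment effect (the attenuated lagged-score coefficient leaves part of prior achievement confounded with treatment), the gains model understates it, and only conditioning on the error-free prior achievement, which is never observed in practice, recovers the true effect.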