Statistically Controlling for Confounding Constructs Is Harder than You Think

PLoS One. 2016 Mar 31;11(3):e0152719. doi: 10.1371/journal.pone.0152719. eCollection 2016.

Abstract

Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest (in some cases approaching 100%) when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity.
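
To make the mechanism concrete, the sketch below is a minimal Monte Carlo illustration of the phenomenon the abstract describes; it is not the authors' code, and the function name (simulate_type1_rate) and all parameter values (n, reliability, n_sims) are illustrative assumptions. An outcome y is driven only by a latent construct T; both the covariate x1 and the focal predictor x2 are noisy measures of that same construct, so x2 has zero incremental validity at the construct level. Because x1 controls for T only imperfectly, x2 picks up residual construct variance and its regression coefficient is spuriously significant far more often than the nominal alpha:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_type1_rate(n=500, reliability=0.7, n_sims=2000, alpha=0.05):
    """Fraction of simulations in which the focal predictor x2 is
    'significant', even though only the latent construct T drives y."""
    rejections = 0
    lam = np.sqrt(reliability)  # standardized loading of each measure on T
    for _ in range(n_sims):
        T = rng.standard_normal(n)            # latent confounding construct
        y = T + rng.standard_normal(n)        # outcome caused by T only
        # two unit-variance measures of T, each with the given reliability
        x1 = lam * T + np.sqrt(1 - reliability) * rng.standard_normal(n)
        x2 = lam * T + np.sqrt(1 - reliability) * rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x1, x2])
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        # OLS t-test for the coefficient on x2
        resid = y - X @ beta
        df = n - X.shape[1]
        sigma2 = resid @ resid / df
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
        p = 2 * stats.t.sf(abs(beta[2] / se), df)
        rejections += p < alpha
    return rejections / n_sims

print(simulate_type1_rate())  # far above the nominal 0.05
```

With these illustrative settings the rejection rate approaches 1.0, and setting reliability=1.0 restores it to roughly 0.05, consistent with the abstract's point that the inflation stems from measurement unreliability rather than from the regression machinery itself; the SEM-based remedies the paper reviews address this by modeling the latent construct explicitly instead of substituting its fallible measure.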

MeSH terms

  • Humans
  • Models, Statistical
  • Monte Carlo Method
  • Probability
  • Psychometrics / methods*
  • Reproducibility of Results
  • Social Sciences / methods*