Random effects structure for confirmatory hypothesis testing: Keep it maximal

DJ Barr, R Levy, C Scheepers, HJ Tily - Journal of Memory and Language, 2013 - Elsevier
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond.
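As a minimal sketch of the contrast the abstract draws (not the paper's own code or simulations), the Python example below fits a random-intercepts-only LMEM and a model that also includes by-subject random slopes, on simulated data in which subjects vary in their sensitivity to the manipulation. The variable names (rt, condition, subject) and effect sizes are hypothetical. Note that statsmodels' MixedLM takes a single grouping factor; the crossed by-items random effects that a fully maximal, design-justified model would also include are more naturally expressed in lme4-style syntax such as rt ~ condition + (1 + condition | subject) + (1 + condition | item).

```python
# Sketch only: contrasts a random-intercepts-only LMEM with a model that
# adds by-subject random slopes, on simulated data where subjects differ
# in their sensitivity to the experimental manipulation. All names and
# numbers are hypothetical, not taken from Barr et al. (2013).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 30, 40
subj = np.repeat(np.arange(n_subj), n_trials)
cond = np.tile(np.repeat([0.0, 1.0], n_trials // 2), n_subj)

# Subjects vary in both baseline speed and treatment effect (random slope).
intercepts = rng.normal(0, 50, n_subj)
slopes = rng.normal(20, 30, n_subj)
rt = 500 + intercepts[subj] + slopes[subj] * cond + rng.normal(0, 80, subj.size)
data = pd.DataFrame({"rt": rt, "condition": cond, "subject": subj})

# Random-intercepts-only model: treats the condition effect as identical
# across subjects, ignoring the slope variance built into the data.
m_intercepts = smf.mixedlm("rt ~ condition", data,
                           groups=data["subject"]).fit()

# Model with by-subject random intercepts AND slopes: the maximal random
# effects structure this one-factor, within-subjects design justifies
# (a fully maximal model would also cross in by-item effects).
m_maximal = smf.mixedlm("rt ~ condition", data,
                        groups=data["subject"],
                        re_formula="~condition").fit()

print(m_intercepts.summary())
print(m_maximal.summary())
```

On data like these, the intercepts-only fit omits exactly the between-subject variability in the treatment effect that the abstract identifies as the source of poor generalization, whereas the slopes model absorbs it into the random effects structure.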