Abstract

In this paper, we explore the effects of response model misspecification and uncertainty on the psychometric properties of estimates. Although model misspecification and uncertainty arise as important considerations in many settings (e.g., measurement invariance, impression management), most of the literature has focused on quantifying the magnitude of potential misspecification rather than its impact. Here, we review the broader statistical literature on misspecification and its effects on estimation, and examine how misspecification affects estimates in prototypical scenarios. Using analytic and simulation results, we show that although misspecification often decreases the accuracy of estimates, it may, counterintuitively, also increase accuracy under certain circumstances. Moreover, depending on the form of misspecification, reliability can increase under misspecification in a way that provides a misleading characterization of test precision. Finally, we explore methods for integrating model uncertainty into the trait estimation process, and conclude with recommendations for the applied use of tests when model uncertainty is a prominent concern.