11 Pages Posted: 5 Dec 2013 Last revised: 15 May 2014

Date Written: December 5, 2013

Abstract

Data quality is a major concern when using crowdsourcing web sites such as Amazon Mechanical Turk (MTurk) to recruit participants for online behavioral studies. We compared two methods for ensuring data quality on MTurk: attention check questions (ACQs) and restricting participation to MTurk workers with high reputation (above 95% approval ratings). In Experiment 1, we found that high reputation workers rarely failed ACQs and provided higher quality data than low reputation workers; ACQs improved data quality only for low reputation workers, and only in some cases. Experiment 2 corroborated these findings and further suggested that the more productive high reputation workers produce the highest quality data. We conclude that sampling high reputation workers can ensure high quality data without resorting to ACQs, which may introduce selection bias if participants who fail them are excluded post hoc.