Careless Responding in Crowdsourced Alcohol Research: A Systematic Review and Meta-Analysis of Practices and Prevalence
Crowdsourcing, the process of using the internet to outsource research participation to “workers,” has considerable benefits: it enables research to be conducted quickly, efficiently, and responsively; diversifies participant recruitment; and provides access to hard-to-reach samples. One of the biggest threats to this method of online data collection, however, is the prevalence of careless responders, who can substantially reduce data quality. The aims of this preregistered systematic review and meta-analysis were (a) to examine the prevalence of screening for careless responding in crowdsourced alcohol-related studies, (b) to estimate the pooled prevalence of careless responding, and (c) to identify potential moderators of careless responding across studies. Our review identified 96 eligible studies (~126,130 participants), of which 51 used at least one measure of careless responding (53.2%, 95% CI [42.7%, 63.3%]; ~75,334 participants). Of these, 48 reported the number of participants identified by their careless responding method(s), and the pooled prevalence rate was ~11.7%, 95% CI [7.6%, 16.5%]. Studies using the Amazon Mechanical Turk (MTurk) platform identified more careless responders than studies using other platforms, and the number of careless responding items was positively associated with prevalence rates. The most common measure of careless responding was an attention-check question, followed by implausible response times. We recommend that researchers plan for such attrition when crowdsourcing participants, and we provide practical recommendations for handling and reporting careless responding in alcohol research.
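For readers unfamiliar with how a pooled prevalence and its confidence interval are typically obtained, the sketch below illustrates one common approach: a DerSimonian-Laird random-effects model applied to logit-transformed study proportions. This is an illustration only, not the analysis code used in this review; the pooling method is an assumption rather than a confirmed detail of the authors' procedure, and the per-study counts, the pool_prevalence function, and its parameters are hypothetical.

```python
# Minimal sketch (not the authors' code): pooling careless-responding prevalence
# across studies with a DerSimonian-Laird random-effects model on logit-transformed
# proportions. All study counts below are hypothetical.
import numpy as np
from scipy import stats
from scipy.special import expit  # inverse logit, for back-transforming estimates

def pool_prevalence(events, totals, ci=0.95):
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)

    # Logit-transform each study's prevalence; approximate variance via the delta method.
    p = events / totals
    yi = np.log(p / (1 - p))
    vi = 1 / events + 1 / (totals - events)
    wi = 1 / vi

    # DerSimonian-Laird estimate of between-study variance (tau^2).
    fixed = np.sum(wi * yi) / np.sum(wi)
    q = np.sum(wi * (yi - fixed) ** 2)
    c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)

    # Random-effects pooled estimate and confidence interval on the logit scale.
    wre = 1 / (vi + tau2)
    mu = np.sum(wre * yi) / np.sum(wre)
    se = np.sqrt(1 / np.sum(wre))
    z = stats.norm.ppf(0.5 + ci / 2)
    lower, upper = mu - z * se, mu + z * se

    # Back-transform to the proportion (prevalence) scale.
    return expit(mu), (expit(lower), expit(upper))

# Hypothetical per-study counts of flagged careless responders and sample sizes.
flagged = [52, 130, 18, 240, 75]
samples = [400, 1200, 300, 2100, 650]
est, (lo, hi) = pool_prevalence(flagged, samples)
print(f"Pooled prevalence: {est:.1%} (95% CI {lo:.1%}, {hi:.1%})")
```

Back-transforming the pooled logit and its interval bounds yields an estimate on the proportion scale, analogous in form to the ~11.7% pooled prevalence reported above.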