Amazon Mechanical Turk (MTurk) is a powerful tool that is increasingly used to recruit behavioral research participants for accounting research. This manuscript provides practical and technical guidance, drawn from firsthand experience, to help researchers collect high-quality, defensible data. We highlight two issues of particular importance when using MTurk: (1) accessing qualified participants and (2) validating collected data. To address these issues, we discuss alternative methods of administering screens and several data validation techniques researchers may want to consider. We also demonstrate how some of these techniques were implemented in a recent data collection. Finally, to examine the effectiveness of screens, we contrast the use of unpaid screening questions with merely stating participation requirements in the MTurk instructions. We find that screening questions significantly reduce the number of manipulation check failures and significantly increase the number of usable responses per paid participant.