Abstract
This article examines whether a crowdsourced research participant who quits a study before completion should receive any monetary compensation. The study focuses on participants recruited from Amazon Mechanical Turk, the most widely used crowdsourcing platform, and analyzes the tensions between participants’ rights and research objectives when online labor markets are used to recruit research subjects. The discussion is informed by the recent literature on online research with crowdsourced samples, evidence of human-subjects practices at top US universities, and an ethical analysis based on distributive justice and consequentialism. The results indicate that compensating crowdsourced research subjects who fail to complete their participation jeopardizes the benefits of the many who dutifully participate and undermines the overall value of the research enterprise for advancing knowledge. By examining the issue of payment for crowdsourced participants who drop out, this article sheds light on contemporary ethical issues in online research and offers practical recommendations for researchers and ethics scholars.
Notes
While these benchmarks vary by jurisdiction, as of Spring 2023, the minimum hourly wages in the states of New Jersey, New York, and Pennsylvania are $14.13, $15.00, and $7.25, respectively. For comparison, the United States federal minimum wage is $7.25 per hour.
Currently, the service fee charged by Amazon to requesters is 20% of the reward, with a minimum fee of $0.01 per assignment or unit of work. The fee increases if requesters need workers with premium qualifications. Details about pricing are available at: https://www.mturk.com/pricing.
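As a minimal illustration of how these fees compound, a requester’s total cost can be sketched as follows, assuming the flat 20% rate and $0.01 minimum described above and no premium qualifications; requester_cost is a hypothetical helper for illustration, not part of any Amazon API:

```python
def requester_cost(reward_per_assignment: float, n_assignments: int,
                   fee_rate: float = 0.20, min_fee: float = 0.01) -> float:
    """Estimate a requester's total cost: worker rewards plus Amazon's
    service fee (20% of the reward, with a $0.01 minimum per assignment).
    Premium worker qualifications would raise the fee beyond this."""
    fee = max(reward_per_assignment * fee_rate, min_fee)
    return round(n_assignments * (reward_per_assignment + fee), 2)
```

For example, 100 assignments paying $1.00 each would cost the requester $120.00 under these assumptions, since the 20% fee adds $0.20 per assignment.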
According to Prolific, at the time of this writing, the minimum payment allowed is £6.00/$8.00 per hour, and the recommended payment is at least £8.00/$10.50 per hour. In addition, the platform charges a service fee of 25% for academic researchers and 30% for companies, excluding taxes. See current rates at: https://www.prolific.co/pricing.
This sample includes research tasks (e.g., survey completion) and non-research tasks such as audio transcription, image description, rating objects, and categorizing images.
Amazon’s mTurk qualifications document contains the following excerpt: “Note that a Worker’s approval rate is statistically meaningless for small numbers of assignments, since a single rejection can reduce the approval rate by many percentage points. So to ensure that a new Worker’s approval rate is unaffected by these statistically meaningless changes, if a Worker has submitted less than 100 assignments, the Worker’s approval rate in the system is 100%.”
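The quoted rule can be sketched as follows; this is an illustrative reading of the excerpt, not Amazon’s actual implementation:

```python
def approval_rate(approved: int, submitted: int) -> float:
    """Approval rate as displayed by MTurk, per the excerpt above:
    a worker with fewer than 100 submitted assignments shows 100%;
    otherwise the rate is the share of submitted work that was approved."""
    if submitted < 100:
        return 100.0
    return 100.0 * approved / submitted
```

This also shows why a single rejection matters for low-volume workers: at exactly 100 submissions, one rejection drops the displayed rate from 100% to 99%.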
R1 designates doctoral-granting institutions with very high levels of research activity.
Amazon’s guidance to pay for incomplete tasks is found at: https://blog.mturk.com/paying-for-non-submitted-hits-245c6c3323bb.
Recall that, given the assumptions stated in the introduction, researchers in this study are adopting ethical best practices in their treatment of participants. The assumptions of fair payment (appropriate compensation) and transparency (full information) rule out the possibility of exploiting participants.
Note that withdrawing from a task does not adversely affect a worker’s reputation score because the approval rate is computed based on submitted work that is approved by the requester.
Existing regulations for human-subjects research in the US do not include explicit provisions regarding payment amounts. However, the guidelines suggest minimizing the risks of coercion and undue influence in the consent process.
Cite this article

Benbunan-Fich, R. To pay or not to pay? Handling crowdsourced participants who drop out from a research study. Ethics Inf Technol 25, 34 (2023). https://doi.org/10.1007/s10676-023-09708-8