Fast, cheap, but is it still good? An opinionated guide to crowdsourcing platforms in 2018 (17-Oct-2018)
- Snow, R., O'Connor, B., Jurafsky, D., & Ng, A. Y. (2008). Cheap and fast---but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (pp. 254-263). Association for Computational Linguistics.
- Mohammad, S. (2016). A practical guide to sentiment annotation: Challenges and solutions. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (pp. 174-179).
- Benoit, K., Conway, D., Lauderdale, B. E., Laver, M., & Mikhaylov, S. (2016). Crowd-sourced text analysis: Reproducible and agile production of political data. American Political Science Review, 110(2), 278-295.
- Haselmayer, M., & Jenny, M. (2017). Sentiment analysis of political communication: Combining a dictionary approach with crowdcoding. Quality & Quantity, 51(6), 2623-2646.
- http://vanatteveldt.com/p/atteveldt_cityu_seminar.pdf
- Andersen, D. J., & Lau, R. R. (2018). Pay Rates and Subject Performance in Social Science Experiments Using Crowdsourced Online Samples. Journal of Experimental Political Science, 1-13.
- Haug, M. C. (2017). Fast, Cheap, and Unethical? The Interplay of Morality and Methodology in Crowdsourced Survey Research. Review of Philosophy and Psychology, 1-17.
- Pavlick, E., Post, M., Irvine, A., Kachaev, D., & Callison-Burch, C. (2014). The language demographics of Amazon Mechanical Turk. Transactions of the Association for Computational Linguistics, 2, 79-92.
- http://vanatteveldt.com/p/atteveldt_cresta.pdf
- Lind, F., Gruber, M., & Boomgaarden, H. G. (2017). Content analysis by the crowd: Assessing the usability of crowdsourcing for coding latent constructs. Communication Methods and Measures, 11(3), 191-209.