Glossary
- AC: Automatic computers
- AI: Artificial intelligence
- AMT: Amazon Mechanical Turk
- GWAP: Games with a purpose
- HIT: Human intelligence task
- IR: Information retrieval
- MT: Machine translation
- NLP: Natural language processing
Introduction
The first computers were actually people (Grier 2005). Machines, known at the time as automatic computers (ACs), were later built to perform many routine computations. While such machines have continued to advance and now perform many of the routine processing tasks once delegated to people, human capabilities continue to exceed state-of-the-art artificial intelligence (AI) on a variety of important data analysis tasks, such as those involving image (Sorokin and Forsyth 2008) and language understanding (Snow et al. 2008). Consequently, today’s Internet-based access to 24/7 online human crowds has sparked the advent of crowdsourcing (Howe 2006) and a renaissance of human computation (Quinn and Bederson 2011).
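To ground these terms, the sketch below illustrates how a requester might post a human intelligence task (HIT) to Amazon Mechanical Turk (AMT) programmatically. This is a minimal sketch added for illustration, not part of the original entry: it assumes the boto3 Python SDK, configured AWS credentials, and the MTurk requester sandbox endpoint, and every task parameter (title, reward, question text) is a hypothetical placeholder.

```python
# Minimal sketch (assumptions: boto3 installed, AWS credentials configured,
# access to the MTurk requester sandbox). All task parameters below are
# illustrative placeholders, not values from the original entry.
import boto3

# The sandbox endpoint lets requesters test HITs without paying real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A simple free-text question in AMT's QuestionForm XML schema.
question_xml = """
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>relevance</QuestionIdentifier>
    <IsRequired>true</IsRequired>
    <QuestionContent>
      <Text>Is the shown document relevant to the query? Answer yes or no.</Text>
    </QuestionContent>
    <AnswerSpecification>
      <FreeTextAnswer/>
    </AnswerSpecification>
  </Question>
</QuestionForm>
""".strip()

# Post one HIT; each assignment is a unit of work completed by one worker.
response = mturk.create_hit(
    Title="Judge document relevance (example)",
    Description="Read a short document and judge its relevance to a query.",
    Keywords="relevance, judgment, information retrieval",
    Reward="0.05",                    # payment per assignment, in USD
    MaxAssignments=5,                 # collect 5 redundant judgments
    LifetimeInSeconds=86400,          # HIT visible to workers for one day
    AssignmentDurationInSeconds=600,  # each worker gets 10 minutes
    Question=question_xml,
)
print("Created HIT:", response["HIT"]["HITId"])
```

Requesting several assignments per HIT enables the redundancy-based quality control explored in the references, where noisy worker labels are aggregated by majority vote or by statistical models of annotator reliability (Dawid and Skene 1979; Sheng et al. 2008).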
References
Alonso O (2012) Implementing crowdsourcing-based relevance experimentation: an industrial perspective. Info Retr J Spec Issue Crowdsourc
Alonso O, Rose DE, Stewart B (2008) Crowdsourcing for relevance evaluation. ACM SIGIR Forum 42(2):9–15
Artstein R, Poesio M (2008) Inter-coder agreement for computational linguistics. Comput Linguist 34(4):555–596
Bederson BB, Quinn AJ (2011a) Web workers unite! Addressing challenges of online laborers. In: CHI workshop on crowdsourcing and human computation. ACM
Callison-Burch C (2009) Fast, cheap, and creative: evaluating translation quality using Amazon’s Mechanical Turk. In: Proceedings of the 2009 conference on empirical methods in natural language processing: volume 1-volume 1. Association for Computational Linguistics, pp 286–295
Davis J, Arderiu J, Lin H, Nevins Z, Schuon S, Gallo O, Yang M (2010) The HPU. In: Computer vision and pattern recognition workshops (CVPRW), pp 9–16
Dawid AP, Skene AM (1979) Maximum likelihood estimation of observer error-rates using the EM algorithm. Appl Stat 28(1):20–28
Felstiner A (2010) Sweatshop or paper route? Child labor laws and in-game work. In: Proceedings of the 1st annual conference on the future of distributed work (CrowdConf), San Francisco
Fort K, Adda G, Cohen KB (2011) Amazon Mechanical Turk: gold mine or coal mine? Comput Linguist 37(2):413–420
Grier DA (2005) When computers were human, vol 316. Princeton University Press, Princeton
Horowitz D, Kamvar SD (2010) The anatomy of a large-scale social search engine. In: Proceedings of the 19th international conference on world wide web. ACM, pp 431–440
Howe J (2006) The rise of crowdsourcing. Wired Mag 14(6):1–4
Ipeirotis P (2010) Demographics of Mechanical Turk (Tech. Rep. CeDER-10-01). New York University
Irani L, Silberman M (2013) Turkopticon: interrupting worker invisibility in Amazon Mechanical Turk. In: Proceeding of the ACM SIGCHI conference on human factors in computing systems
Kazai G, Kamps J, Milic-Frayling N (2012) An analysis of human factors and label accuracy in crowdsourcing relevance judgments. Info Retr J Spec Issue Crowdsourc
Kittur A, Nickerson JV, Bernstein M, Gerber E, Shaw A, Zimmerman J, Lease M, Horton J (2013) The future of crowd work. In: Proceedings of the ACM conference on computer supported cooperative work (CSCW), pp 1301–1318
Klinger J, Lease M (2011) Enabling trust in crowd labor relations through identity sharing. In: Proceedings of the 74th annual meeting of the American Society for Information Science and Technology (ASIS&T), pp 1–4
Kochhar S, Mazzocchi S, Paritosh P (2010) The anatomy of a large-scale human computation engine. In: Proceedings of the ACM SIGKDD workshop on human computation. ACM, pp 10–17
Kulkarni A, Gutheim P, Narula P, Rolnitzky D, Parikh T, Hartmann B (2012) Mobileworks: designing for quality in a managed crowdsourcing architecture. IEEE Internet Comput 16(5):28
Law E, von Ahn L (2011) Human computation. Synth Lect Artif Intell Mach Learn 5(3):1–121
Le J, Edmonds A, Hester V, Biewald L (2010) Ensuring quality in crowdsourced search relevance evaluation: the effects of training question distribution. In: SIGIR 2010 workshop on crowdsourcing for search evaluation, pp 21–26
Lease M, Hullman J, Bigham JP, Bernstein MS, Kim J, Lasecki WS, Bakhshi S, Mitra T, Miller RC (2013) Mechanical Turk is not anonymous. In: Social science research network (SSRN). Online: http://SSRN.Com/abstract=2228728. SSRN ID: 2228728
Liu D, Bias R, Lease M, Kuipers R (2012) Crowdsourcing for usability testing. In: Proceedings of the 75th annual meeting of the American Society for Information Science and Technology (ASIS&T)
Mason W, Watts DJ (2009) Financial incentives and the performance of crowds. In: Proceedings of the SIGKDD, Paris
Munro R (2012) Crowdsourcing and the crisis-affected community lessons learned and looking forward from mission 4636. Info Retr J Spec Issue Crowdsourc
Paritosh P, Ipeirotis P, Cooper M, Suri S (2011) The computer is the new sewing machine: benefits and perils of crowdsourcing. In: Proceedings of the 20th international conference companion on world wide web. ACM, pp 325–326
Pickard G, Pan W, Rahwan I, Cebrian M, Crane R, Madan A, Pentland A (2011) Time-critical social mobilization. Science 334(6055):509–512
Quinn AJ, Bederson BB (2011) Human computation: a survey and taxonomy of a growing field. In: 2011 annual ACM SIGCHI conference on human factors in computing systems, pp 1403–1412
Ross J, Irani L, Silberman M, Zaldivar A, Tomlinson B (2010) Who are the crowdworkers? Shifting demographics in Mechanical Turk. In: Proceedings of the 28th international conference on human factors in computing systems, extended abstracts. ACM, pp 2863–2872
Sheng V, Provost F, Ipeirotis P (2008) Get another label? Improving data quality and data mining using multiple, noisy labelers. In: Proceeding of the 14th ACM SIGKDD international conference on knowledge discovery and data mining, pp 614–622
Silberman M, Irani L, Ross J (2010) Ethics and tactics of professional crowdwork. XRDS: Crossroads ACM Mag Stud 17(2):39–43
Snow R, O’Connor B, Jurafsky D, Ng AY (2008) Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. In: Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 254–263
Sorokin A, Forsyth D (2008) Utility data annotation with Amazon Mechanical Turk. In: IEEE computer society conference on computer vision and pattern recognition workshops, 2008 (CVPRW’08). IEEE, pp 1–8
Surowiecki J (2005) The wisdom of crowds. Anchor, New York
Tang W, Lease M (2011) Semi-supervised consensus labeling for crowdsourcing. In: Proceedings of the ACM SIGIR workshop on crowdsourcing for information retrieval. ACM, New York
Viégas F, Wattenberg M, Mckeon M (2007) The hidden order of Wikipedia. In: Online communities and social computing. Springer, Berlin/New York, pp 445–454
Wang J, Ipeirotis P, Provost F (2011) Managing crowdsourcing workers. In: The 2011 winter conference on business intelligence, Salt Lake City
Wolfson S, Lease M (2011) Look before you leap: legal pitfalls of crowdsourcing. In: Proceedings of the 74th annual meeting of the American Society for Information Science and Technology (ASIS&T)
Yan T, Kumar V, Ganesan D (2010) CrowdSearch: exploiting crowds for accurate real-time image search on mobile phones. In: Proceedings of the 8th international conference on mobile systems, applications, and services (MOBISYS). ACM, pp 77–90
Zuccon G, Leelanupab T, Whiting S, Yilmaz E, Jose JM, Azzopardi L (2012) Crowdsourcing interactions: using crowdsourcing for evaluating interactive information retrieval systems. Info Retr J Spec Issue Crowdsourc
Recommended Readings
Barr J, Cabrera LF (2006) AI gets a brain. Queue 4(4):24–29
Bederson BB, Quinn A (2011b) Participation in human computation. In: CHI workshop on crowdsourcing and human computation. ACM
Bell RM, Koren Y (2007) Lessons from the Netflix prize challenge. ACM SIGKDD Explor Newsl 9(2):75–79
Benkler Y (2002) Coase’s penguin, or, Linux and “the nature of the firm”. Yale Law J 112(3):369–446
Boyd D (2011) What is the role of technology in human trafficking? http://www.zephoria.org/thoughts/archives/2011/12/07/tech-trafficking.html
Bryant SL, Forte A, Bruckman A (2005) Becoming Wikipedian: transformation of participation in a collaborative online encyclopedia. In: Proceedings of the 2005 international ACM SIGGROUP conference on supporting group work. ACM, pp 1–10
Chen JJ, Menezes NJ, Bradley AD, North TA (2011) Opportunities for crowdsourcing research on Amazon Mechanical Turk. In: CHI workshop on crowdsourcing and human computation
Chi EH, Bernstein MS (2012) Leveraging online populations for crowdsourcing: guest editors’ introduction to the special issue. IEEE Internet Comput 16(5):10–12
Cushing E (2013) Amazon Mechanical Turk: the digital sweatshop. UTNE reader. www.utne.com/science-technology/amazon-mechanical-turk-zm0z13jfzlin.aspx
Dekel O, Shamir O (2008) Learning to classify with missing and corrupted features. In: Proceedings of the 25th international conference on machine learning. ACM, pp 216–223
Grady C, Lease M (2010) Crowdsourcing document relevance assessment with Mechanical Turk. In: Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon’s Mechanical Turk. Association for Computational Linguistics, Los Angeles, pp 172–179
Hecht B, Teevan J, Morris MR, Liebling D (2012) Searchbuddies: bringing search engines into the conversation. In: Proceedings of ICWSM 2012
Irwin A (2001) Constructing the scientific citizen: science and democracy in the biosciences. Public Underst Sci 10(1):1–18
Lease M, Yilmaz E (2013) Crowdsourcing for information retrieval: introduction to the special issue. Info Retr 16(4):91–100
Levine BN, Shields C, Margolin NB (2006) A survey of solutions to the Sybil attack (Tech. Rep.), University of Massachusetts Amherst, Amherst
McCreadie R, Macdonald C, Ounis I (2012) Crowdterrier: automatic crowdsourced relevance assessments with terrier. In: Proceedings of the 35th international ACM SIGIR conference on research and development in information retrieval. ACM, pp 1005–1005
Mitchell S (2010) Inside the online sweatshops. In: PC pro magazine. www.pcpro.co.uk/features/360127/inside-the-online-sweatshops
Narula P, Gutheim P, Rolnitzky D, Kulkarni A, Hartmann B (2011) Mobileworks: a mobile crowdsourcing platform for workers at the bottom of the pyramid. In: AAAI human computation workshop, San Francisco
Oleson D, Sorokin A, Laughlin G, Hester V, Le J, Biewald L (2011) Programmatic gold: targeted and scalable quality assurance in crowdsourcing. In: AAAI workshop on human computation, San Francisco
Pontin J (2007) Artificial intelligence, with help from the humans. New York Times, 25 March 2007
Shaw A (2013) Some initial thoughts on the otey vs crowdflower case. http://fringethoughts.wordpress.com/2013/01/09/some-initial-thoughts-on-the-otey-vs-crowdflower-case/
Smyth P, Fayyad U, Burl M, Perona P, Baldi P (1995) Inferring ground truth from subjective labelling of Venus images. Adv Neural Inf Process Syst, pp 1085–1092
Stvilia B, Twidale MB, Smith LC, Gasser L (2008) Information quality work organization in Wikipedia. J Am Soc Info Sci Technol 59(6):983–1001
Sunstein CR (2006) Infotopia: how many minds produce knowledge. Oxford University Press, Oxford
Vincent D (2011) China used prisoners in lucrative internet gaming work. The Guardian, 25 May 2011
von Ahn L (2005) Human computation. PhD thesis, Carnegie Mellon University (Tech. Rep., CMU-CS-05-193)
von Ahn L, Dabbish L (2008) Designing games with a purpose. Commun ACM 51(8):58–67
von Ahn L, Maurer B, McMillen C, Abraham D, Blum M (2008) Recaptcha: human-based character recognition via web security measures. Science 321(5895):1465–1468
Wallach H, Vaughan JW (2010) Workshop on computational social science and the wisdom of crowds. In: NIPS, Whistler
Wauthier FL, Jordan MI (2011) Bayesian bias mitigation for crowdsourcing. In: Proceedings of NIPS
Acknowledgments
We thank Jessica Hullman for her thoughtful comments and editing regarding broader impacts of crowdsourcing (Lease et al. 2013). We also thank AMT personnel for the very useful platform they have built and their clear interest in supporting academic researchers using AMT. Last but not least, we thank the global crowd of individuals who have contributed and continue to contribute to crowdsourcing projects worldwide. Thank you for making crowdsourcing possible.
Matthew Lease was supported in part by an NSF CAREER award, a DARPA Young Faculty Award N66001-12-1-4256, and a Temple Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this entry are those of the authors alone and do not express the views of any of the funding agencies.
Copyright information
© 2018 Springer Science+Business Media LLC, part of Springer Nature
About this entry
Cite this entry
Lease, M., Alonso, O. (2018). Crowdsourcing and Human Computation: Introduction. In: Alhajj, R., Rokne, J. (eds) Encyclopedia of Social Network Analysis and Mining. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-7131-2_107
DOI: https://doi.org/10.1007/978-1-4939-7131-2_107
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4939-7130-5
Online ISBN: 978-1-4939-7131-2