Mobilizing Crowdwork: A Systematic Assessment of the Mobile Usability of HITs

Published: 29 April 2022

Abstract

There is growing interest in extending crowdwork beyond its traditional desktop-centric design to mobile devices (e.g., smartphones). However, mobilizing crowdwork remains tedious due to a limited understanding of the mobile usability requirements of human intelligence tasks (HITs). We present a taxonomy of characteristics that defines the mobile usability of HITs on smartphones. The taxonomy was developed through a three-step study. In Step 1, we establish an initial design of our taxonomy through a targeted literature analysis. In Step 2, we verify and extend the taxonomy through an online survey of Amazon Mechanical Turk crowdworkers. Finally, in Step 3, we demonstrate the taxonomy’s utility by applying it to analyze the mobile usability of a dataset of scraped HITs. In this paper, we present the iterative development of the taxonomy, highlighting the observed practices and preferences around mobile crowdwork. We conclude with the implications of our taxonomy for accessibly and ethically mobilizing crowdwork not only on smartphones, but beyond them.

Supplementary Material

  • Supplemental Materials (3491102.3501876-supplemental-materials.zip)
  • Video Preview (3491102.3501876-video-preview.mp4)


Cited By

  • (2022) Beyond a One-Size-Fits-All Approach: Towards Personalizing Multi-device Setups in Crowdwork. Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers. https://doi.org/10.1145/3544793.3560347, 30–31. Online publication date: 11 Sep 2022.

    Published In

    CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
    April 2022
    10459 pages
    ISBN:9781450391573
    DOI:10.1145/3491102

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. Crowdwork
    2. Human Intelligence Tasks
    3. Mobile Usability
    4. Taxonomy

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    CHI '22: CHI Conference on Human Factors in Computing Systems
    April 29 – May 5, 2022
    New Orleans, LA, USA

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

