Abstract
Labeling is a cornerstone of supervised machine learning. In industrial applications, however, data is often unlabeled, which complicates its use for machine learning. Although there are well-established labeling techniques such as crowdsourcing, active learning, and semi-supervised learning, these still do not provide accurate and reliable labels for every industrial machine learning use case. In this context, industry still relies heavily on manually annotating and labeling its data. This study investigates the challenges that companies experience when annotating and labeling their data. We performed a case study using semi-structured interviews with data scientists at two companies to explore the problems they face when labeling and annotating their data. This paper provides two contributions: we identify challenges in the industrial labeling process, and we propose mitigation strategies for these challenges.
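To make one of the techniques named above concrete, the following is a minimal sketch of pool-based active learning with uncertainty sampling, the idea of letting the model choose which examples a human annotator should label next. This is an illustrative example only, not the method studied in the paper; the dataset, model, seed size, and query budget are arbitrary assumptions.

```python
# A minimal sketch of pool-based active learning with uncertainty sampling.
# Illustrative only: dataset, model, and budget are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Start with a small labeled seed set; the rest is the unlabeled pool.
labeled = list(rng.choice(len(X), size=10, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # query budget: label 20 more points, one per round
    model.fit(X[labeled], y[labeled])
    # Query the pool point the current model is least certain about.
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)
    query = pool[int(np.argmax(uncertainty))]
    labeled.append(query)  # an oracle (human annotator) supplies the label
    pool.remove(query)

print(f"Accuracy after {len(labeled)} labels:",
      model.fit(X[labeled], y[labeled]).score(X, y))
```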
Acknowledgements
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Fredriksson, T., Mattos, D.I., Bosch, J., Olsson, H.H. (2020). Data Labeling: An Empirical Investigation into Industrial Challenges and Mitigation Strategies. In: Morisio, M., Torchiano, M., Jedlitschka, A. (eds) Product-Focused Software Process Improvement. PROFES 2020. Lecture Notes in Computer Science, vol. 12562. Springer, Cham. https://doi.org/10.1007/978-3-030-64148-1_13
DOI: https://doi.org/10.1007/978-3-030-64148-1_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-64147-4
Online ISBN: 978-3-030-64148-1
eBook Packages: Computer Science, Computer Science (R0)