Abstract
The objective of AI-based masked language modeling (MLM) is to mask one or more words in a sentence and have the natural language processing (NLP) model identify the masked words given the other words (the context) in the sentence. In this study, using real examples collected from an online translation study group, we identify several strategies humans use to perform masked language modeling tasks: looking up definitions; comparing and contrasting candidates; relying on common sense and world knowledge; relying on statistical regularities from past experience; and building an augmented context (a list of keywords) to use with a web search engine. In terms of human versus machine performance, the MLM algorithm performs at the level of an average human expert, but it still cannot compete with the best human performance. The human experts' strengths are awareness of global knowledge and deep understanding of concepts, events, public opinion, and so on; their usual weaknesses are gaps in domain knowledge and human biases. The machine's strength is the comprehensive coverage it gleans from learning on a large corpus, so it can sometimes fill human experts' knowledge gaps and correct human bias; however, it can suffer from a lack of true understanding and from machine bias caused by misleading statistical patterns. One trait shared by human experts and the MLM algorithm is that both can make decisions based on statistical observations. It therefore stands to reason that a human and a machine can form a team to achieve better overall performance. Because humans are often unaware of their knowledge limitations or biases, the AI algorithm should take a proactive role in making suggestions rather than a reactive role that is activated only when the human feels the need. In addition, it would be beneficial for the AI algorithm to list definitions and sample usages of the words it suggests, because humans need to be educated as well.
The important skills demonstrated by human experts appear to be the ability to manipulate context for sensitivity analyses and the ability to gauge context-word interactions in order to understand the context. To improve human-machine teaming, it would be beneficial to incorporate human creativity into interface and interaction designs: humans can quickly try different context manipulations and word-context combinations, and the machine can provide quick feedback drawn from its extensive knowledge base built from a large corpus. This teaming arrangement helps join the forces of human creativity and machine intelligence.
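To make the masked-word task concrete, here is a toy, purely count-based predictor (a hypothetical illustration of "decisions based on statistical observations", not the BERT-style model evaluated in the paper): each candidate word is scored by how strongly it co-occurs with the surrounding context words in a small corpus, and the highest-scoring word fills the masked slot.

```python
from collections import Counter

def predict_masked(sentence_tokens, corpus_sentences):
    """Toy masked-word predictor: score each candidate word by how many
    context words from the masked sentence co-occur with it in the corpus."""
    context = set(sentence_tokens) - {"[MASK]"}
    scores = Counter()
    for sent in corpus_sentences:
        words = sent.lower().split()
        overlap = len(context & set(words))  # shared context words
        if overlap == 0:
            continue
        for w in words:
            if w not in context:
                scores[w] += overlap  # credit candidates by context overlap
    return scores.most_common(1)[0][0] if scores else None

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the mouse",
]
tokens = "the [MASK] sat on the mat".split()
print(predict_masked(tokens, corpus))  # prints "cat"
```

A real MLM such as BERT replaces these raw co-occurrence counts with learned contextual representations, but the underlying idea of ranking candidates by statistical fit with the context is the same.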
© 2020 Springer Nature Switzerland AG
Qian, M., Qian, D.: Human Versus Machine and Human-Machine Teaming on Masked Language Modeling Tasks. In: Stephanidis, C., Kurosu, M., Degen, H., Reinerman-Jones, L. (eds.) HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence. HCII 2020. Lecture Notes in Computer Science, vol. 12424. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-60117-1_37
Print ISBN: 978-3-030-60116-4
Online ISBN: 978-3-030-60117-1