Task priority reduces an adverse effect of task load on automation trust in a dynamic multitasking environment

  • Original Article
Cognition, Technology & Work

Abstract

The present study examined how task priority influences operators’ scanning patterns and trust ratings toward imperfect automation. Previous research demonstrated that participants reported lower trust in, and fixated less frequently on, the visual display for a secondary task assisted by imperfect automation when the primary task demanded more attention. One account for this phenomenon is that the increased primary task demand induced participants to prioritize the primary task over the secondary task. The present study asked participants to perform a tracking task, a system monitoring task, and a resource management task simultaneously using the Multi-Attribute Task Battery (MATB) II. Automation assisted the system monitoring task with 70% reliability. Task load was manipulated via the difficulty of the tracking task. Participants were explicitly instructed either to prioritize the tracking task over all other tasks (tracking priority condition) or to treat all tasks with equal priority (equal priority condition). The results demonstrate effects of task load on attention distribution, task performance, and trust ratings. Furthermore, participants in the equal priority condition reported lower performance-based trust when the tracking task required more frequent manual input (high task load), whereas no effect of task load was observed in the tracking priority condition. Task priority can thus modulate automation trust by eliminating the adverse effect of task load in a dynamic multitasking environment.
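
The central automation manipulation is the 70% reliability of the aid for the system monitoring task. As an illustrative sketch only (this is not the authors’ code or part of MATB-II, and the event rate and failure probability used below are arbitrary placeholders), the following Python snippet shows one common way such a reliability level is realized in simulation: on each monitoring event, the aid’s judgment agrees with the true system state with probability 0.7.

```python
# Hypothetical illustration of a 70%-reliable monitoring aid.
# Not the authors' implementation and not part of MATB-II; the failure
# probability and number of events are arbitrary placeholders.
import random

def simulate_monitoring_aid(n_events: int = 10_000,
                            reliability: float = 0.70,
                            p_failure: float = 0.5,
                            seed: int = 1) -> float:
    """Return the observed proportion of events on which the aid was correct."""
    rng = random.Random(seed)
    n_correct = 0
    for _ in range(n_events):
        failure_present = rng.random() < p_failure    # true state of the monitored system
        aid_correct = rng.random() < reliability      # the 70% reliability manipulation
        aid_alerts = failure_present if aid_correct else not failure_present
        n_correct += (aid_alerts == failure_present)
    return n_correct / n_events

print(f"Observed aid accuracy: {simulate_monitoring_aid():.3f}")  # approximately 0.70
```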


References

  • Baddeley AD, Hitch G (1974) Working memory. Psychol Learn Motiv 8:47–89

  • Bailey NR, Scerbo MW (2007) Automation-induced complacency for monitoring highly reliable systems: the role of task complexity, system experience, and operator trust. Theor Issues Ergon Sci 8:321–348

  • Bainbridge L (1983) Ironies of automation. Automatica 19:775–779

  • Barber B (1983) The logic and limits of trust. Rutgers University Press, New Brunswick

  • Billings CE (1997) Aviation automation: the search for a human centered approach. Erlbaum, Mahwah

  • Breznitz S (1984) Cry wolf: the psychology of false alarms. Erlbaum, Hillsdale

  • Chancey ET, Bliss JP, Yamani Y, Handley HAH (2017) Trust and the compliance reliance paradigm: the effects of risk, error bias, and reliability on trust and dependence. Hum Factors 57:947–958

  • Chancey ET, Politowicz MS, Le Vie L (2021) Enabling advanced air mobility operations through appropriate trust in human-autonomy teaming: foundational research approaches and applications. In: AIAA Scitech 2021 Forum, p 0880

  • Comstock JR, Arnegard RJ (1992) The multi-attribute task battery for human operator workload and strategic behavior research (NASA Tech. Memorandum 104174). NASA Langley Research Center, Hampton

  • Dixon SR, Wickens CD (2006) Automation reliability in unmanned aerial vehicle control: a reliance-compliance model of automation dependence in high workload. Hum Factors 48:474–486

  • Freed M (2000) Reactive prioritization. In: Proceedings of the international workshop on planning and scheduling in space, San Francisco, 2000

  • Getty DJ, Swets JA, Pickett RM, Gonthier D (1995) System operator response to warnings of danger: a laboratory investigation of the effects of the predictive value of a warning on human response time. J Exp Psychol Appl 1:19–33

  • Gilbert KM, Wickens CD (2017) Experimental evaluation of STOM in a business setting. In: Proceedings of the human factors and ergonomics society annual meeting, vol 61. SAGE publications, Los Angeles, p 767–771

  • Gopher D, Brickner M, Navon D (1982) Different difficulty manipulations interact differently with task emphasis: evidence for multiple resources. J Exp Psychol Hum Percept Perform 8:146–157

  • Gutzwiller RS, Wickens CD, Clegg BA (2014) Workload overload modeling: an experiment with MATB II to inform a computational model of task management. In: Proceedings of the human factors and ergonomics society annual meeting, vol 58. SAGE publications, Los Angeles, p 849–853

  • Gutzwiller RS, Sitzman DM (2017) Examining task priority effects in multi-task management. In: Proceedings of the human factors and ergonomics society annual meeting, vol 61. SAGE publications, Los Angeles, p 762–766

  • Hart SG (2006) NASA-task load index (NASA-TLX); 20 years later. In: Proceedings of the human factors and ergonomics society annual meeting, vol 50. SAGE publications, Los Angeles, p 904–908

  • Hart SG, Staveland LE (1988) Development of NASA-TLX (task load index): results of empirical and theoretical research. Adv Psychol 52:139–183

  • Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57:407–434

  • Horrey WJ, Wickens CD, Consalus KP (2006) Modeling drivers’ visual attention allocation while interacting with in-vehicle technologies. J Exp Psychol Appl 12:67–78

  • Iani C, Wickens CD (2007) Factors affecting task management in aviation. Hum Factors 49:16–24

  • Jeffreys H (1961) Theory of probability, 3rd edn. University Press, Oxford

  • Jian J, Bisantz AM, Drury CG (2000) Foundations for an empirically determined scale of trust in automated systems. Int J Cogn Ergon 4:53–71

  • Kahneman D (1973) Attention and effort. Prentice Hall, Englewood Cliffs

  • Karpinsky ND, Chancey ET, Palmer DB, Yamani Y (2018) Automation trust and attention allocation in multitasking workspace. Appl Ergon 70:194–201

  • Lee JD, Moray N (1992) Trust, control strategies and allocation of function in human machine systems. Ergonomics 35:1243–1270

  • Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46:50–80

  • Li H, Wickens CD, Sarter N, Sebok A (2014) Stages and levels of automation in support of space teleoperations. Hum Factors 56:1050–1061

  • Loft S, Chapman M, Smith RE (2016) Reducing prospective memory error and costs in simulated air traffic control: external aids, extending practice, and removing perceived memory requirements. J Exp Psychol Appl 22:272–284

  • Long S, Sato T, Millner N, Mirabelli J, Loranger R, Yamani Y (2020) Empirically and theoretically driven scales on automation trust: a multi-level confirmatory factor analysis. In: Proceedings of the human factors and ergonomics society annual meeting, vol 64. SAGE publications, Los Angeles, p 1829–1832

  • Lyons JB, Stokes CK (2012) Human–human reliance in the context of automation. Hum Factors 54:112–121

  • Mackworth NH (1948) The breakdown of vigilance during prolonged visual search. Quart J Exp Psychol 1:6–21

  • Molloy R, Parasuraman R (1996) Monitoring an automated system for a single failure: vigilance and task complexity effects. Hum Factors 38:311–322

  • Muir BM (1987) Trust between humans and machines, and the design of decision aids. Int J Man Mach Stud 27:527–539

  • Muir BM (1994) Trust in automation: part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37:1905–1922

  • Muir BM, Moray N (1996) Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics 39:429–460

  • Parasuraman R, Riley V (1997) Humans and automation: use, misuse, disuse, abuse. Hum Factors 39:230–253

  • Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybern Part A Syst Hum 30:286–297

  • Rempel JK, Holmes JG, Zanna MP (1985) Trust in close relationships. J Pers Soc Psychol 49:95–112

  • Rouder JN, Morey RD (2012) Default Bayes factors for model selection in regression. Multivar Behav Res 47:877–903

  • Santiago-Espada Y, Myer RR, Latorella KA, Comstock JR (2011) The Multi-attribute task battery II (MATB-II) software for human performance and workload research: a user’s guide (NASA/TM-2011–217164). National Aeronautics and Space Administration, Langley Research Center, Hampton

  • Sato T, Yamani Y, Liechty M, Chancey ET (2020) Automation trust increases under high-workload multitasking scenarios involving risk. Cogn Technol Work 22:399–407

  • Schaefer KE, Chen JYC, Szalma JL, Hancock PA (2016) A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems. Hum Factors 58:377–400

  • Schriver AT, Morrow DG, Wickens CD, Talleur DA (2017) Expertise differences in attentional strategies related to pilot decision making. Decision making in aviation. Routledge, London, pp 371–386

  • Sorkin RD (1988) Why are people turning off our alarms? J Acoust Soc Am 84:1107–1108

  • Warm JS, Parasuraman R, Matthews G (2008) Vigilance requires hard mental work and is stressful. Hum Factors 50:433–441

  • Wetzels R, Matzke D, Lee MD, Rouder JN, Iverson GJ, Wagenmakers EJ (2011) Statistical evidence in experimental psychology: an empirical comparison using 855 t tests. Perspect Psychol Sci 6:291–298

  • Wickens CD (2002) Multiple resources and performance prediction. Theor Issues Ergon Sci 3:159–177

  • Wickens CD, Alexander AL (2009) Attentional tunneling and task management in synthetic vision displays. Int J Aviat Psychol 19:182–199

  • Wickens CD, Goh J, Helleburg J, Horrey WJ, Talleur DA (2003) Attentional models of multi-task pilot performance using advanced display technology. Hum Factors 45:360–380

  • Wickens CD, Hollands JG, Banbury S, Parasuraman R (2015) Engineering psychology and human performance. Psychology Press

  • Wickens CD, Gutzwiller RS, Vieane A, Clegg BA, Sebok A, Janes J (2016) Time sharing between robotics and process control: validating a model of attention switching. Hum Factors 58:322–343

  • Yamani Y, Horrey WJ (2018) A theoretical model of human-automation interaction grounded in resource allocation policy during automated driving. Int J Hum Factors Ergonom 5:225–239

  • Yamani Y, Long SK, Itoh M (2020) Human–automation trust to technologies for naïve users amidst and following the COVID-19 pandemic. Hum Factors 62:1087–1094

  • Young MS, Stanton NA (2002) Malleable attentional resources theory: a new explanation for the effects of mental underload on performance. Hum Factors 44:365–375

  • Vanderhaegen F, Wolff M, Mollard R (2020) Non-conscious errors in the control of dynamic events synchronized with heartbeats: a new challenge for human reliability study. Saf Sci 129:1–11

Author information

Contributions

TS and YY developed the experimental design and the experimental protocol. TS performed the data analysis and wrote the original manuscript. All authors reviewed and edited the manuscript.

Corresponding author

Correspondence to Tetsuya Sato.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

Scale items from the Chancey et al. (2017) trust questionnaire. (The numbers indicate the order in which the items were presented to participants.) A brief scoring sketch follows the item list.

Performance

2. For me to perform well, I can rely on the automated aid to function.

4. The automated aid’s advice reliably helps me perform well.

5. The automated aid’s advice consistently helps me perform well.

12. The automated aid always provides the advice I require to help me perform well.

13. The automated aid adequately analyzes the system consistently, to help me perform well.

Process

3. It is easy to follow what the automated aid does to help me perform well.

6. I understand how the automated aid will help me perform well.

8. Although I may not know exactly how the automated aid works, I know how to use it to perform well.

10. To help me perform well, I recognize what I should do to get the advice I need from the automated aid the next time I use it.

11. I will be able to perform well the next time I use the automated aid because I understand how it behaves.

Purpose

1. Even when the automated aid gives me unusual advice, I am certain that the aid’s advice will help me to perform well.

7. Even if I have no reason to expect that the automated aid will function properly, I still feel certain that it will help me to perform well.

9. To help me perform well, I believe advice from the automated aid even when I don’t know for certain that it is correct.
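
The items above form three subscales (Performance, Process, and Purpose) presented in a mixed order. The appendix does not reproduce the response format or scoring procedure; as a minimal sketch, assuming numeric ratings and subscale scores computed as the mean of each subscale’s items, scoring could look as follows (item numbers are the presentation-order numbers listed above).

```python
# Minimal scoring sketch for the Chancey et al. (2017) trust items listed above.
# Assumptions (not stated in this appendix): responses are numeric ratings and
# each subscale score is the mean of its items.
SUBSCALES = {
    "performance": [2, 4, 5, 12, 13],
    "process": [3, 6, 8, 10, 11],
    "purpose": [1, 7, 9],
}

def score_trust(responses: dict[int, float]) -> dict[str, float]:
    """Map item number -> rating into mean subscale scores."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }

# Example: a participant who rates every item 5 scores 5.0 on each subscale.
print(score_trust({i: 5 for i in range(1, 14)}))
```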

Appendix B

Scale items from the Jian et al. (2000) trust questionnaire.

1. The system is deceptive.

2. The system behaves in an underhanded manner.

3. I am suspicious of the system’s intent, action, or outputs.

4. I am wary of the system.

5. The system’s actions will have a harmful or injurious outcome.

6. I am confident in the system.

7. The system provides security.

8. The system has integrity.

9. The system is dependable.

10. The system is reliable.

11. I can trust the system.

12. I am familiar with the system.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sato, T., Islam, S., Still, J.D. et al. Task priority reduces an adverse effect of task load on automation trust in a dynamic multitasking environment. Cogn Tech Work 25, 1–13 (2023). https://doi.org/10.1007/s10111-022-00717-z
