
Trust of Learning Systems: Considerations for Code, Algorithms, and Affordances for Learning

Chapter in: Human and Machine Learning

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

This chapter synthesises the literature on Machine Learning (ML), trust in automation, trust in code, and transparency. It introduces the concept of ML and discusses three drivers of trust in ML-based systems: code structure; algorithm factors, namely performance, transparency, and error management; and affordances for learning. Code structure offers a static affordance for trustworthiness evaluations that can be both deep and peripheral. The overall performance of the algorithms and the transparency of their inputs, process, and outputs provide an opportunity for dynamic, experiential trustworthiness evaluations. Predictability and understanding are the foundations of trust and must be considered in ML applications. Many ML paradigms neglect the notion of environmental affordances for learning, which, from a trust perspective, may in fact be the most important differentiator between ML systems and traditional automation: the learning affordances provide a contextualised pedigree for trust considerations. In combination, the trustworthiness aspects of the code, the dynamic performance and transparency, and the learning affordances offer structural, evidenced performance and understanding, as well as pedigree information, from which ML approaches can be evaluated.
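The abstract's notion of transparency of inputs, process, and outputs can be made concrete in code. The following is a minimal illustrative sketch (not from the chapter): a hypothetical wrapper that surfaces what a classifier saw, a plain-language account of how it decided, and its reported confidence, so an operator has material for the kind of experiential trustworthiness evaluation described above. The rule-based scorer stands in for a learned model and is an assumption for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TransparentPrediction:
    inputs: dict        # the features the system saw
    process: str        # plain-language description of how the output was reached
    label: str          # the system's output
    confidence: float   # reported certainty, to help calibrate operator reliance

def predict_with_transparency(features: dict) -> TransparentPrediction:
    # Hypothetical rule-based stand-in for a learned model: score the input,
    # then threshold the score to produce a label.
    score = 0.9 if features.get("threat_signature") else 0.2
    label = "alert" if score > 0.5 else "clear"
    return TransparentPrediction(
        inputs=features,
        process=f"threat score {score:.1f} thresholded at 0.5",
        label=label,
        confidence=score,
    )

report = predict_with_transparency({"threat_signature": True})
print(report.label, report.confidence)  # alert 0.9
```

Exposing the `process` and `confidence` fields alongside the label is one simple way a system can support the dynamic, evidence-based trust evaluations the chapter discusses, rather than presenting outputs as an opaque verdict.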




Corresponding author

Correspondence to Joseph Lyons.


Copyright information

© 2018 This is a U.S. government work and its text is not subject to copyright protection in the United States; however, its text may be subject to foreign copyright protection

About this chapter


Cite this chapter

Lyons, J., Ho, N., Friedman, J., Alarcon, G., Guznov, S. (2018). Trust of Learning Systems: Considerations for Code, Algorithms, and Affordances for Learning. In: Zhou, J., Chen, F. (eds) Human and Machine Learning. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-90403-0_13


  • DOI: https://doi.org/10.1007/978-3-319-90403-0_13


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-90402-3

  • Online ISBN: 978-3-319-90403-0

  • eBook Packages: Computer Science, Computer Science (R0)
