Finding the Path Toward Design of Synergistic Human-Centric Complex Systems

  • Chapter
Engineering Artificially Intelligent Systems

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13000)

Abstract

Modern decision support systems are becoming increasingly sophisticated due to the unprecedented volume of data that must be processed through their underlying information architectures. As advances are made in artificial intelligence and machine learning (AI/ML), a natural expectation is that the complexity and sophistication of these systems will become daunting in terms of comprehending their design complexity, ensuring effective operations, and managing total lifecycle costs. Because such systems operate holistically with humans, the interdependencies created between the information architectures, AI/ML processes, and humans raise a fundamental question: how do we design complex systems so as to yield and exploit effective and efficient human-machine interdependencies and synergies? A simple example of these interdependencies is the effect of human actions changing the behavior of algorithms and vice versa. These algorithms may extract and fuse heterogeneous data and may employ a variety of AI/ML approaches, ranging from handcrafted rules to supervised and unsupervised learning, coupled with federated models and simulations to reason about and infer future outcomes.

The purpose of this chapter is to gain high-level insight into such interdependencies by examining three interrelated topics that can be viewed as working in synergy toward the development of human-centric complex systems: Artificial Intelligence for Systems Engineering (AI4SE), Systems Engineering for Artificial Intelligence (SE4AI), and Human-Centered Design (HCD) together with Human Factors (HF). From the viewpoint of AI4SE, topics for consideration include approaches for identifying the design parameters of a complex system that ensure code maintainability, minimize unexpected system failures, and keep the assumptions behind the algorithms consistent with the required input data, all while optimizing the appropriate level of interaction with, and feedback from, the human. Considering SE4AI, how can the synergies between different AI/ML approaches, from handcrafted rules to strictly data-driven learning, be realized within the data-to-decisions information pipeline, again while maximally leveraging human inputs? Through the lens of HCD and HF, a system is likely to be necessarily complex, and a key task for the designer may be to ensure an optimal balance between the human systems or software developer and the end-user. For instance, can principles from HCD/HF engineering permit us to design better systems that enhance end-users' strengths (e.g., intuition, novel thinking) while helping to overcome their limitations (e.g., helping a user maintain focus and attention during tasks that require significant multi-tasking)?
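
As a rough illustration of the kind of human-machine interdependency described above, the minimal sketch below shows a toy data-to-decisions pipeline in which a handcrafted rule and a simple data-driven threshold are fused, disagreements are referred to a human, and the human's judgment in turn adjusts the learned component. This sketch is not taken from the chapter; all names (SensorReading, fuse, decide, human_feedback) and the specific fusion and update rules are hypothetical, chosen only to make the interdependency concrete.

```python
"""Hypothetical sketch of a data-to-decisions pipeline with a human in the loop.

Not from the chapter: the classes, functions, and thresholds below are
illustrative assumptions, not the authors' method.
"""

from dataclasses import dataclass
from statistics import mean
from typing import Callable, List


@dataclass
class SensorReading:
    source: str    # which heterogeneous feed produced the value
    value: float   # normalized observation in [0, 1]


def fuse(readings: List[SensorReading]) -> float:
    """Toy fusion step: average the heterogeneous inputs into one score."""
    return mean(r.value for r in readings)


def handcrafted_rule(score: float) -> bool:
    """Hand-crafted expert rule: flag anything above a fixed threshold."""
    return score > 0.8


def make_learned_rule(threshold: float) -> Callable[[float], bool]:
    """Stand-in for a data-driven model: a threshold 'learned' offline."""
    return lambda score: score > threshold


def decide(score: float, learned_rule: Callable[[float], bool]) -> str:
    """Combine handcrafted and learned components; disagreement defers to a human."""
    rule_says = handcrafted_rule(score)
    model_says = learned_rule(score)
    if rule_says == model_says:
        return "alert" if rule_says else "dismiss"
    return "refer-to-human"  # interdependency: the human resolves the ambiguity


def human_feedback(decision: str, human_label: bool, threshold: float) -> float:
    """Human action changes the algorithm: nudge the learned threshold."""
    if decision == "refer-to-human":
        return threshold - 0.05 if human_label else threshold + 0.05
    return threshold


if __name__ == "__main__":
    threshold = 0.7
    readings = [SensorReading("radar", 0.75), SensorReading("text-report", 0.72)]
    score = fuse(readings)
    decision = decide(score, make_learned_rule(threshold))
    print(decision)                               # -> refer-to-human
    threshold = human_feedback(decision, human_label=True, threshold=threshold)
    print(f"updated threshold: {threshold:.2f}")  # human input shifted the model
```

The point of the toy example is only the feedback loop: the human's resolution of an ambiguous case feeds back into the data-driven component, so the algorithm's future behavior depends on human actions and vice versa.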


Notes

  1. The complexities and resultant opaqueness of AI/ML processes have demanded that an explanation utility be delivered with these processes to aid users in understanding, trusting, and operating systems that embed these complex operations; see [3] as an example.

References

  1. Lluch, I., Golkar, A.: Architecting federations of systems: a framework for capturing synergy. Syst. Eng. 22(4), 295–312 (2019)

  2. Gunning, D.: Explainable artificial intelligence (XAI). Technical report DARPA-BAA-16-53, Defense Advanced Research Projects Agency, Arlington (2016)

  3. Barredo Arrieta, A., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020). https://doi.org/10.1016/j.inffus.2019.12.012

  4. Chalmers, D.J.: Strong and weak emergence. In: Davies, P., Clayton, P. (eds.) The Re-Emergence of Emergence: The Emergentist Hypothesis From Science to Religion, Oxford University Press, Oxford (2006)

  5. Neace, K.S., Chipkevich, M.B.A.: Designed complex adaptive systems exhibiting weak emergence. In: IEEE National Aerospace and Electronics Conference (NAECON 2018), pp. 214–221, July 2018. https://doi.org/10.1109/NAECON.2018.8556693

  6. INCOSE: INCOSE Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, 4th edn. Wiley, Hoboken (2015)

  7. Rouse, W.B.: Complex engineered, organizational and natural systems: issues underlying the complexity of systems and fundamental research needed to address these issues. Syst. Eng. 10(3), 260–271 (2007)

  8. Raz, A.K., Kenley, C.R., DeLaurentis, D.A.: System architecting and design space characterization. Syst. Eng. 21(3), 227–242 (2018)

  9. Raz, A.K., Llinas, J., Mittu, R., Lawless, W.: Engineering for emergence in information fusion systems: a review of some challenges. In: 22nd International Conference on Information Fusion (FUSION 2019), pp. 1–8, July 2019

  10. Higginbotham, S.: Hey, data scientists: show your machine-learning work. IEEE Spectrum

  11. Sculley, D., et al.: Hidden technical debt in machine learning systems. In: Cortes, C., Lawrence, N.D., Lee, D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 28, pp. 2503–2511. Curran Associates Inc., Red Hook (2015)

  12. Barabási, A.-L.: Network science: understanding the internal organization of complex systems (invited talk). In: 2012 AAAI Spring Symposium Series, March 2012. https://www.aaai.org/ocs/index.php/SSS/SSS12/paper/view/4333. Accessed 29 Apr 2020

  13. Liu, Y.-Y., Barabási, A.-L.: Control principles of complex systems. Rev. Mod. Phys. 88(3), 035006 (2016). https://doi.org/10.1103/RevModPhys.88.035006

  14. Hansen, L.P.: Nobel lecture: uncertainty outside and inside economic models. J. Polit. Econ. 122(5), 945–987 (2014). https://doi.org/10.1086/678456

  15. Mann, R.P.: Collective decision making by rational individuals. PNAS 115(44) (2018)

  16. Lawless, W.F.: The interdependence of autonomous human-machine teams: the entropy of teams, but not individuals, advances science. Entropy 21(12), 1195 (2019)

  17. Pearl, J.: Reasoning with cause and effect. AI Mag. 23(1), 95 (2002). https://doi.org/10.1609/aimag.v23i1.1612

  18. Pearl, J., Mackenzie, D.: AI can’t reason why. Wall Street J. (2018). https://www.wsj.com/articles/ai-cant-reason-why-1526657442. Accessed 27 Apr 2020

  19. Lawless, W.F., Mittu, R., Sofge, D., Hiatt, L.: Artificial intelligence, autonomy, and human-machine teams – interdependence, context, and explainable AI. AI Mag. 40(3), 5–13 (2019)

  20. Cummings, J.: Team Science successes and challenges. In: National Science Foundation Sponsored Workshop on Fundamentals of Team Science and the Science of Team Science, Bethesda (2015)

  21. Cooke, N.: Effective human-artificial intelligence teaming. In: AAAI-2020 Spring Symposium, Stanford (2020)

  22. NTSB: Preliminary Report Released for Crash Involving Pedestrian, Uber Technologies Inc., Test Vehicle. https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx. Accessed 13 Mar 2019

  23. Fouad, H., Moskowitz, I., Brock, D., Scott, M.: Integrating expert human decision-making in artificial intelligence applications. In: Lawless, W.F., Mittu, R., Sofge, D. (eds.) Human-Machine Shared Contexts. Elsevier, London (2020)

  24. Marois, R., Ivanoff, J.: Capacity limits of information processing in the brain. Trends Cogn. Sci. 9(6), 296–305 (2005). https://doi.org/10.1016/j.tics.2005.04.010

  25. Chérif, L., Wood, V., Marois, A., Labonté, K., Vachon, F.: Multitasking in the military: cognitive consequences and potential solutions. Appl. Cogn. Psychol. 32(4), 429–439 (2018). https://doi.org/10.1002/acp.3415

  26. Brock, D., Wasylyshyn, C., McClimens, B., Perzanowski, D.: Facilitating the watchstander’s voice communications task in future Navy operations. In: MILCOM 2011 Military Communications Conference, pp. 2222–2226, November 2011

  27. Brock, D., Wasylyshyn, C., McClimens, B., Perzanowski, D.: Evaluating listeners’ attention to, and comprehension of, serially interleaved, rate-accelerated speech (2012)

  28. Saaty, T.L.: The Analytic Hierarchy Process. McGraw-Hill, New York (1980)

  29. Martyushev, L.M.: Entropy and entropy production: old misconceptions and new breakthroughs. Entropy 15(4), 1152–1170 (2013). https://doi.org/10.3390/e15041152

  30. Holland, O.T.: Taxonomy for the modeling and simulation of emergent behavior systems. In: Proceedings of the 2007 Spring Simulation Multiconference, vol. 2, pp. 28–35 (2007)

  31. Fromm, J.: On engineering and emergence (2006)

  32. Belani, H., Vuković, M., Car, Ž.: Requirements engineering challenges in building AI-based complex systems (2020). arXiv:1908.11791 [cs]. Accessed 29 Apr 2020

  33. Flach, P.: Performance evaluation in machine learning: the good, the bad, the ugly, and the way forward. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9808–9814 (2019). https://doi.org/10.1609/aaai.v33i01.33019808

  34. Hernández-Orallo, J.: Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement. Artif. Intell. Rev. 48(3), 397–447 (2016). https://doi.org/10.1007/s10462-016-9505-7

  35. Fouad, H., Moskowitz, I.S.: Meta-agents: using multi-agent networks to manage dynamic changes in the internet of things. In: Lawless, W., Mittu, R., Sofge, D., Moskowitz, I.S., Russell, S. (eds.) Artificial Intelligence for the Internet of Everything, Academic Press, pp. 271–281 (2019)

  36. Mihailescu, R.-C., Spalazzese, R., Heyer, C., Davidsson, P.: A role-based approach for orchestrating emergent configurations in the internet of things. Internet of Things. arXiv:1809.09870 [cs], September 2018

  37. Simon, H.A.: The Sciences of the Artificial. MIT Press, Cambridge (2019)

  38. Valckenaers, P., Brussel, H.V., Holvoet, T.: Fundamentals of holonic systems and their implications for self-adaptive and self-organizing systems. In: 2008 Second IEEE International Conference on Self-Adaptive and Self-Organizing Systems Workshops, pp. 168–173, October 2008

  39. Moradi, M., Moradi, M., Bayat, F., Nadjaran Toosi, A.: Collective hybrid intelligence: towards a conceptual framework. Int. J. Crowd Sci. 3(2), 198–220 (2019). https://doi.org/10.1108/IJCS-03-2019-0012

  40. Stephanidis, C., et al.: Seven HCI grand challenges. Int. J. Hum.-Comput. Interact. 35(14), 1229–1269 (2019). https://doi.org/10.1080/10447318.2019.1619259

  41. Boy, G.A.: Human-centered design of complex systems: an experience-based approach. Des. Sci. 3 (2017)

  42. Madni, A.M., Madni, C.C.: Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems 6(4), 44 (2018). https://doi.org/10.3390/systems6040044

  43. Timme, N., Alford, W., Flecker, B., Beggs, J.M.: Multivariate information measures: an experimentalist’s perspective. arXiv:1111.6857 [physics, stat], August 2012. http://arxiv.org/abs/1111.6857. Accessed 29 Apr 2020

Author information

Correspondence to Hesham Y. Fouad.

Copyright information

© 2021 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply

About this chapter

Cite this chapter

Fouad, H.Y., Raz, A.K., Llinas, J., Lawless, W.F., Mittu, R. (2021). Finding the Path Toward Design of Synergistic Human-Centric Complex Systems. In: Lawless, W.F., Llinas, J., Sofge, D.A., Mittu, R. (eds) Engineering Artificially Intelligent Systems. Lecture Notes in Computer Science, vol 13000. Springer, Cham. https://doi.org/10.1007/978-3-030-89385-9_5

  • DOI: https://doi.org/10.1007/978-3-030-89385-9_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89384-2

  • Online ISBN: 978-3-030-89385-9

  • eBook Packages: Computer Science (R0)
