Abstract
The command and control of teams of autonomous vehicles provides a strong model for the control of cyber-physical systems in general. Using the definition of command and control for military systems, we can recognize the requirements for the operational control of many systems and see some of the problems that must be resolved. Among these problems is the need to distinguish between aberrant behaviors and optimal but quirky behaviors, so that the human commander can determine whether the behaviors conform to standards and align with mission goals. Similarly, the commander must be able to recognize when goals will not be met in order to reapportion the assets available to the system. Robustness in the face of a highly variable environment can be achieved through machine learning, but only in a way that keeps the learned tactics recognizable as correct. Finally, because cyber-physical systems involve decisions that must be made at great speed, we consider the use of the Rainbow framework for autonomics to provide rapid yet robust command and control at pace.
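The abstract's closing point, that projected goal shortfalls must trigger rapid reapportionment of assets, is the kind of cycle that Rainbow-style architecture-based self-adaptation automates. The following minimal sketch is not taken from the paper; all names such as MissionGoal, analyze, and reapportion are hypothetical illustrations of one way a monitor-analyze-adapt loop over mission goals and assets could look.

```python
# Minimal sketch (hypothetical names) of a Rainbow-style monitor/analyze/adapt
# loop: gauges report per-goal progress, the analysis step flags goals that are
# projected to fall short, and the adaptation step reapportions assets to them.
from dataclasses import dataclass, field


@dataclass
class MissionGoal:
    name: str
    required_progress: float        # progress needed by the deadline (0..1)
    observed_progress: float        # progress reported by monitoring so far (0..1)
    assets: list = field(default_factory=list)

    def projected_shortfall(self) -> float:
        """Positive value means the goal is projected not to be met."""
        return max(0.0, self.required_progress - self.observed_progress)


def analyze(goals):
    """Analysis step: separate goals on track from goals projected to fail."""
    failing = [g for g in goals if g.projected_shortfall() > 0]
    healthy = [g for g in goals if g.projected_shortfall() == 0]
    return failing, healthy


def reapportion(failing, healthy):
    """Adaptation step: move one asset from each healthy goal with spare
    capacity to the goal with the largest projected shortfall."""
    failing.sort(key=MissionGoal.projected_shortfall, reverse=True)
    for donor in healthy:
        if not failing:
            break
        if len(donor.assets) > 1:   # keep at least one asset on the donor's task
            neediest = failing[0]
            neediest.assets.append(donor.assets.pop())
            print(f"moved asset from '{donor.name}' to '{neediest.name}'")


if __name__ == "__main__":
    goals = [
        MissionGoal("survey-area-A", 0.9, 0.4, assets=["uav-1"]),
        MissionGoal("survey-area-B", 0.5, 0.7, assets=["uav-2", "uav-3"]),
    ]
    failing, healthy = analyze(goals)   # monitor/analyze
    reapportion(failing, healthy)       # plan/execute adaptation
```

In the full Rainbow framework the monitoring, analysis, and adaptation roles are realized by probes, gauges, and architecture-level adaptation strategies rather than a single in-process loop; the sketch only shows the shape of the decision the commander would otherwise have to make by hand.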
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Lange, D.S., Verbancsics, P., Gutzwiller, R.S., Reeder, J., Sarles, C. (2012). Command and Control of Teams of Autonomous Systems. In: Calinescu, R., Garlan, D. (eds) Large-Scale Complex IT Systems. Development, Operation and Management. Monterey Workshop 2012. Lecture Notes in Computer Science, vol 7539. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34059-8_4
DOI: https://doi.org/10.1007/978-3-642-34059-8_4
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-34058-1
Online ISBN: 978-3-642-34059-8
eBook Packages: Computer Science (R0)