Abstract
In this chapter, we discuss the control-theoretic approach to cyber-security. Under the control-theoretic approach, the defender prescribes defense actions in response to security alert information that is generated as the attacker progresses through the network. This feedback information is inherently noisy, resulting in the defender being uncertain of the underlying status of the network. Two complementary approaches for handling the defender’s uncertainty are discussed. First, we consider the probabilistic case where the defender’s uncertainty can be quantified by probability distributions. In this setting, the defender aims to specify defense actions that minimize the expected loss. Second, we study the nondeterministic case where the defender is unable to reason about the relative likelihood of events. The appropriate performance criterion in this setting is minimization of the worst-case damage (minmax). The probabilistic approach gives rise to efficient computational procedures (namely sampling-based approaches) for finding an optimal defense policy, but requires modeling assumptions that may be difficult to justify in real-world cyber-security settings. On the other hand, the nondeterministic approach reduces the modeling burden but results in a significantly harder computational problem.
This work is partially supported by the Army Research Office under grant W911NF-13-1-0421.
Notes
1. Such systems are referred to as intrusion response systems in the cyber-security literature; see [1] for a review of the area.
2. In some control settings, the "decision maker" may actually consist of a collection of agents making decisions based on their own localized information in order to achieve some common objective. Such problems still fall within the realm of control theory, due to all agents having an identical objective, but are referred to as decentralized control problems or team problems [2, 3].
3. For a deeper discussion of this issue, see the related topic of vulnerability disclosure policies [5].
4. For a deeper discussion of information structures and information states, see [9].
5. This is a special case of a general probabilistic automaton where the dynamics are assumed to be Markovian.
6. The uppercase notation, \(X_{t}\), is used to represent a random variable.
7. Compared to the probabilistic approach, one keeps track of only the support of the distribution, not the likelihoods.
8. Such settings are sometimes referred to as games against nature in the literature [31]; however, since no strategy is assumed for the attacker (nature), the attacker is not viewed as an active decision maker, and thus we view the problem in the context of control theory.
9. For the infinite horizon case, \(\mathscr {C}= \big [0,\frac{\bar{c}}{1-\beta }\big ]\).
10. In other words, \(\mathscr {O}_t\) is the range of the functions \(w\mapsto l_t(x_t,w)\).
References
Miehling, E., Rasouli, M., Teneketzis, D.: A POMDP approach to the dynamic defense of large-scale cyber networks. IEEE Trans. Inf. Forensics Secur. 13(10), 2490–2505 (2018)
Marschak, J., Radner, R.: Economic Theory of Teams. Yale University Press, New Haven (1972)
Ho, Y.-C., Kastner, M., Wong, E.: Teams, signaling, and information theory. IEEE Trans. Autom. Control 23(2), 305–312 (1978)
Gorenc, B., Sands, F.: Hacker machine interface: the state of SCADA HMI vulnerabilities. Technical report, Trend Micro Zero Day Initiative Team (2017)
Arora, A., Telang, R., Xu, H.: Optimal policy for software vulnerability disclosure. Manage. Sci. 54(4), 642–656 (2008)
Shostack, A.: Threat Modeling: Designing for Security. Wiley, Hoboken (2014)
Kumar, P.R., Varaiya, P.: Stochastic Systems: Estimation, Identification, and Adaptive Control. Prentice Hall, Upper Saddle River (1986)
Sutton, R.S., Barto, A.G., Williams, R.J.: Reinforcement learning is direct adaptive optimal control. IEEE Control Syst. 12(2), 19–22 (1992)
Mahajan, A., Martins, N.C., Rotkowitz, M.C., Yüksel, S.: Information structures in optimal decentralized control. In: 51st Annual Conference on Decision and Control (CDC), pp. 1291–1306. IEEE (2012)
Mahajan, A., Mannan, M.: Decentralized stochastic control. Ann. Oper. Res. 241(1–2), 109–126 (2016)
van Schuppen, J.H.: Information structures. In: van Schuppen, J.H., Villa, T. (eds.) Coordination Control of Distributed Systems. LNCIS, vol. 456, pp. 197–204. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-10407-2_24
Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957)
Bertsekas, D.P.: Dynamic Programming and Optimal Control, vol. 1. Athena Scientific, Belmont (1995)
Shameli-Sendi, A., Ezzati-Jivan, N., Jabbarifar, M., Dagenais, M.: Intrusion response systems: survey and taxonomy. Int. J. Comput. Sci. Netw. Secur. 12(1), 1–14 (2012)
Iannucci, S., Abdelwahed, S.: A probabilistic approach to autonomic security management. In: IEEE International Conference on Autonomic Computing (ICAC), pp. 157–166. IEEE (2016)
Iannucci, S., et al.: A model-integrated approach to designing self-protecting systems. IEEE Trans. Software Eng. (Early Access) (2018)
Lewandowski, S.M., Van Hook, D.J., O’Leary, G.C., Haines, J.W., Rossey, L.M.: SARA: Survivable autonomic response architecture. In: DARPA Information Survivability Conference & Exposition II (DISCEX), vol. 1, pp. 77–88. IEEE (2001)
Kreidl, O.P., Frazier, T.M.: Feedback control applied to survivability: a host-based autonomic defense system. IEEE Trans. Reliab. 53(1), 148–166 (2004)
Musman, S., Booker, L., Applebaum, A., Edmonds, B.: Steps toward a principled approach to automating cyber responses. In: Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, vol. 11006, pp. 1–15. International Society for Optics and Photonics (2019)
Speicher, P., Steinmetz, M., Hoffmann, J., Backes, M., Künnemann, R.: Towards automated network mitigation analysis. In: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pp. 1971–1978. ACM, New York (2019)
Miehling, E., Rasouli, M., Teneketzis, D.: Optimal defense policies for partially observable spreading processes on Bayesian attack graphs. In: Proceedings of the Second ACM Workshop on Moving Target Defense, pp. 67–76. ACM (2015)
Rasouli, M., Miehling, E., Teneketzis, D.: A supervisory control approach to dynamic cyber-security. In: Poovendran, R., Saad, W. (eds.) GameSec 2014. LNCS, vol. 8840, pp. 99–117. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12601-2_6
Rasouli, M., Miehling, E., Teneketzis, D.: A scalable decomposition method for the dynamic defense of cyber networks. In: Rass, S., Schauer, S. (eds.) Game Theory for Security and Risk Management. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-75268-6_4
Smallwood, R.D., Sondik, E.J.: The optimal control of partially observable Markov processes over a finite horizon. Oper. Res. 21(5), 1071–1088 (1973)
Albanese, M., Jajodia, S., Noel, S.: Time-efficient and cost-effective network hardening using attack graphs. In: 42nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 1–12. IEEE (2012)
Silver, D., Veness, J.: Monte-Carlo planning in large POMDPs. In: Advances in Neural Information Processing Systems, pp. 2164–2172 (2010)
Besold, T.R., Garcez, A.A., Stenning, K., van der Torre, L., van Lambalgen, M.: Reasoning in non-probabilistic uncertainty: logic programming and neural-symbolic computing as examples. Minds Mach. 27(1), 37–77 (2017)
Witsenhausen, H.: Sets of possible states of linear systems given perturbed observations. IEEE Trans. Autom. Control 13(5), 556–558 (1968)
Schweppe, F.: Recursive state estimation: unknown but bounded errors and system inputs. IEEE Trans. Autom. Control 13(1), 22–28 (1968)
Bertsekas, D.P.: Control of uncertain systems with a set-membership description of the uncertainty. Technical report, DTIC Document (1971)
Milnor, J.: Games against nature. In: Coombs, C.H., Davis, R.L., Thrall, R.M. (eds.) Decision Processes, pp. 49–60. Wiley, Hoboken (1954)
Akian, M., Quadrat, J.P., Viot, M.: Bellman processes. In: Cohen, G., Quadrat, J.P. (eds.) 11th International Conference on Analysis and Optimization of Systems Discrete Event Systems. LNCIS, vol. 199, pp. 302–311. Springer, Berlin, Heidelberg (1994). https://doi.org/10.1007/BFb0033561
Bernhard, P.: Expected values, feared values, and partial information optimal control. In: Olsder, G.J. (ed.) New Trends in Dynamic Games and Applications. AISDG, vol. 3, pp. 3–24. Birkhäuser Boston, Basel (1995). https://doi.org/10.1007/978-1-4612-4274-1_1
Bernhard, P.: A separation theorem for expected value and feared value discrete time control. ESAIM: Control Optimisation Calc. Var. 1, 191–206 (1996)
Akian, M., Quadrat, J.-P., Viot, M.: Duality between probability and optimization. Idempotency 11, 331–353 (1998)
Bernhard, P.: Minimax - or feared value - \(L1/L\infty \) control. Theoret. Comput. Sci. 293(1), 25–44 (2003)
Başar, T., Bernhard, P.: H-Infinity Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Springer, Cham (2008)
Weiss, K., Khoshgoftaar, T.M., Wang, D.: A survey of transfer learning. J. Big Data 3(1), 9 (2016)
Oh, J., Singh, S., Lee, H., Kohli, P.: Zero-shot task generalization with multi-task deep reinforcement learning. In: Proceedings of the 34th International Conference on Machine Learning, JMLR, pp. 2661–2670 (2017)
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Miehling, E., Rasouli, M., Teneketzis, D. (2019). Control-Theoretic Approaches to Cyber-Security. In: Jajodia, S., Cybenko, G., Liu, P., Wang, C., Wellman, M. (eds) Adversarial and Uncertain Reasoning for Adaptive Cyber Defense. Lecture Notes in Computer Science(), vol 11830. Springer, Cham. https://doi.org/10.1007/978-3-030-30719-6_2
Print ISBN: 978-3-030-30718-9
Online ISBN: 978-3-030-30719-6