Abstract
Load forecast systems play a fundamental role in the operation of power systems, because they reduce uncertainty about the system’s future operation. An increasing demand for precise forecasts motivates the design of complex models that use information from different sources, such as smart appliances. However, untrusted sources can introduce vulnerabilities into the system. For example, an adversary may compromise the sensor measurements to induce errors in the forecast. In this work, we assess the vulnerabilities of load forecast systems based on neural networks and propose a defense mechanism to construct resilient forecasters.
We model the strategic interaction between a defender and an attacker as a Stackelberg game, in which the defender first decides the prediction scheme and the attacker then chooses its attack strategy. Here, the defender randomly selects the sensor measurements to use in the forecast, while the adversary calculates a bias to inject into some sensors. We find an approximate equilibrium of the game and implement the defense mechanism using an ensemble of predictors, which introduces uncertainties that mitigate the attack’s impact. We evaluate our defense approach by training forecasters with data from an electric distribution system simulated in GridLAB-D.
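The defense described above, an ensemble in which each predictor sees a randomly chosen subset of sensors and the individual forecasts are aggregated, can be illustrated with a minimal sketch. The predictor, sensor names, and parameters below are hypothetical stand-ins (the paper trains neural networks; here a simple mean plays that role), so this shows only the randomization-and-aggregation idea, not the actual forecast model.

```python
import random
import statistics

def forecast(readings):
    # Hypothetical stand-in for a trained predictor: the mean load reading.
    return sum(readings) / len(readings)

def ensemble_forecast(measurements, n_models=5, subset_size=3, seed=0):
    """Each model sees a random subset of sensors; the median aggregates
    the individual predictions, limiting the influence of biased sensors."""
    rng = random.Random(seed)
    sensors = list(measurements)
    preds = []
    for _ in range(n_models):
        subset = rng.sample(sensors, subset_size)
        preds.append(forecast([measurements[s] for s in subset]))
    return statistics.median(preds)

clean = {f"s{i}": 10.0 for i in range(6)}
attacked = dict(clean, s0=100.0)  # adversary injects a bias into one sensor

print(ensemble_forecast(clean))     # every model predicts 10.0
print(ensemble_forecast(attacked))  # only subsets containing s0 are biased
```

Because the adversary does not know which sensors each model uses, some models in the ensemble remain unbiased, and the median keeps the aggregate forecast close to the clean prediction.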
Notes
- 1. These constraints prevent damage to the equipment and the environment. For example, generators may have operational limitations to prevent emissions or to regulate the use of water (for hydroelectric plants) [16].
- 2. Although the impact depends on the particular model, that is, the set \(\mathcal {M}^d\), we assume that models with the same number of sensors \(m^d\) have the same impact function.
- 3. Other forecast models make predictions using less information (e.g., the aggregate loads); hence, their accuracy decreases significantly with fewer loads [25].
References
Alfeld, S., Zhu, X., Barford, P.: Data poisoning attacks against autoregressive models. In: Thirtieth AAAI Conference on Artificial Intelligence (2016)
Amini, S., Pasqualetti, F., Mohsenian-Rad, H.: Dynamic load altering attacks against power system stability: attack models and protection schemes. IEEE Trans. Smart Grid 9(4), 2862–2872 (2016)
Barreto, C., Cardenas, A.: Impact of the market infrastructure on the security of smart grids. IEEE Trans. Ind. Inform. 1 (2018)
Chen, Y., Tan, Y., Zhang, B.: Exploiting vulnerabilities of load forecasting through adversarial attacks. In: Proceedings of the Tenth ACM International Conference on Future Energy Systems, e-Energy 2019, pp. 1–11 (2019)
Choi, D.H., Xie, L.: Economic impact assessment of topology data attacks with virtual bids. IEEE Trans. Smart Grid 9(2), 512–520 (2016)
Chollet, F., et al.: Keras (2015). https://keras.io
Dhillon, G.S., et al.: Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442 (2018)
Esmalifalak, M., Nguyen, H., Zheng, R., Xie, L., Song, L., Han, Z.: A stealthy attack against electricity market using independent component analysis. IEEE Syst. J. 12(1), 297–307 (2015)
Fudenberg, D., Tirole, J.: Game Theory. The MIT Press, Cambridge (1991)
Hernandez, L., et al.: A survey on electric power demand forecasting: future trends in smart grids, microgrids and smart buildings. IEEE Commun. Surv. Tutor. 16(3), 1460–1495 (2014)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
Hyndman, R.J., Koehler, A.B.: Another look at measures of forecast accuracy. Int. J. Forecast. 22(4), 679–688 (2006)
Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., Madry, A.: Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175 (2019)
Jia, L., Thomas, R.J., Tong, L.: Malicious data attack on real-time electricity market. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5952–5955 (2011)
Jones, E., Oliphant, T., Peterson, P., et al.: SciPy: open source scientific tools for Python (2001). http://www.scipy.org/
Kirschen, D.S., Strbac, G.: Fundamentals of Power System Economics. Wiley, Hoboeken (2004)
Klebanov, L.B., Rachev, S.T., Fabozzi, F.J.: Robust and Non-robust Models in Statistics. Nova Science Publishers, Hauppauge (2009)
Lecuyer, M., Atlidakis, V., Geambasu, R., Hsu, D., Jana, S.: Certified robustness to adversarial examples with differential privacy. arXiv preprint arXiv:1802.03471 (2018)
Liu, C., Zhou, M., Wu, J., Long, C., Kundur, D.: Financially motivated FDI on SCED in real-time electricity markets: attacks and mitigation. IEEE Trans. Smart Grid 10(2), 1949–1959 (2019)
Liu, X., Cheng, M., Zhang, H., Hsieh, C.-J.: Towards robust neural networks via random self-ensemble. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11211, pp. 381–397. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01234-2_23
Liu, Y., Ning, P., Reiter, M.K.: False data injection attacks against state estimation in electric power grids. In: Proceedings of the 16th ACM Conference on Computer and Communications Security, CCS 2009, pp. 21–32 (2009)
Nudell, T.R., Annaswamy, A.M., Lian, J., Kalsi, K., D’Achiardi, D.: Electricity markets in the United States: a brief history, current operations, and trends. In: Stoustrup, J., Annaswamy, A., Chakrabortty, A., Qu, Z. (eds.) Smart Grid Control. PEPS, pp. 3–27. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-98310-3_1
Papernot, N., McDaniel, P., Goodfellow, I.: Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 (2016)
Schneider, K.P., Chen, Y., Chassin, D.P., Pratt, R.G., Engel, D.W., Thompson, S.E.: Modern grid initiative distribution taxonomy final report. Technical report, Pacific Northwest National Laboratory (2008)
Sevlian, R., Rajagopal, R.: A scaling law for short term load forecasting on varying levels of aggregation. Int. J. Electr. Power Energy Syst. 98, 350–361 (2018)
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
Tan, S., Song, W.Z., Stewart, M., Yang, J., Tong, L.: Online data integrity attacks against real-time electrical market in smart grid. IEEE Trans. Smart Grid 9(1), 313–322 (2016)
Xie, L., Mo, Y., Sinopoli, B.: Integrity data attacks in power market operations. IEEE Trans. Smart Grid 2(4), 659–666 (2011)
A Appendix
Proof
(Lemma 1). Assume that \(\delta (B_a)\) and \(p^{DA} - p^{RT}\) are independent random variables. We can approximate their expected values using a Monte Carlo integration with T terms, that is,
Now, since two independent random variables X and Y satisfy \(\mathrm {E}[XY] = \mathrm {E}[X]\mathrm {E}[Y]\), we can approximate their expected product \(\mathrm {E}[\delta (B_a)(p^{DA} - p^{RT})]\) as
Thus, if either \(\sum _t p^{DA}(t) - p^{RT}(t) \ge 0\) and \(\sum _t \delta (B_a(t)) \le 0\) or \(\sum _t p^{DA}(t) - p^{RT}(t) \le 0\) and \(\sum _t \delta (B_a(t)) \ge 0\), then the attacker has positive profit (see Eq. (7)).
Proof
(Proposition 1). Let us consider the following bounds on the difference between expected impact and its approximation from Eq. (12)
Since \(\varPi ^d (\rho ^d, \rho ^a) = -\varPi ^a(\rho ^d, \rho ^a)\), then the previous expression implies
and
Moreover, the solution to Eq. (13), denoted \((\rho ^d, \rho ^a)\), satisfies the following properties
for some strategies \(\tilde{\rho }^d\) and \(\tilde{\rho }^a\). Thus, from Eqs. (15) and (17) we have
Now, using the previous expression with Eq. (16) we obtain
where \(\xi = \overline{\xi } - \underline{\xi } \ge 0\). With a similar approach we can show that
Proof
(Proposition 2). Since \(\delta (\cdot )\) is increasing with respect to the number of sensors compromised, the following holds
The previous property can be applied also to minimization problems; hence, we can express the game’s equilibrium of Eq. (12) as
This means that the adversary designs its strategy to maximize the number of compromised sensors, while the defender pursues the opposite goal.
The adversary’s optimal strategy consists in attacking the sensors with highest selection probability. Without loss of generality, let \(\rho _1^d \ge \rho _2^d \ge \ldots \ge \rho _m^d\). Then, the attack strategy \(\rho _i^a = 1\) and \(\rho _j^a = 0\) for \(1\le i \le m^a\) and \(j>m^a\) leads to the following expected number of compromised sensors
Since a different attack strategy cannot increase the number of compromised sensors, this attack strategy is weakly dominant.
Given the previous attack strategy, the defender’s optimal strategy consists in selecting all the sensors with the same probability
Observe that any deviation from this strategy increases the number of sensors compromised.
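The equilibrium of Proposition 2 can be illustrated numerically: the attacker's best response attacks the \(m^a\) sensors with the highest selection probability, and a uniform defense minimizes the value of that best response. The specific numbers below are hypothetical; both defense strategies allocate the same total selection probability \(m^d\).

```python
def attacker_best_response(rho_d, m_a):
    """Expected number of compromised sensors used by the model when the
    adversary attacks the m_a sensors with highest selection probability."""
    return sum(sorted(rho_d, reverse=True)[:m_a])

m, m_d, m_a = 8, 4, 2
uniform = [m_d / m] * m  # every sensor selected with probability m^d / m
# A non-uniform defense whose probabilities also sum to m^d = 4:
skewed = [0.9, 0.9, 0.55, 0.55, 0.3, 0.3, 0.25, 0.25]

print(attacker_best_response(uniform, m_a))  # 1.0
print(attacker_best_response(skewed, m_a))   # 1.8
```

Any deviation from the uniform strategy concentrates probability on some sensors, and the attacker exploits exactly those sensors, increasing the expected number of compromised measurements, as the proof states.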
Proof
(Proposition 3). Here we consider that the adversary compromises \(m^a\) sensors. Following Sect. 4, we create a partition of sensors \(\{\mathcal {P}_i\}_{i=1}^n\) and let \(\sigma _i\) be the proportion of resources allocated to the set \(\mathcal {P}_i\). First, let us consider ensembles trained with sensors in \(\mathcal {M}_i = \cup _{j \ne i} \mathcal {P}_j\), for \(i=1, \ldots , n\), where \(\frac{n-1}{n}=\frac{ m^d }{ m }\). Thus, the total number of compromised sensors used by the \(i^{th}\) model amounts to \( m^a \sum \nolimits _{j\ne i} \sigma _j = m^a(1-\sigma _i). \)
Due to the concavity of the impact function, the expected impact on the ensemble satisfies
Thus, the allocation that maximizes the impact attains the previous upper bound satisfying \( \sigma _i = \frac{1}{n}, \) for all \(i=1, \ldots , n\). In other words, the adversary’s best strategy consists in allocating its resources uniformly in the partition’s sets.
Now, if \(\mathcal {M}_i = \mathcal {P}_i\), for \(i=1, \ldots , n\), with \(n=\frac{m}{m^d}\), then the expected impact on the ensemble becomes
In this case, the attack strategy that attains the upper bound satisfies \(\sigma _i = \frac{1}{n}\). Therefore, the adversary allocates its resources equally in all the sensors in the partition.
In practice, the adversary can compromise at most \(m^d\) sensors from each partition set. Hence, the optimal attack policy must satisfy \(\sigma _i = \min \{ 1 / n, m^d / m^a \}\). When \(1 / n > m^d / m^a\) the adversary cannot implement its ideal strategy.
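The role of concavity in Proposition 3 can be checked numerically. With a concave, increasing stand-in for the impact function \(\delta\) (here \(\sqrt{\cdot}\), a hypothetical choice for illustration), the uniform allocation \(\sigma_i = 1/n\) yields a larger expected impact on the ensemble than any skewed allocation, consistent with Jensen's inequality.

```python
import math

def impact(k):
    # Concave, increasing stand-in for the impact function delta.
    return math.sqrt(k)

def expected_impact(sigma, m_a):
    """Ensemble where model i is trained on partition set P_i: the i-th
    model sees m_a * sigma_i compromised sensors, each model weighted 1/n."""
    n = len(sigma)
    return sum(impact(m_a * s) for s in sigma) / n

m_a, n = 6, 3
uniform = [1 / n] * n          # adversary spreads resources evenly
skewed = [0.6, 0.3, 0.1]       # same total resources, concentrated

print(expected_impact(uniform, m_a))
print(expected_impact(skewed, m_a))  # strictly smaller, by concavity
```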
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Barreto, C., Koutsoukos, X. (2019). Design of Load Forecast Systems Resilient Against Cyber-Attacks. In: Alpcan, T., Vorobeychik, Y., Baras, J., Dán, G. (eds) Decision and Game Theory for Security. GameSec 2019. Lecture Notes in Computer Science(), vol 11836. Springer, Cham. https://doi.org/10.1007/978-3-030-32430-8_1
DOI: https://doi.org/10.1007/978-3-030-32430-8_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-32429-2
Online ISBN: 978-3-030-32430-8