
Network Games with Strategic Machine Learning

  • Conference paper
Decision and Game Theory for Security (GameSec 2021)

Abstract

In this paper, we study the strategic machine learning problem with a planner (decision maker) and multiple agents. The planner is the first mover, who designs, publishes, and commits to a decision rule. The agents then best respond by manipulating their input features so as to obtain desirable decision outcomes and maximize their utilities. Earlier works in strategic machine learning assume that each agent’s strategic action is independent of the others’. By contrast, we consider the case where agents are connected in a network and can benefit either from their neighbors’ positive decision outcomes or directly from their neighbors’ actions. We study the Stackelberg equilibrium in this new setting and highlight the similarities and differences between this model and the literature on network/graphical games and strategic machine learning.


References

  1. Bramoullé, Y., Kranton, R., D’Amours, M.: Strategic interaction and networks. Am. Econ. Rev. 104(3), 898–930 (2014)

  2. Brückner, M., Kanzow, C., Scheffer, T.: Static prediction games for adversarial learning problems. J. Mach. Learn. Res. 13, 2617–2654 (2012)

  3. Chen, Y., Wang, J., Liu, Y.: Strategic classification with a light touch: learning classifiers that incentivize constructive adaptation (2021)

  4. Chen, Y., Podimata, C., Procaccia, A.: Strategyproof linear regression in high dimensions, pp. 9–26 (2018). https://doi.org/10.1145/3219166.3219175

  5. Galeotti, A., Golub, B., Goyal, S.: Targeting interventions in networks. SSRN Electron. J. (2017). https://doi.org/10.2139/ssrn.3054353

  6. Haghtalab, N., Immorlica, N., Lucier, B., Wang, J.: Maximizing welfare with incentive-aware evaluation mechanisms, pp. 160–166 (2020). https://doi.org/10.24963/ijcai.2020/23

  7. Hamilton, W., Ying, R., Leskovec, J.: Inductive representation learning on large graphs (2017)

  8. Hardt, M., Megiddo, N., Papadimitriou, C., Wootters, M.: Strategic classification, pp. 111–122 (2016). https://doi.org/10.1145/2840728.2840730

  9. Hu, L., Immorlica, N., Vaughan, J.: The disparate effects of strategic manipulation, pp. 259–268 (2019). https://doi.org/10.1145/3287560.3287597

  10. Kipf, T., Welling, M.: Semi-supervised classification with graph convolutional networks (2016)

  11. Kleinberg, J., Raghavan, M.: How do classifiers induce agents to invest effort strategically? ACM Trans. Econ. Comput. 8, 1–23 (2020). https://doi.org/10.1145/3417742

  12. La, R.J.: Interdependent security with strategic agents and cascades of infection. IEEE/ACM Trans. Netw. 24(3), 1378–1391 (2016)

  13. Miller, J., Milli, S., Hardt, M.: Strategic classification is causal modeling in disguise. In: Proceedings of the 37th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 119, pp. 6917–6926. PMLR (2020)

  14. Milli, S., Miller, J., Dragan, A., Hardt, M.: The social cost of strategic classification, pp. 230–239 (2019). https://doi.org/10.1145/3287560.3287576

  15. Naghizadeh, P., Liu, M.: Budget balance or voluntary participation? Incentivizing investments in interdependent security games (2014). https://doi.org/10.1109/ALLERTON.2014.7028578

  16. Naghizadeh, P., Liu, M.: Exit equilibrium: towards understanding voluntary participation in security games, pp. 1–9 (2016). https://doi.org/10.1109/INFOCOM.2016.7524353

  17. Naghizadeh, P., Liu, M.: Opting out of incentive mechanisms: a study of security as a non-excludable public good. IEEE Trans. Inf. Forensics Secur. 11, 2790–2803 (2016). https://doi.org/10.1109/TIFS.2016.2599005

  18. Naghizadeh, P., Liu, M.: On the uniqueness and stability of equilibria of network games. In: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 280–286. IEEE (2017)

  19. Naghizadeh, P., Liu, M.: Provision of public goods on networks: on existence, uniqueness, and centralities. IEEE Trans. Netw. Sci. Eng. 5(3), 225–236 (2018)

  20. Parise, F., Ozdaglar, A.: A variational inequality framework for network games: existence, uniqueness, convergence and sensitivity analysis. Games Econ. Behav. 114, 47–82 (2019)

  21. Park, J., Schaar, M.: Intervention mechanism design for networks with selfish users (2010)

  22. Rebille, Y., Richefort, L.: Equilibrium uniqueness in network games with strategic substitutes (2012)

  23. Scutari, G., Facchinei, F., Pang, J.S., Palomar, D.P.: Real and complex monotone communication games. IEEE Trans. Inf. Theory 60(7), 4197–4231 (2014)

  24. Scutari, G., Palomar, D.P., Barbarossa, S.: Asynchronous iterative water-filling for Gaussian frequency-selective interference channels. IEEE Trans. Inf. Theory 54(7), 2868–2878 (2008)


Author information


Correspondence to Kun Jin, Tongxin Yin, Charles A. Kamhoua or Mingyan Liu.


Appendices

A Proof of Lemma 1

Proof

First of all, we have from Sect. 2 that \(\pmb {z}^{(i)} = P \pmb {x}^{(i)}\), and thus \(\pmb {w}^T \pmb {z}^{(i)} = \pmb {w}^T P \pmb {x}^{(i)}\).

Any manipulation action \(\pmb {x}^{(i)}\) that violates Lemma 1 is strictly dominated by its projection onto \(P^T \pmb {w}\): the projection leaves the decision outcome \(f(\pmb {z}^{(i)})\) unchanged while strictly reducing the action cost, so the utility strictly increases. Therefore, any best response must satisfy Lemma 1.
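
This projection argument is easy to check numerically. The following sketch is an editorial illustration, not part of the paper; \(P\), \(\pmb {w}\), and the action are made-up values.

```python
import numpy as np

# Made-up problem data: z = P x, planner's linear rule w.
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 4))
w = rng.normal(size=3)
x = rng.normal(size=4)                # an arbitrary manipulation action

d = P.T @ w
x_proj = (x @ d) / (d @ d) * d        # projection of x onto P^T w

assert np.isclose(w @ (P @ x), w @ (P @ x_proj))    # same decision value
assert np.linalg.norm(x_proj) <= np.linalg.norm(x)  # weakly lower cost
```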

B Proof of Proposition 1

Proof

We can rewrite the utility function of \(a_i\) as follows

$$\begin{aligned} u_i = \pmb {w}^T P \pmb {x}^{(i)} + \pmb {w}^T P \pmb {x}^{(i)} \cdot \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {w}^T P \pmb {x}^{(j)} - \frac{1}{2} ||\pmb {x}^{(i)}||^2. \end{aligned}$$

From Lemma 1, we can denote \((\pmb {x}^{(i)})^* = \alpha _i \frac{P^T \pmb {w}}{||P^T \pmb {w}||}\), and then

$$\begin{aligned} u_i = ||P^T \pmb {w}|| \alpha _i + ||P^T \pmb {w}||^2 \alpha _i \sum _{j \in \mathcal {N}_i} g_{ij} \alpha _j - \frac{1}{2} \alpha _i^2, \end{aligned}$$

and thus from the first-order derivative

$$\begin{aligned} \frac{\partial u_i}{\partial \alpha _i} = - \alpha _i + ||P^T \pmb {w}|| + ||P^T \pmb {w}||^2 \sum _{j \in \mathcal {N}_i} g_{ij} \alpha _j, \end{aligned}$$

we know that the agents have a unique Nash equilibrium

$$\begin{aligned} \pmb {\alpha }^* = ||P^T \pmb {w}|| (I - ||P^T \pmb {w}||^2 G)^{-1} \pmb {1}, \end{aligned}$$

if \(I - ||P^T \pmb {w}||^2 G\) is positive definite.
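
As a sanity check, the closed form can be verified against the first-order condition on a small made-up network (an editorial sketch; the matrices below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 4))
w = 0.1 * rng.normal(size=3)                 # small w keeps I - l^2 G PD
G = np.array([[0, .2, 0],
              [.2, 0, .3],
              [0, .3, 0]])                   # symmetric weights g_ij

l = np.linalg.norm(P.T @ w)
M = np.eye(3) - l**2 * G
assert np.all(np.linalg.eigvalsh(M) > 0)     # positive definite

alpha = l * np.linalg.solve(M, np.ones(3))   # alpha* from Proposition 1
grad = -alpha + l + l**2 * (G @ alpha)       # d u_i / d alpha_i
assert np.allclose(grad, 0)                  # first-order condition holds
```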

C Proof of Proposition 2

Proof

Since \(G \succ \pmb {0}\) and \((I - r^2 G) \succ \pmb {0}\), we can write out the eigendecomposition \(G = V \varLambda V^T\), where \(\varLambda = \mathbf{diag} (\pmb {\lambda })\) and \(\lambda _1 \ge \lambda _2 \ge \dots \ge \lambda _N > 0\). Letting \(l := ||P^T \pmb {w}|| \le r\), we denote \(\tilde{\lambda }_i := (1 - l^2 \lambda _i)^{-1} > 0\) and \(\tilde{\varLambda } := \mathbf{diag} (\tilde{\pmb {\lambda }})\); then

$$\begin{aligned} \pmb {\alpha }^* = l \cdot V \tilde{\varLambda } V^T \pmb {1}. \end{aligned}$$

For the planner, the problem is equivalent to maximizing

$$\begin{aligned} \sum _{i=1}^N \pmb {q}^T \pmb {x}^*_i = \pmb {1}^T \pmb {\alpha }^* \cdot \frac{\pmb {q}^T (P^T \pmb {w})}{||P^T \pmb {w}||} = \pmb {1}^T V \tilde{\varLambda } V^T \pmb {1} \cdot \pmb {q}^T (P^T \pmb {w}), \end{aligned}$$

which is monotonically increasing in l (since both \(\pmb {1}^T V \tilde{\varLambda } V^T \pmb {1}\) and \(\pmb {q}^T (P^T \pmb {w})\) are positive and monotonically increasing in l) and thus the planner’s optimal linear mechanism satisfies \(||P^T \pmb {w}|| = l = r\).

Since the first term depends on \(\pmb {w}\) only through \(l\), it remains to choose the direction of \(\pmb {w}\) to maximize \(\pmb {w}^T (P \pmb {q})\); the objective is maximized when \(\pmb {w}\) and \(P \pmb {q}\) have cosine similarity 1.
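
The monotonicity of the first term in \(l\) can also be observed numerically (an illustrative sketch with an assumed positive definite \(G\); not from the paper):

```python
import numpy as np

G = np.array([[.5, .2],
              [.2, .5]])                   # positive definite network
lam, V = np.linalg.eigh(G)                 # G = V diag(lam) V^T
r = 0.9 / np.sqrt(lam.max())               # ensures I - r^2 G is PD

def first_term(l):
    lam_t = 1.0 / (1.0 - l**2 * lam)       # tilde(lambda)_i
    return np.ones(2) @ V @ np.diag(lam_t) @ V.T @ np.ones(2)

ls = np.linspace(0.1 * r, r, 50)
vals = [first_term(l) for l in ls]
assert all(a <= b for a, b in zip(vals, vals[1:]))   # increasing in l
```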

D Proof of Lemma 2

Proof

We have

$$\begin{aligned} f(\pmb {z}^{(i)}) = \pmb {w}^T P (\pmb {x}^{(i)} + \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {x}^{(j)}) = (P^T \pmb {w})^T \pmb {x}^{(i)} + (P^T \pmb {w})^T \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {x}^{(j)}, \end{aligned}$$

where the second term remains the same when \(a_i\)’s neighbors’ actions are fixed. For an arbitrary \(\pmb {x}^{(i)} \ge 0\) with \(s_{cos}(\pmb {x}^{(i)}, \pmb {e}_k) \ne 1\), we can show that it is strictly dominated.

Since \(s_{cos}(\pmb {x}^{(i)}, \pmb {e}_k) \ne 1\), there exists a dimension t such that \(\frac{(P^T \pmb {w})_t}{c_t} < \frac{(P^T \pmb {w})_k}{c_k}\) and \((\pmb {x}^{(i)})_t > 0\). We consider

$$\begin{aligned} (\pmb {x}^{(i)})' = \pmb {x}^{(i)} - (\pmb {x}^{(i)})_t \pmb {e}_t + \frac{(\pmb {x}^{(i)})_t c_t}{c_k} \pmb {e}_k, \end{aligned}$$

which moves the investment in dimension t into dimension k at equal cost. It is then not hard to see that the action cost remains the same,

$$\begin{aligned} \pmb {c}^T (\pmb {x}^{(i)})' = \sum _{r=1}^K c_r (\pmb {x}^{(i)})'_r = \pmb {c}^T \pmb {x}^{(i)}, \end{aligned}$$

while \((\pmb {x}^{(i)})'\) achieves a strictly higher decision outcome since

$$\begin{aligned} \sum _{r=1}^K (P^T \pmb {w})_r (\pmb {x}^{(i)})'_r > \sum _{r=1}^K (P^T \pmb {w})_r (\pmb {x}^{(i)})_r. \end{aligned}$$

This means that \((\pmb {x}^{(i)})'\) strictly dominates \(\pmb {x}^{(i)}\). Repeating the argument, investing in any dimension other than k is suboptimal for a rational agent, and thus any best response must satisfy Lemma 2.
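
The reallocation step can be checked with a toy example (editorial sketch; the vectors below are made up):

```python
import numpy as np

d = np.array([1.0, 3.0, 0.5])      # stands in for P^T w
c = np.array([1.0, 1.0, 1.0])      # per-dimension action costs
k = int(np.argmax(d / c))          # best-ratio dimension (here k = 1)

x = np.array([2.0, 0.0, 1.0])      # invests in suboptimal dimensions
x2 = np.zeros_like(x)
x2[k] = (c @ x) / c[k]             # same budget, all in dimension k

assert np.isclose(c @ x, c @ x2)   # equal action cost
assert d @ x2 > d @ x              # strictly higher decision outcome
```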

E Proof of Proposition 3

Proof

The direction of \((\pmb {x}^{(i)})^*\) follows from Lemma 2; it remains to show the expression for \(\pmb {\alpha }^*\) and the uniqueness claim.

We can rewrite the utility functions as follows

$$\begin{aligned} u^{(i)}(\alpha ^{(i)}, \pmb {\alpha }^{(-i)}) = b_i((P^T \pmb {w})_k [\alpha ^{(i)} + \sum _{j \in \mathcal {N}_i} g_{ij} \alpha ^{(j)}]) - c_k \alpha ^{(i)}. \end{aligned}$$
(22)

The first order derivatives are

$$\begin{aligned} \frac{\partial u^{(i)}}{\partial \alpha ^{(i)}} = (P^T \pmb {w})_k \, b'_i\big ((P^T \pmb {w})_k [\alpha ^{(i)} + \sum _{j \in \mathcal {N}_i} g_{ij} \alpha ^{(j)}]\big ) - c_k, \end{aligned}$$

and the corresponding first-order (complementarity) conditions give the LCP in Eq. (11).

The uniqueness result follows Theorem 1 of [19].
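
Since Eq. (11) is not reproduced here, the sketch below illustrates the LCP structure under one assumed benefit function, \(b_i(y) = \log (1+y)\); the network and parameters are made up, and best-response iteration is used in place of a dedicated LCP solver.

```python
import numpy as np

# With p := (P^T w)_k, the FOC p * b'(p * y) = c_k for b = log(1 + .)
# gives the aggregate-effort target y* = 1/c_k - 1/p for active agents.
G = np.array([[0, .3, 0],
              [.3, 0, .2],
              [0, .2, 0]])
p, c_k = 2.0, 1.0
y_star = 1.0 / c_k - 1.0 / p

alpha = np.zeros(3)
for _ in range(200):                          # contraction: rho(G) < 1
    alpha = np.maximum(0.0, y_star - G @ alpha)

active = alpha > 1e-9                         # LCP complementarity check:
assert np.allclose((alpha + G @ alpha)[active], y_star)
```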

F Proof of Lemma 3

Proof

Consider the following linear program

$$\begin{aligned} \text {maximize}_{\pmb {w} \in \mathbf {R}^K} ~&~ \frac{\pmb {w}^T \pmb {p}_k}{c_k} \nonumber \\ \text {subject to} ~&~ \frac{(P^T \pmb {w})_r}{c_r} \le 1, \forall r \\&~ \pmb {w} \ge 0 \nonumber \end{aligned}$$
(23)

If the optimal objective value in Eq. (23) is no less than 1, then \(\mathcal {L}_k\) is non-empty. The dual problem of Eq. (23) is

$$\begin{aligned} \text {minimize}_{\pmb {y} \in \mathbf {R}^M} ~&~ \pmb {y}^T \pmb {c} \nonumber \\ \text {subject to} ~&~ P \pmb {y} \ge \pmb {p}_k \\&~ \pmb {y} \ge 0 \nonumber \end{aligned}$$
(24)

We can rewrite the constraints in Eq. (24) as follows

$$\begin{aligned}{}[P \pmb {y}]_t \ge (\pmb {p}_k)_t \Leftrightarrow \sum _{r=1}^M p_{tr} y_r \ge p_{tk}, \end{aligned}$$
(25)

and thus we know from the definition of \(\kappa _k\) that Eq. (24) has optimal objective value 1. By LP duality, the optimal objective value in Eq. (23) is also 1, which shows that \(\mathcal {L}_k\) is non-empty. Moreover, the linear program in Eq. (23) can be solved in polynomial time, which concludes the proof.
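
The primal LP in Eq. (23) is straightforward to solve with an off-the-shelf solver; a minimal sketch (illustrative matrices, using SciPy's linprog) is:

```python
import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, 0.2],
              [0.3, 1.0]])              # columns are p_r
c = np.array([1.0, 1.0])
k = 0

res = linprog(-P[:, k] / c[k],          # linprog minimizes, so negate
              A_ub=P.T, b_ub=c,         # (P^T w)_r <= c_r for all r
              bounds=[(0, None)] * 2,
              method="highs")
print("L_k non-empty:", -res.fun >= 1 - 1e-9)
```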

G Proof of Lemma 4

Proof

Let \(\tilde{\pmb {y}}\) be the solution to the optimization problem in Eq. (12), i.e., \(\tilde{\pmb {y}}^T \pmb {c} = \kappa _k < \pmb {e}_k^T \pmb {c}\), \(\tilde{\pmb {z}} = P \tilde{\pmb {y}} \ge \pmb {p}_k\), and \(\tilde{\pmb {y}} \ge 0\).

Equivalently, comparing the two action profiles \(\tilde{\pmb {y}}\) and \(\pmb {e}_k\): \(\tilde{\pmb {y}}\) achieves a weakly higher benefit \(f(\tilde{\pmb {z}}) \ge f(\pmb {p}_k)\) at a strictly lower cost \(\tilde{\pmb {y}}^T \pmb {c} = \kappa _k < \pmb {e}_k^T \pmb {c}\). Since \(b^{(i)}(\cdot )\) is strictly increasing, the agent obtains strictly higher utility from \(\tilde{\pmb {y}}\) than from \(\pmb {e}_k\), indicating that any action profile \(\pmb {x}^{(i)}\) with \((\pmb {x}^{(i)})_k = l > 0\) is strictly dominated by \(\pmb {x}^{(i)} - l \cdot \pmb {e}_k + l \cdot \tilde{\pmb {y}}\), which completes the proof.
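
A toy numeric instance of this substitution (editorial sketch; all values made up):

```python
import numpy as np

P = np.array([[1.0, 0.5, 0.5],
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 1.0, 3.0])        # dimension k = 2 is overpriced
k = 2
y = np.array([0.5, 1.0, 0.0])        # P @ y >= P[:, k] at cost 1.5 < c_k = 3
assert np.all(P @ y >= P[:, k]) and c @ y < c[k]

x = np.array([0.0, 0.0, 2.0])        # invests l = 2 in dimension k
l = x[k]
x2 = x - l * np.eye(3)[k] + l * y    # the dominating substitution

assert np.all(P @ x2 >= P @ x)       # weakly higher z in every coordinate
assert c @ x2 < c @ x                # strictly lower cost
```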

H Proof of Lemma 5

Proof

First of all, we show that \(\pmb {w}^T P (\pmb {x}^{(i)})^* = \tau \). If \(\pmb {w}^T P (\pmb {x}^{(i)})^* < \tau \) and \((\pmb {x}^{(i)})^* \ne \pmb {0}\), then the decision outcome is \(f(\pmb {z}^{(i)}) = 0\) while the action cost is \(\frac{1}{2} ||(\pmb {x}^{(i)})^*||_2^2 > 0\), which means \((\pmb {x}^{(i)})^*\) is strictly dominated by \(\pmb {0}\). On the other hand, if \(\pmb {w}^T P (\pmb {x}^{(i)})^* > \tau \), then the scaled action \(\hat{\pmb {x}}^{(i)} = \frac{\tau }{\pmb {w}^T P (\pmb {x}^{(i)})^*} (\pmb {x}^{(i)})^*\) also results in \(f(\pmb {z}^{(i)}) = 1\) at a lower action cost, and thus strictly dominates \((\pmb {x}^{(i)})^*\).

Then we show that \(s_{cos}(\pmb {x}^{(i)}, P^T \pmb {w}) = 1\). We can write out the agent’s optimization problem as follows

$$\begin{aligned} \text {minimize} ~&~ ||\pmb {x}^{(i)}||_2^2 \\ \text {subject to} ~&~ \pmb {w}^T P \pmb {x}^{(i)} = \tau , \end{aligned}$$

which clearly gives us \(s_{cos}(\pmb {x}^{(i)}, P^T \pmb {w}) = 1\), since following the normal vector of the hyperplane is the shortest path to reach the hyperplane.
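
The geometry is the standard minimum-norm projection onto a hyperplane; a short sketch (made-up \(P\), \(\pmb {w}\), \(\tau \)) confirms that any orthogonal deviation on the hyperplane is strictly longer:

```python
import numpy as np

rng = np.random.default_rng(2)
P, w, tau = rng.normal(size=(3, 4)), rng.normal(size=3), 1.0

d = P.T @ w
x_star = tau * d / (d @ d)           # minimum-norm point with w^T P x = tau
assert np.isclose(w @ (P @ x_star), tau)

shift = np.array([d[1], -d[0], 0.0, 0.0])   # orthogonal to d
x_other = x_star + shift
assert np.isclose(w @ (P @ x_other), tau)   # still on the hyperplane
assert np.linalg.norm(x_other) > np.linalg.norm(x_star)
```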

I Proof of Lemma 6

Proof

Given \(f = \pmb {1}(\pmb {w}^T \pmb {z} \ge \tau )\), we denote the set of active agents (agents receiving decision outcome 1 and thus taking non-zero actions) by \(S^{+}_{\tau }\) and the set of inactive agents by \(S^{-}_{\tau }\), where \(S^{+}_{\tau } \cup S^{-}_{\tau } = \{a_1, \dots , a_N\}\) and \(S^{+}_{\tau } \cap S^{-}_{\tau } = \emptyset \).

Then we consider an alternative threshold \(\tilde{\tau } > \tau \). If \(a_i \in S^+_{\tilde{\tau }}\), we have from Eq. (16) that

$$\begin{aligned} 1 + \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {1}((\pmb {x}^{(j)})_{\tilde{\tau }}^* \ne \pmb {0}) \ge \frac{1}{2} \tilde{\tau }^2 > \frac{1}{2} \tau ^2, \end{aligned}$$

where we add a subscript to the equilibrium action to indicate the corresponding decision rule. This shows that joint manipulation is profitable for all \(a_i \in S^+_{\tilde{\tau }}\) at threshold \(\tilde{\tau }\), and a fortiori at the lower threshold \(\tau \). In other words, \(a_i \in S^+_{\tilde{\tau }} \Rightarrow a_i \in S^+_{\tau }\), and thus \(S^+_{\tilde{\tau }} \subseteq S^+_{\tau }\) whenever \(\tilde{\tau } > \tau \). Equivalently, \(a_i \in S^-_{\tau } \Rightarrow a_i \in S^-_{\tilde{\tau }}\), and thus \(S^{-}_{\tau } \subseteq S^{-}_{\tilde{\tau }}\) whenever \(\tilde{\tau } > \tau \).
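
The nesting of active sets is easy to observe in simulation (an editorial sketch on a made-up network, iterating the joint manipulation condition from a fully active profile):

```python
import numpy as np

G = np.array([[0, .8, 0, 0],
              [.8, 0, .5, 0],
              [0, .5, 0, .2],
              [0, 0, .2, 0]])

def active_set(tau, n_iter=50):
    active = np.ones(4, dtype=bool)            # start with everyone active
    for _ in range(n_iter):
        active = (1 + G @ active) >= 0.5 * tau**2
    return active

low, high = active_set(1.6), active_set(1.8)
assert np.all(high <= low)   # active set at the higher threshold nests inside the lower one
```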

J Proof of Proposition 4

Proof

We first show that any \(\pmb {w}\) with \(s_{cos}(\pmb {w}, P \pmb {q}) = 1\) (weakly) dominates any \(\pmb {v}\) with \(s_{cos}(\pmb {v}, P \pmb {q}) < 1\) in the linear threshold mechanism.

For an arbitrary linear threshold mechanism \(f_0(\pmb {z}) = \pmb {1}(\pmb {v}^T \pmb {z} \ge \tau _0)\) such that \(s_{cos}(\pmb {v}, P \pmb {q}) < 1\), the agents’ best responses are

$$\begin{aligned} (\pmb {x}^{(i)})^* = {\left\{ \begin{array}{ll} \tau _0 P^T \pmb {v} &{} ~\text {if}~ 1 + \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {1}((\pmb {x}^{(j)})^* \ne \pmb {0}) \ge \frac{1}{2} \tau _0^2 \\ \pmb {0} ~&{}~ ~\text {o.w.}~ \end{array}\right. } \end{aligned}$$

Now let \(\pmb {w}\) be such that \(s_{cos}(\pmb {w}, P \pmb {q}) = 1\) and \(||P^T \pmb {w}||_2 = ||P^T \pmb {v}||_2\); then \(f(\pmb {z}) = \pmb {1}(\pmb {w}^T \pmb {z} \ge \tau _0)\) is a (weakly) better option for the planner. This is because the agents’ best responses become

$$\begin{aligned} (\pmb {x}^{(i)})^* = {\left\{ \begin{array}{ll} \tau _0 P^T \pmb {w} &{} ~\text {if}~ 1 + \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {1}((\pmb {x}^{(j)})^* \ne \pmb {0}) \ge \frac{1}{2} \tau _0^2 \\ \pmb {0} ~&{}~ ~\text {o.w.}~ \end{array}\right. } \end{aligned}$$

and thus \(U(f) \ge U(f_0)\) since \(\pmb {w}^T (P \pmb {q}) > \pmb {v}^T (P \pmb {q})\).

Next, we show the performance lower bound in Eq. (18). Suppose the planner’s optimal choice of threshold is \(\tau ^*\); then during the scanning, there exists a grid point \(\tau _0 \in (\tau ^* - \epsilon , \tau ^*]\). We denote the number of agents incentivized to manipulate at threshold \(\tau \) by \(N_{\tau }\). From Lemma 6, we know that \(N \ge N_{\tau _0} \ge N_{\tau ^*}\), and then

$$\begin{aligned} \frac{U(f_{\tau _0})}{\max _f U(f)} = \frac{U(f_{\tau _0})}{U(f_{\tau ^*})} \ge \frac{N_{\tau _0} \tau _0}{N_{\tau ^*} \tau ^*} \ge \frac{\tau _0}{\tau ^*} > 1 - \frac{\epsilon }{\tau ^*} \ge 1 - \frac{\epsilon }{\sqrt{2}}, \end{aligned}$$

which completes the proof.
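
The scanning procedure itself is a simple grid search; the sketch below (with a made-up, nonincreasing stand-in for \(N_\tau \)) illustrates how a grid of spacing \(\epsilon \) lands within \(\epsilon \) of \(\tau ^*\):

```python
import numpy as np

def n_active(tau):
    # stand-in for the equilibrium count N_tau; nonincreasing in tau
    return int(np.sum(np.array([2.0, 1.9, 1.7, 1.5]) >= 0.5 * tau**2))

eps = 0.05
grid = np.arange(np.sqrt(2), 2.0, eps)        # scan thresholds tau >= sqrt(2)
tau0 = max(grid, key=lambda t: n_active(t) * t)
print(tau0, n_active(tau0) * tau0)            # utility within the stated bound
```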

K Proof of Proposition 5

Proof

We begin by showing that \(s_{cos}((\pmb {x}^{(i)})^*, \pmb {e}_k) = 1\); this part is similar to the proof of Lemma 2. For any \(\pmb {x}^{(i)}\) such that \(s_{cos}(\pmb {x}^{(i)}, \pmb {e}_k) < 1\) and \(\pmb {w}^T P \pmb {x}^{(i)} = \tau \), we can show that

$$\begin{aligned} (\pmb {x}^{(i)})' = \pmb {x}^{(i)} - (\pmb {x}^{(i)})_t \pmb {e}_t + \frac{(\pmb {x}^{(i)})_t c_t}{c_k} \pmb {e}_k, \end{aligned}$$

keeps the same action cost and satisfies \(\pmb {w}^T P (\pmb {x}^{(i)})' > \pmb {w}^T P \pmb {x}^{(i)}\). We denote

$$\begin{aligned} \tau ' := \pmb {w}^T P ((\pmb {x}^{(i)})' + \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {x}^{(j)} ),~~ \eta := \pmb {w}^T P \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {x}^{(j)}, ~~ \gamma := \frac{\tau -\eta }{\tau '-\eta } < 1. \end{aligned}$$

Then the agent can choose the action \(\gamma (\pmb {x}^{(i)})'\) to increase its utility while still receiving a decision outcome of 1. Therefore, investing in any dimension other than k is suboptimal for a rational agent.

It remains to derive the expression for \(\pmb {\beta }^*\). Every agent in S needs \(f(\pmb {z}^{(i)}) = 1\), which is equivalent to

$$\begin{aligned} \pmb {w}^T P (\pmb {x}^{(i)} + \sum _{j \in \mathcal {N}_i} g_{ij} \pmb {x}^{(j)} ) = \tau , ~\forall i ~\Leftrightarrow ~ (P^T \pmb {w})_k \cdot (\beta _S^{(i)} + \sum _{j \in \mathcal {N}_i} g_{ij} \beta _S^{(j)}) = \tau , \forall i, \end{aligned}$$

which is equivalent to

$$\begin{aligned} (I + G_S) \pmb {\beta } = \frac{\tau }{(P^T \pmb {w})_k} \cdot \pmb {1} ~\Leftrightarrow ~ \pmb {\beta } = \frac{\tau }{(P^T \pmb {w})_k} (I+G_S)^{-1} \pmb {1}. \end{aligned}$$

For individual rationality, the action cost cannot exceed the benefit of 1, which requires \((\beta ^{(i)}_S)^* \le \frac{1}{c_k}, \forall i\).
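
A small numeric instance (editorial sketch; subnetwork and parameters made up) of the equilibrium expression and the individual-rationality check:

```python
import numpy as np

G_S = np.array([[0, .4, .1],
                [.4, 0, .3],
                [.1, .3, 0]])                  # subnetwork of the set S
ptw_k, c_k, tau = 2.0, 1.0, 1.2

beta = tau / ptw_k * np.linalg.solve(np.eye(3) + G_S, np.ones(3))

# every agent in S exactly reaches the threshold ...
assert np.allclose(ptw_k * (beta + G_S @ beta), tau)
# ... and participation is individually rational (cost at most the benefit 1)
assert np.all(c_k * beta <= 1.0)
```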


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Jin, K., Yin, T., Kamhoua, C.A., Liu, M. (2021). Network Games with Strategic Machine Learning. In: Bošanský, B., Gonzalez, C., Rass, S., Sinha, A. (eds) Decision and Game Theory for Security. GameSec 2021. Lecture Notes in Computer Science, vol 13061. Springer, Cham. https://doi.org/10.1007/978-3-030-90370-1_7


  • DOI: https://doi.org/10.1007/978-3-030-90370-1_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90369-5

  • Online ISBN: 978-3-030-90370-1

  • eBook Packages: Computer Science, Computer Science (R0)
