
Hypothesis Testing Game for Cyber Deception

  • Conference paper
Decision and Game Theory for Security (GameSec 2018)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 11199)

Abstract

Deception is a technique to mislead humans or computer systems by manipulating beliefs and information. Successful deception is characterized by the information-asymmetric, dynamic, and strategic behaviors of the deceiver and the deceivee. This paper proposes a game-theoretic framework that captures these features of deception, in which the deceiver sends strategically manipulated information to the deceivee, while the deceivee makes best-effort decisions based on the information received and his belief. In particular, we consider the case where the deceivee adopts hypothesis testing to make binary decisions, and the asymmetric information is modeled using a signaling game in which the deceiver is a privately informed player called the sender and the deceivee is an uninformed player called the receiver. We characterize the perfect Bayesian Nash equilibrium (PBNE) solutions of the game and study its deceivability. Our results show that the hypothesis testing game admits pooling and partially-separating-pooling equilibria. In pooling equilibria, the deceivability depends on the true types, while in partially-separating-pooling equilibria, the deceivability depends on the cost of the deceiver. We introduce the receiver operating characteristic curve to visualize the equilibrium behavior of the deceiver and the performance of the decision making, thereby characterizing the deceivability of the hypothesis testing game.


Author information

Correspondence to Tao Zhang.

Appendices

A Appendix A: Proof of Lemma 2

Expanding the total Bayes risk in Eq. 7 as follows,

$$\begin{aligned} \begin{aligned} \bar{C}^R( \delta | m, \sigma ^S, \mu ) =&\sum _{j=0}^1 \lambda (\delta | H_j, \sigma ^S ) \mu (H_j | m)\\ =&\sum _{j=0}^1 \sum _{i=0}^1 \sum _{m'\in M_i} c_{ij}\sigma ^S(m'|H_j) \mu (H_j|m). \end{aligned} \end{aligned}$$
(17)

Let \(\varXi (M_i |H_j)\) be defined as

$$ \varXi (M_i |H_j) = \sum _{m\in M_i} \sigma ^S(m|H_j). $$

Since \(M_0\) and \(M_1\) partition the message space, we have \(\varXi (M_0|H_j) + \varXi (M_1 |H_j) = 1\) \(\forall j\in \{0,1\}\). Thus, Eq. 17 can be written as

$$\begin{aligned} \begin{aligned} \bar{C}^R( \delta | m, \sigma ^S, \mu ) =&\sum _{j=0}^1 c_{0j}\mu (H_j |m) + \sum _{j=0}^1 (c_{1j} - c_{0j})\varXi (M_1 |H_j)\mu (H_j|m)\\ =&\sum _{j=0}^1 c_{0j}\mu (H_j |m) + \sum _{m'\in M_1}\big (\sum _{j=0}^1 (c_{1j} - c_{0j})\sigma ^S(m' |H_j)\mu (H_j|m) \big ). \end{aligned} \end{aligned}$$
(18)

Therefore, a decision function \(\delta ^*\) is optimal if it partitions \(\varTheta \) into \(M_0\) and \(M_1\) such that \(M_1\) satisfies

$$ M_1 = \{m\in \varTheta : \sum _{j=0}^1 (c_{1j} - c_{0j})\sigma ^S(m |H_j)\mu (H_j|m) \le 0 \}. $$

Under Assumption 1, we have

$$ c_{10} \sigma ^S(m|H_0) \mu (H_0 | m) - c_{01} \sigma ^S(m|H_1) \mu (H_1|m)\le 0. $$

Therefore, \(H_1\) is selected, i.e., \(\delta ^*(m) = 1\) if the following inequality holds,

$$ \frac{ \sigma ^S(m|H_1) }{ \sigma ^S(m|H_0) } \ge \frac{ c_{10} }{c_{01} } \frac{ \mu (H_0 | m) }{\mu (H_1|m)}. $$

Similarly, we can find the condition for \(H_0\).   \(\triangle \)
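The optimal decision rule above is a cost-weighted likelihood-ratio threshold test. A minimal sketch of that test, with hypothetical numbers standing in for the sender strategy \(\sigma^S\), the posterior belief \(\mu\), and the costs \(c_{10}, c_{01}\) (taking \(c_{00} = c_{11} = 0\) as in Assumption 1):

```python
# Hypothetical binary message space m in {0, 1}.
# sigma[m][j] = sigma^S(m | H_j): sender strategy; columns over m sum to 1.
# mu[j][m]    = mu(H_j | m):      receiver posterior; columns over j sum to 1.
c10, c01 = 1.0, 2.0
sigma = {0: {0: 0.7, 1: 0.4},
         1: {0: 0.3, 1: 0.6}}
mu = {0: {0: 0.6, 1: 0.5},
      1: {0: 0.4, 1: 0.5}}

def decide(m):
    """delta*(m): select H_1 (return 1) iff the likelihood ratio
    sigma^S(m|H_1)/sigma^S(m|H_0) meets the cost-weighted posterior threshold
    (c_10/c_01) * mu(H_0|m)/mu(H_1|m); otherwise select H_0 (return 0)."""
    lhs = sigma[m][1] / sigma[m][0]
    rhs = (c10 / c01) * (mu[0][m] / mu[1][m])
    return 1 if lhs >= rhs else 0
```

With these illustrative numbers, message \(m = 1\) clears the threshold (ratio 2.0 against 0.5) and is decided as \(H_1\), while \(m = 0\) falls below it (about 0.57 against 0.75) and is decided as \(H_0\).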

B Appendix B: Proof of Theorem 2

Suppose the true type is \(H_0\). S wants R to believe the type is \(H_1\), i.e., \(\delta ^*(m) = 1\). This requires the strategy \(\sigma ^{S*}\) of S to satisfy

$$\begin{aligned} \frac{\sigma ^{S*}(m|H_1)}{\sigma ^{S*}(m|H_0)} \ge \frac{ c_{10} }{c_{01} } \frac{ \mu (H_0|m) }{\mu (H_1|m)}. \end{aligned}$$
(19)

Given R’s action a, the corresponding costs are \(C^S(H_0, m, a=0)\) and \(C^S(H_0, m, a=1)\).

Similarly, if the true type is \(H_1\), the successful deception requires \(\sigma ^{S*}\) to satisfy

$$\begin{aligned} \frac{\sigma ^{S*}(m|H_1)}{\sigma ^{S*}(m|H_0)} < \frac{ c_{10} }{c_{01} } \frac{ \mu (H_0|m) }{\mu (H_1|m)}. \end{aligned}$$
(20)

Given R’s action a, the corresponding costs are \(C^S(H_1, m, a=0)\) and \(C^S(H_1, m, a=1)\).

Clearly, (19) and (20) cannot hold at the same time. Therefore, S has to decide between (19) and (20) so as to minimize his cost given the true type \(H_j\), \(\forall j\in \{0,1\}\). If \(C^S(H_0,m,1) < C^S(H_1,m,0)\), S chooses the strategy \(\sigma ^{S*}\) that satisfies (19). In this case, R is deceivable if \(H_0\) holds and is not deceivable if \(H_1\) holds, and the corresponding rate of successful deception is the probability of occurrence of \(H_0\), i.e., \(\pi _0\). If \(C^S(H_0,m,1) > C^S(H_1,m,0)\), S chooses \(\sigma ^{S*}\) satisfying (20). In this case, S can deceive R if \(H_1\) holds and cannot deceive her if \(H_0\) holds, and the rate of successful deception is \(\pi _1\). If \(C^S(H_0,m,1) = C^S(H_1,m,0)\), S is indifferent between (19) and (20), and he can choose either strategy.   \(\triangle \)
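The case analysis in the proof can be sketched as a small decision helper; the function and argument names are hypothetical, and the costs and priors passed in would come from the game instance:

```python
def sender_choice(cost_H0_a1, cost_H1_a0, pi0, pi1):
    """Theorem 2 case analysis: S compares the cost of deceiving under H_0
    (inducing a = 1 via condition (19)) with the cost of deceiving under H_1
    (inducing a = 0 via condition (20)) and deceives in the cheaper case.
    Returns (chosen condition, rate of successful deception)."""
    if cost_H0_a1 < cost_H1_a0:
        return "(19): R deceivable when H_0 holds", pi0  # success rate pi_0
    if cost_H0_a1 > cost_H1_a0:
        return "(20): R deceivable when H_1 holds", pi1  # success rate pi_1
    return "indifferent between (19) and (20)", None     # either strategy
```

For example, with \(C^S(H_0,m,1) = 1\), \(C^S(H_1,m,0) = 2\), and prior \(\pi_0 = 0.3\), the sender commits to (19) and succeeds at rate \(\pi_0 = 0.3\).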


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, T., Zhu, Q. (2018). Hypothesis Testing Game for Cyber Deception. In: Bushnell, L., Poovendran, R., Başar, T. (eds) Decision and Game Theory for Security. GameSec 2018. Lecture Notes in Computer Science, vol 11199. Springer, Cham. https://doi.org/10.1007/978-3-030-01554-1_31


  • DOI: https://doi.org/10.1007/978-3-030-01554-1_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-01553-4

  • Online ISBN: 978-3-030-01554-1

  • eBook Packages: Computer Science (R0)
