Leader’s dilemma game: An experimental design for cyber insider threat research

Abstract

One of the problems with insider threat research is the lack of a complete 360° view of an insider threat dataset due to inadequate experimental design. This has prevented us from modeling a computational system to protect against insider threat situations. This paper provides a contemporary methodological approach for using online games to simulate insider betrayal for predictive behavioral research. The Leader’s Dilemma Game simulates an insider betrayal scenario for analyzing organizational trust relationships, providing an opportunity to examine the trustworthiness of focal individuals, as measured by humans as sensors engaging in computer-mediated communication. This experimental design provides a window into trustworthiness attribution that can generate a rigorous and relevant behavioral dataset, and contributes to building a cyber laboratory that advances future insider threat study.

Notes

  1. KGB (transliteration of “КГБ”) is the Russian abbreviation for Committee for State Security (Комите́т Госуда́рственной Безопа́сности).

  2. SVR is the Russian abbreviation for Foreign Intelligence Service (Служба Внешней Разведки), which is Russia’s primary external intelligence agency.

  3. This interface design can be found in Fig. 3 of Appendix B.

References

  • Abbink, K., Irlenbusch, B., & Renner, E. (2000). The moonlighting game: an experimental study on reciprocity and retribution. Journal of Economic Behavior & Organization, 42(2), 265–277.

  • Al-Shaer, E. S., & Hamed, H. H. (2003). Firewall policy advisor for anomaly discovery and rule editing. IFIP/IEEE 8th International Symposium on Integrated Network Management (pp. 17–30). doi:10.1109/INM.2003.1194157.

  • Anderson, C. L., & Agarwal, R. (2010). Practicing safe computing: a multimethod empirical examination of home computer user security behavioral intentions. MIS Quarterly, 34(3), 613–643.

  • Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142.

  • The Editorial Board. (2014). Edward Snowden, whistle-blower. The Opinion Pages, The New York Times.

  • Bretton, H. L. (1980). The power of money: A political-economic analysis with special emphasis on the American political system. SUNY Press.

  • Bulgurcu, B., Cavusoglu, H., & Benbasat, I. (2010). Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness. MIS Quarterly, 34(3), 523–548.

  • Butler, J. M. (2012). Privileged password sharing: “root” of all evil. SANS Analyst Program (February 2012, pp. 1–12). Quest Software.

  • Cappelli, D. (2012). The CERT top 10 list for winning the battle against insider threats. Paper presented at the RSA Conference 2012. http://www.cert.org/insider_threat/.

  • Chivers, H., Clark, J. A., Nobles, P., Shaikh, S., & Chen, H. (2013). Knowing who to watch: Identifying attackers whose actions are hidden within false alarms and background noise. Information Systems Frontiers, 15(1), 17–34. doi:10.1007/s10796-010-9268-7.

  • Cooper, J., & Brady, D. W. (1981). Institutional context and leadership style: the house from Cannon to Rayburn. The American Political Science Review, 75(2), 411–425.

  • Costa-Gomes, M., Crawford, V. P., & Broseta, B. (2001). Cognition and behavior in normal-form games: an experimental study. Econometrica, 69(5), 1193–1235. doi:10.1111/1468-0262.00239.

  • Croson, R., & Buchan, N. (1999). Gender and culture: international experimental evidence from trust games. The American Economic Review, 89(2), 386–391.

  • Crossler, R. E., Johnston, A. C., Lowry, P. B., Hu, Q., Warkentin, M., & Baskerville, R. (2013). Future directions for behavioral information security research. Computers and Security, 32, 90–101.

  • CSI. (2011). 2010/2011 CSI Computer Crime and Security Survey (R. Richardson, Ed., pp. 1–42). New York: Computer Security Institute.

  • Dalberg-Acton, J. E. E. (1887). Power corrupts; absolute power corrupts absolutely. The Phrase Finder.

  • Denning, D. E. (1987). An intrusion-detection model. IEEE Transactions on Software Engineering, SE-13(2), 222–232.

  • Emonds, G., Declerck, C. H., Boone, C., Seurinck, R., & Achten, R. (2014). Establishing cooperation in a mixed-motive social dilemma. An fMRI study investigating the role of social value orientation and dispositional trust. Social Neuroscience, 9(1), 10–22. doi:10.1080/17470919.2013.858080.

  • Farahmand, F., & Spafford, E. H. (2013). Understanding insiders: an analysis of risk-taking behavior. Information Systems Frontiers, 15(1), 5–15. doi:10.1007/s10796-010-9265-x.

  • FBI. (2001). FBI history famous cases: Robert Philip Hanssen espionage case. Federal Bureau of Investigation. Retrieved from http://www.fbi.gov/libref/historic/famcases/hanssen/hanssen.htm.

  • Fodor, E. M., & Farrow, D. L. (1979). The power motive as an influence on use of power. Journal of Personality and Social Psychology, 37(11), 2091–2097.

  • Goode, S., & Lacey, D. (2011). Detecting complex account fraud in the enterprise: the role of technical and non-technical controls. Decision Support Systems, 50, 702–714. doi:10.1016/j.dss.2010.08.018.

  • Gouda, M. G., & Liu, X. Y. A. (2004). Firewall design: consistency, completeness, and compactness. Proceedings of the 24th International Conference on Distributed Computing Systems (pp. 320–327). doi:10.1109/ICDCS.2004.1281597.

  • Greitzer, F., Moore, A., Cappelli, D., Andrews, D., Carroll, L., & Hull, T. D. (2008). Combating the insider cyber threat. IEEE Security and Privacy, 6(1), 61–64.

  • Guo, K. H., Yuan, Y., Archer, N. P., & Connelly, C. E. (2011). Understanding nonmalicious security violations in the workplace: a composite behavior model. Journal of Management Information Systems, 28(2), 203–236. doi:10.2753/MIS0742-1222280208.

  • Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.

  • Herath, T., & Rao, H. R. (2009a). Encouraging information security behaviors in organizations: role of penalties, pressures and perceived effectiveness. Decision Support Systems, 47(2), 154–165.

  • Herath, T., & Rao, H. R. (2009b). Protection motivation and deterrence: a framework for security policy compliance in organizations. European Journal of Information Systems, 18, 106–125.

  • Ho, S. M. (2014). Cyber insider threat: Trustworthiness in virtual organization. Germany: LAP Lambert Academic Publishing. ISBN 978-3-659-51702-0.

  • Ho, S. M., & Benbasat, I. (2014). Dyadic attribution model: a mechanism to assess trustworthiness in virtual organizations. Journal of the American Society for Information Science and Technology, 65(8), 1555–1576. doi:10.1002/asi.23074.

  • Ho, S. M., & Hollister, J. (2015). Cyber insider threat in virtual organizations. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology (3rd ed., pp. 741–749). USA: IGI Global. doi:10.4018/978-1-4666-5888-2.ch145.

  • Ho, S. M., Timmarajus, S. S., Burmester, M., & Liu, X. (2014). Dyadic attribution: a theoretical model for interpreting online words and actions. Social Computing Behavioral Cultural Modeling and Prediction Lecture Notes in Computer Science, 8393, 277–284. doi:10.1007/978-3-319-05579-4_34.

  • Ho, S. M., Fu, H., Timmarajus, S. S., Booth, C., Baeg, J. H., & Liu, M. (2015). Insider threat: Language-action cues in group dynamics. SIGMIS-CPR'15 (pp. 101–104). ACM, Newport Beach, CA. doi:10.1145/2751957.2751978.

  • Ho, S. M., Hancock, J. T., Booth, C., Burmester, M., Liu, X., & Timmarajus, S. S. (2016). Demystifying insider threat: Language-action cues in group dynamics. Hawaii International Conference on System Sciences (HICSS-49) (pp. 1–10). IEEE, January 5-6, Kauai, Hawaii.

  • Holmes, J. G., & Rempel, J. K. (1989a). Trust in close relationships. In C. Hendrick (Ed.), Review of personality and social psychology (Vol. 10). Beverly Hills: Sage.

  • Holmes, J. G., & Rempel, J. K. (1989b). Trust in close relationships. In C. Hendrick (Ed.), Close relationship (pp. 187–220). Newbury Park: Sage.

  • Howard, E. S., Gardner, W. L., & Thompson, L. (2007). The role of the self-concept and the social context in determining the behavior of power holders: self-construal in intergroup versus dyadic dispute resolution negotiations. Journal of Personality and Social Psychology, 93(4), 614–631. doi:10.1037/0022-3514.93.4.614.

  • Jarvenpaa, S. L., Dickson, G. W., & DeSanctis, G. (1985). Methodological issues in experimental IS research: experiences and recommendations. MIS Quarterly, 9(2), 141–156. doi:10.2307/249115.

  • Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: an empirical study. MIS Quarterly, 34(3), 549–566.

  • Keeney, M., Kowalski, E., Cappelli, D., Moore, A. P., Shimeall, T. J., & Rogers, S. (2005). Insider threat study: Computer system sabotage in critical infrastructure sectors. http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=51934.

  • Kelley, H. H., Holmes, J. G., Kerr, N. L., Reis, H. T., Rusbult, C. E., & Van Lange, P. A. M. (1973). The process of causal attribution. American Psychologist, 28(2), 107–128.

  • Krueger, F., McCabe, K., Moll, J., Kriegeskorte, N., Zahn, R., Strenziok, M., Heinecke, A., & Grafman, J. (2007). Neural correlates of trust. Proceedings of the National Academy of Sciences of the United States of America, 104(50), 20084–20089. doi:10.1073/pnas.0710103104.

  • Kwon, J., & Johnson, M. E. (2011). An organizational learning perspective on proactive vs. reactive investment in information security. The 10th Workshop on Economics of Information Security (WEIS 2011), George Mason University, USA.

  • Lee, A. S. (1999). Rigor and relevance in MIS research: beyond the approach of positivism alone. MIS Quarterly, 23(1), 29–33.

  • Lee, A. S., & Baskerville, R. L. (2003). Generalizing generalizability in information systems research. Information Systems Research, 14(3), 221–243.

  • Lieberman, J. K. (1981). The litigious society. New York: Basic Books.

  • Lumension. (2010). Anatomy of insider risk (pp. 1–10). Scottsdale: Lumension.

  • Magklaras, G. B., & Furnell, S. M. (2001). Insider threat prediction tool: evaluating the probability of IT misuse. Computers and Security, 21(1), 62–73.

  • Magklaras, G. B., & Furnell, S. M. (2005). A preliminary model of end user sophistication for insider threat prediction in IT systems. Computers and Security, 24(5), 371–380.

  • Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: a field quasi-experiment. Journal of Applied Psychology, 84(1), 123–136.

  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.

  • McCabe, K. A., Rigdon, M. L., & Smith, V. L. (2003). Positive reciprocity and intentions in trust games. Journal of Economic Behavior & Organization, 52(2), 267–275.

  • McDermott, J., & Fox, C. (1999). Using abuse case models for security requirements analysis. Proceedings of the 15th Annual Computer Security Applications Conference (ACSAC’99), Phoenix, AZ, 55–64.

  • McGrath, J. E. (Ed.). (1995). Methodology matters: Doing research in the behavioral and social sciences. San Mateo: Morgan Kaufmann Publishers.

  • Muthaiyah, S., & Kerschberg, L. (2007). Virtual organization security policies: an ontology-based integration approach. Information Systems Frontiers, 9(5), 505–514. doi:10.1007/s10796-007-9050-7.

  • Myyry, L., Siponen, M., Pahnila, S., Vartiainen, T., & Vance, A. (2009). What levels of moral reasoning and values explain adherence to information security rules? An empirical study. European Journal of Information Systems, 18(2), 126–139.

  • Nash, J. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36(1), 48–49.

  • Nash, J. (1951). Non-cooperative games. The Annals of Mathematics, 54(2), 286–295.

  • Office of the National Counterintelligence Executive. (2014). Insider threat. Retrieved July 9, 2014.

  • Pasmore, W. A. (1988). Designing effective organizations: The sociotechnical systems perspective. New York: Wiley. ISBN 978-0471887850.

  • Podsakoff, P. M., MacKenzie, S. B., Lee, J., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879–903.

  • Ponemon Institute. (2011). Insecurity of privileged users: Global survey of IT practitioners (pp. 1–33). Ponemon Institute Research Report.

  • Predd, J., Pfleeger, S. L., Hunker, J., & Bulford, C. (2008). Insiders behaving badly. IEEE Security and Privacy, 6(4), 66–70.

  • Randazzo, M. R., Keeney, M., Kowalski, E., Cappelli, D., & Moore, A. P. (2004). Insider threat study: Illicit cyber activity in the banking and finance sector. http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=50287.

  • Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49, 95–112.

  • Roesch, M. (1999). Snort - Lightweight intrusion detection for networks. Proceedings of the LISA’99: 13th Systems Administration Conference, Seattle, Washington, USA, 229-238, USENIX Association.

  • Siponen, M., & Vance, A. (2010). Neutralization: new insights into the problem of employee information systems security policy violations. MIS Quarterly, 34(3), 487–502.

  • Siponen, M., & Vance, A. (2014). Guidelines for improving the contextual relevance of field surveys: the case of information security policy violations. European Journal of Information Systems, 23(3), 289–305. doi:10.1057/ejis.2012.59.

  • Straub, D. W. (1989). Validating instruments in MIS research. MIS Quarterly, 13(2), 147–166.

  • Toxen, B. (2014). The NSA and Snowden: securing the all-seeing eye. Communications of the ACM, 57(5), 44–51. doi:10.1145/2594502.

  • Venkatesh, V., Brown, S. A., & Bala, H. (2013). Bridging the qualitative-quantitative divide: guidelines for conducting mixed methods research in information systems. MIS Quarterly, 37(1), 21–54.

  • Warkentin, M., & Mutchler, L. A. (2014). Research in behavioral information security management. In H. Topi & A. Tucker (Eds.), Information systems and information technology (Computing Handbook Set, 3rd ed., Vol. 2). Boca Raton: Taylor and Francis.

  • Warkentin, M., Straub, D., Malimage, K. (2012). Measuring secure behavior: A research commentary. Proceedings of the Annual Symposium on Information Assurance, Albany, NY, 1–8.

  • Whetten, D. A., & Mackey, A. (2002). A social actor conception of organizational identity and its implications for the study of organizational reputation. Business and Society, 41(4), 393–414. doi:10.1177/0007650302238775.

  • Willison, R., & Warkentin, M. (2013). Beyond deterrence: an expanded view of employee computer abuse. MIS Quarterly, 37(1), 1–20.

  • Yan, H. (2015, August 19). Wikileaks source Chelsea Manning convicted over magazines, toothpaste. CNN. http://www.cnn.com/2015/08/19/politics/chelsea-manning-new-convictions/.

Acknowledgments

The first author wishes to thank the National Science Foundation for its support through Secure and Trustworthy Cyberspace EAGER award #1347113 (09/01/13–08/31/15), the Florida Center for Cybersecurity for award #2108-1072-00-O (03/01/15–02/28/16), and Conrad Metcalfe for his editing assistance.

Author information

Corresponding author

Correspondence to Shuyuan Mary Ho.

Appendices

Appendix A: Instrument items for participant observers

Table 2 Research daily survey (misleading) questionnaire for the actor team-leader
Table 3 Research daily survey questionnaire for the team players

Appendix B: Demonstration of the efficacy of experimental design

We conducted a game simulation based on the research assumptions and considerations described in this manuscript to validate the experimental design of the “Leader’s Dilemma” Game. A discussion of our findings follows. Both Team-Leaders who were presented with bait (from teams Crocodile and Dragon) did in fact betray their teams. The reliability of the observers’ ratings, used to validate the experimental design, is discussed in B.4, Accuracy of the Measurement. A comparative content analysis was conducted for four cases (two control groups and two treatment groups) over five days of archived data, including chats, blogs, and emails. We also drew on end-of-game qualitative survey results and face-to-face interviews.

B.1 Predictive results with qualitative attribution

According to the “Leader’s Dilemma” Game design principle, some information was made public and some was kept private. The Team-Leader was empowered as the sole person in contact with the Game-Master, and was the only person allowed to submit the team’s answers at the end of each session. Per the experimental design, Team-Leader Crocodile was presented with bait by the Game-Master, given in the form of 200 MerryBux (a micro-payment currency). When the bait was presented, Team-Leader Crocodile quietly decided to betray his team, a decision reflected in his delaying the submission of the team’s answers. The conversation between Team-Leader Crocodile and the Game-Master, which disclosed his intention to fail his team, is displayed in Table 4. A minimal sketch of these role constraints follows the table.

Table 4 Crocodile team-leader and game-master private chats on day 5
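To make the role constraints concrete, here is a minimal sketch in Python; the class and function names are hypothetical illustrations, not part of the study’s instrumentation.

```python
# Minimal sketch of the game's role constraints (hypothetical names):
# only the Team-Leader may contact the Game-Master or submit answers.
class Player:
    def __init__(self, pseudonym, is_leader=False):
        self.pseudonym = pseudonym
        self.is_leader = is_leader

class GameMaster:
    def receive(self, pseudonym, answers):
        print(f"{pseudonym} submitted: {answers}")

def submit_answers(player, answers, game_master):
    """Enforce the design rule that only the Team-Leader submits."""
    if not player.is_leader:
        raise PermissionError("Only the Team-Leader may submit answers.")
    game_master.receive(player.pseudonym, answers)

leader = Player("Crocodile-Leader", is_leader=True)
submit_answers(leader, {"Q1": "A"}, GameMaster())  # permitted
```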

Team-Leader Dragon was also presented with bait by the Game-Master. Although Team-Leader Dragon was the only one permitted to submit answers on behalf of her team, her team players noticed that the answers had not been submitted properly and confronted the actor (Table 5).

Table 5 Dragon team private chat on day 4

In discussions with the Game-Master, Team-Leader Dragon seemed to pretend not to understand what the Game-Master was saying, repeatedly asking about the rules for distributing the 200 MerryBux. She also seemed to test the Game-Master’s attitude, gauging what the response would be if she were to distribute the bait to the team, just to be on the safe side (Table 6).

Table 6 Dragon team-leader and game-master private chat on day 4

B.2 Predictive results with quantitative attribution

Because the “Leader’s Dilemma Game” is situated within a small virtual group setting, it does not permit a large quantity of data to be collected about the actor’s perceived trustworthiness. Still, observers situated within the simulated insider threat scenario were able to evaluate the threat situation and assess the actor’s trustworthiness (Fig. 5) based on limited interaction with the actor and with each other. In the games, the Team-Leaders of Crocodile and Dragon both decided to take the bait and betray their teams, and the ensuing insider threat behaviors were apparent to the observers. Figure 5 plots, on a 5-point scale, the trustworthiness of the four focal actors as attributed by their group members in the four virtual teams. These team players had no prior knowledge of how the game was designed, nor of the dilemma the Team-Leader faced.

Fig. 5 Quantitative trustworthiness attribution represented in a line graph

Due to the nature of this experimental design, small virtual group interactions (rather than large-scale asynchronous game competitions) were tracked (Fig. 4). The resulting aggregate sample size does not allow extensive statistical analyses, but patterns do emerge. The design permits interactive behavioral observations across time in a longitudinal setting, in contrast with methods that collect self-reported cognitions or indications of behavioral intention (e.g., survey research) (Warkentin et al. 2012). Furthermore, it allows researchers to capture and analyze two distinct behaviors: (1) the actual trust violation itself, the behind-the-scenes truth as described by the perpetrator (e.g., the Team-Leader) in the archived online dialogue, and (2) the third-party attribution of the perpetrator’s trustworthiness by team members. This methodological pluralism adds rigor to the findings (Venkatesh et al. 2013). In addition, the rich qualitative data (e.g., chats, blogs, and emails) obtained from the online game simulation created behavioral observation opportunities for insight into how virtual teams operate and how focal actors make decisions during an ethical dilemma.
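To make the two captured behavior streams concrete, the following is a minimal sketch with hypothetical field names: a ground-truth betrayal record recovered from the archived leader dialogue, stored alongside observers’ third-party trustworthiness ratings.

```python
# Sketch of the two behavior streams (hypothetical field names):
# ground-truth betrayal events vs. observers' trustworthiness ratings.
from dataclasses import dataclass

@dataclass
class BetrayalRecord:
    team: str        # e.g., "Crocodile"
    day: int         # game day, 1-5
    took_bait: bool  # ground truth from the archived private chat
    evidence: str    # excerpt from the leader/Game-Master dialogue

@dataclass
class TrustRating:
    team: str
    rater: str       # observer pseudonym
    day: int
    integrity: int   # 1-5 Likert rating of the Team-Leader

# The two streams join on (team, day), so attributed trustworthiness
# can be compared against actual betrayal behavior.
truth = BetrayalRecord("Crocodile", 5, True, "delayed answer submission")
rating = TrustRating("Crocodile", "observer1", 5, 2)
```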

The survey results illustrated in Fig. 5 indicate that the integrity rating, in the dimension of the justice value, for Team-Leader Buffalo (not influenced; group sensitivity reduced) was comparatively higher than that of any other team. The integrity of Team-Leaders Crocodile and Dragon (who were influenced and betrayed their teams) in the justice dimension was significantly lower. After removing the mole’s input, the rating for Team-Leader Dragon (influenced; group sensitivity enhanced) was lower than that of any other team, which suggests that the observers were able to make independent judgments about the actor’s trustworthiness. Averaged across observations, the group believed that Team-Leader Dragon had low integrity. Data analysts interpreted that Team-Leaders Alligator and Dragon did not communicate their values well to the members of Teams Alligator and Dragon. This finding has a significant implication: the attribution of a person’s trustworthiness can serve as an indicator of whether that person is likely to betray his or her organization.
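The aggregation just described, averaging each team’s 5-point integrity ratings with the confederate mole excluded, can be sketched as follows, reusing the hypothetical TrustRating record above; the values are illustrative, not the study’s data.

```python
# Sketch of the aggregation: average 5-point integrity ratings,
# optionally excluding the mole so only independent observers count.
def mean_integrity(ratings, exclude=frozenset()):
    kept = [r.integrity for r in ratings if r.rater not in exclude]
    return sum(kept) / len(kept)

dragon = [TrustRating("Dragon", "mole", 4, 4),
          TrustRating("Dragon", "p1", 4, 2),
          TrustRating("Dragon", "p2", 4, 1)]
print(round(mean_integrity(dragon), 2))          # 2.33 with the mole
print(mean_integrity(dragon, exclude={"mole"}))  # 1.5 without it
```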

B.3 Privacy and general settings in the online game environment

Due to the sensitive nature of insider threats and the psychological manipulation involved, researchers need to be careful in how human subjects are manipulated. In our online game approach, team players found the group dynamics of the experiment very interesting, especially the way players began to question the actor’s (i.e., the Team-Leader’s) trustworthiness. Players also liked the virtual mode of communication. As one observed:

“People quickly gravitate toward their natural roles and the way the alliances are formed. Everyone wants a feeling of community, even in this very temporary virtual world. People want to fit in.”

“Privacy is a good aspect of this type of game,” one player reported. “The room was well set up and no one knows who is who,” another answered. Because the experiment takes place in a temporary virtual setting, and because all players are given pseudonyms, no social responsibility carries over beyond the 5-day games, thereby avoiding social desirability bias and allowing true personality behaviors to be exhibited. As one player reported,

“I was also most interested in the fact that I formed an alliance at all. I had determined to be distant and anonymous for this game, but my natural talkativeness and willingness to try to answer questions took over. Immediately I noticed that [Ricky] and I answered similarly and that we had the most in common. I wasn’t planning to talk, but my wanting to take some kind of leadership role when there was a gap to be filled took over. I fell into my natural (my usual, however they were constructed) patterns of behaviors. Even though I planned to make the most out of having a fake ID, my personality took over and I was the same person I really am—I didn’t act like I was a different person; I started acting like myself. That was a very strong drive for me.”

B.4 Accuracy of the measurement

Measurement in the experiment includes two categories: (1) the accuracy of the participants’ views (accuracy), illustrated in Fig. 6, and (2) judgments about the participants’ performance (outcome instrumentality), illustrated in Fig. 7. Each line in the graphs represents a group’s average view of each category, with four lines representing the four virtual teams. Lines closer to the outer circles indicate that a group’s views in these categories are higher (on a 5-point Likert scale), and vice versa. When the four lines in a graph lie close to one another, the players’ views are similar across the four teams.
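The radar-style reading described above can be reproduced with a short plotting sketch; the category labels follow the text, but the team averages are placeholders rather than the study’s measurements.

```python
# Sketch of the radar-graph reading: one closed line per team, radius =
# average Likert rating (1-5), so lines nearer the outer circle mean
# higher views. All values below are placeholders.
import numpy as np
import matplotlib.pyplot as plt

categories = ["accuracy", "outcome instrumentality"]
teams = {"Alligator": [3.8, 4.3], "Buffalo": [3.7, 3.2],
         "Crocodile": [3.6, 3.1], "Dragon": [3.5, 3.0]}

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False)
ax = plt.subplot(polar=True)
for name, vals in teams.items():
    # repeat the first point so each team's line closes into a loop
    ax.plot(np.append(angles, angles[0]), vals + [vals[0]], label=name)
ax.set_ylim(0, 5)
ax.set_xticks(angles)
ax.set_xticklabels(categories)
ax.legend(loc="lower right")
plt.show()
```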

Figure 6 illustrates that the participants’ judgments about the accuracy of the instrument were closely aligned across all four virtual teams. When the outlier data were removed (shown on the right-hand side of Fig. 6), a consistent pattern regarding the instrument’s accuracy emerges among team players in all four cases.

Fig. 6 Comparison measures of observers’ rating accuracy

Figure 7 illustrates that the Alligator team players judged their own performance (outcome instrumentality) higher than the players of the other three teams did. When the outlier data were removed (shown on the right-hand side of Fig. 7), a consistent pattern regarding team players’ performance as outcome instrumentality was found among team players in all four cases.

Fig. 7 Comparison measures of observers’ judgment about their performance as outcome instrumentality
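The outlier-removed panels of Figs. 6 and 7 can be approximated with a simple filter applied before re-averaging; the 1.5-sigma cutoff below is our assumption for illustration, as the paper does not state the exact rule used.

```python
# Sketch of outlier removal before re-averaging (the 1.5-sigma cutoff
# is an assumption; the rule used for Figs. 6-7 is not specified).
import statistics

def drop_outliers(values, k=1.5):
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [v for v in values if abs(v - mu) <= k * sd]

raw = [4, 4, 5, 1, 4]                       # one rating far below the rest
print(statistics.mean(raw))                 # 3.6 with the outlier
print(statistics.mean(drop_outliers(raw)))  # 4.25 after removal
```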

B.5 An interface design utilizing Google+ Hangout

Fig. 8 A sample interface design of the online game utilizing Google+ Hangout (Ho et al. 2015, 2016)

Figure 8 illustrates an interface design for the “Leader’s Dilemma Game” utilizing Google+ Hangout. Participants’ role assignments are indicated in the lower right-hand corner. It is important to note that participants’ privacy and data confidentiality are addressed in this instrument artifact by replacing their real identities with pseudonymous identities. Participants’ group-oriented task assignments appear in the upper right-hand corner. Human-to-human and human-to-computer interactions are captured in the chat boxes, and the Team-Leader’s private chat with the Game-Master is kept separate from the team’s group chat.
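This channel separation can be expressed in a small routing sketch; the class and method names are hypothetical, not the instrument’s actual implementation.

```python
# Sketch of channel separation (hypothetical names): the Team-Leader's
# private chat with the Game-Master never reaches the group log.
class TeamChat:
    def __init__(self, team):
        self.team = team
        self.group_log = []    # visible to all pseudonymous players
        self.private_log = []  # visible only to leader and Game-Master

    def post(self, sender, text, private=False):
        log = self.private_log if private else self.group_log
        log.append((sender, text))

chat = TeamChat("Dragon")
chat.post("GameMaster", "The 200 MerryBux are yours to keep.", private=True)
chat.post("Leader", "Let's finalize today's answers.")
assert not any(s == "GameMaster" for s, _ in chat.group_log)
```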

Cite this article

Ho, S.M., Warkentin, M. Leader’s dilemma game: An experimental design for cyber insider threat research. Inf Syst Front 19, 377–396 (2017). https://doi.org/10.1007/s10796-015-9599-5
