On Policies and Intents

  • Conference paper
Information Systems Security (ICISS 2012)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7671)

Abstract

A policy is a set of guidelines meant to accomplish some intent. In information security, a policy typically takes the form of an access control policy that describes the conditions under which entities can perform actions on data objects. Such policies are pervasive in modern society, where information must flow between different enterprises, states, and countries, each of which will likely have different policies. Unfortunately, policies have proven extremely difficult to evaluate. Even for formal policies, basic questions about completeness and consistency can be undecidable. These problems are compounded when multiple policies must be considered in aggregate. Worse, many policies are merely “formal-looking” or entirely informal: they cannot be reasoned about formally, and it may not even be possible to reliably determine whether a given course of action is allowed. Beyond these problems, policies face issues of validity. That is, to be valid, a policy should reflect the intent of the policy makers, and it should be clear what the consequences are if the policy is violated. It is the contention of the authors that, when evaluating policies, one must be able to understand and reason about the policy makers’ intentions and the consequences associated with them. This paper focuses on the intent portion of this perspective. Because policy makers are human, their intentions are not readily captured by existing policy languages and notations. To rectify this, we take inspiration from task analytic methods, a set of tools and techniques that human factors engineers and cognitive scientists use to represent and reason about the intentions behind human behavior.
Using task analytic models as a template, we describe how policies can be represented in task-like models as hierarchies of goals and rules, with logics specifying when goals are contextually relevant and what outcomes are expected when goals are achieved. We then discuss how this framing could be used to reason about policy maker intent when evaluating policies. We further outline how this approach could be extended to facilitate reasoning about consequences. Support for legacy systems is also explored.
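The goal-hierarchy framing described in the abstract can be illustrated with a small sketch. All names and the evaluation rule below are hypothetical; the paper itself does not prescribe an implementation, only the idea that each goal carries a logic for when it is contextually relevant and what outcome is expected when it is achieved.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a task-like policy model: a hierarchy of
# goals, each with a relevance condition (when the goal applies in a
# given context) and an expected-outcome predicate (what should hold
# once the goal is achieved).

Context = Dict[str, object]  # e.g. {"object": "record", "action": "read"}

@dataclass
class Goal:
    name: str
    relevant: Callable[[Context], bool]   # is this goal in play here?
    achieved: Callable[[Context], bool]   # does the expected outcome hold?
    subgoals: List["Goal"] = field(default_factory=list)

def evaluate(goal: Goal, ctx: Context) -> bool:
    """A goal is satisfied in a context if it is not relevant there,
    or if its outcome holds and all of its subgoals are satisfied."""
    if not goal.relevant(ctx):
        return True
    return goal.achieved(ctx) and all(evaluate(g, ctx) for g in goal.subgoals)

# Toy access-control intent (invented for illustration):
# "records may only be read by a clinician treating that patient".
policy = Goal(
    name="protect-records",
    relevant=lambda c: c.get("object") == "record",
    achieved=lambda c: c.get("action") != "read" or bool(c.get("own_patient")),
)

print(evaluate(policy, {"object": "record", "action": "read", "own_patient": True}))
print(evaluate(policy, {"object": "record", "action": "read", "own_patient": False}))
```

Encoding the intent as a predicate over contexts, rather than as a flat rule list, is what lets one later ask whether a concrete policy's rules actually achieve the goal they sit under.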

This research was supported in part by NSF grants CCF-1141863, CNS-1228947, IIS-0747369, and IIS-0812258.




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bolton, M.L., Wallace, C.M., Zuck, L.D. (2012). On Policies and Intents. In: Venkatakrishnan, V., Goswami, D. (eds) Information Systems Security. ICISS 2012. Lecture Notes in Computer Science, vol 7671. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35130-3_8

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-35130-3_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35129-7

  • Online ISBN: 978-3-642-35130-3

  • eBook Packages: Computer Science (R0)
