In the eye of the beholder: A visualization-based approach to information system security

https://doi.org/10.1016/j.ijhcs.2005.04.021

Abstract

Computer system security is traditionally regarded as a primarily technological concern; the fundamental questions that security researchers address are those of the mathematical guarantees that can be made for various communication and computational tasks. However, in our research, we focus on a different question. For us, the fundamental security question is one that end-users routinely encounter and resolve for themselves many times a day—the question of whether a system is secure enough for their immediate needs.

In this paper, we will describe our explorations of this issue. In particular, we will draw on three major elements of our research to date. The first is empirical investigation into everyday security practices, looking at how people manage security as a practical, day-to-day concern, and exploring the context in which security decisions are made. This empirical work provides a foundation for our reconsideration of security, to a large degree, as an interactional problem. The second is our systems approach, based on visualization and event-based architectures. This technical approach provides a broad platform for investigating security and interaction, based on a set of general principles. The third is our initial experience with a prototype deployment of these mechanisms in an application for peer-to-peer file sharing in face-to-face collaborative settings. We have been using this application as the basis of an initial evaluation of our technology in support of everyday security practices in collaborative workgroups.

Introduction

Networked computer systems are increasingly the site of people's work and activity. Millions of ordinary citizens conduct commercial transactions over the Internet, or manage their finances and pay their bills online; companies increasingly use the Internet to connect different offices, or form virtual teams to tackle mission-critical problems through entirely "virtual" interaction; interactions between citizens and local and federal government agencies can increasingly be conducted electronically; and the 2004 national elections in Brazil and (to a much more limited extent) the US saw the introduction of electronic voting, which will no doubt become more widespread.

However, these new opportunities have costs associated with them. Commercial, political and financial transactions involve disclosing sensitive information. The media regularly carry stories about hackers breaking into commercial servers, credit card fraud and identity theft. Many people are nervous about committing personal information to electronic information infrastructures. Even though modern PCs are powerful enough to offer strong cryptographic guarantees and high levels of security, these concerns remain.

The need for secure systems is broadly recognized, but most discussions of the “problem of security” focus on the foundational elements of information systems (such as network transmission and information storage) and the mechanisms available to system developers, integrators, and managers to ensure secure operation and management of data. Security, though, is a broader concern, and a problem for the end-users of information systems as much as for their administrators. Participation in activities such as electronic commerce requires that people be able to trust the infrastructures that will deliver these services to them.

This is not quite the same as saying that we need more secure infrastructures. We believe that it is important to separate theoretical security (the level of secure communication and computation that is technically feasible) from effective security (the level of security that can practically be achieved in everyday settings). Levels of effective security are almost always lower than those of theoretical security. A number of reasons for this disparity have been identified, including poor implementations of key security algorithms (Kelsey et al., 1998), insecure programming techniques (Wagner et al., 2000; Shankar et al., 2002), insecure protocol design (Kemmerer et al., 1994; Schneier and Mudge, 1998), and inadequate operating systems support (Ames et al., 1983; Bernaschi et al., 2000).
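The gap between theoretical and effective security can be made concrete with a small illustration of our own (not drawn from the cited studies): a cryptographically sound primitive undermined by an insecure implementation detail. Here an HMAC tag is theoretically strong, but a naive byte comparison leaks timing information; the key and function names are hypothetical.

```python
import hmac
import hashlib

SECRET_KEY = b"server-side-key"  # hypothetical key, for illustration only

def sign(message: bytes) -> bytes:
    """Theoretically secure: an HMAC-SHA256 tag over the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify_insecure(message: bytes, tag: bytes) -> bool:
    # `==` short-circuits at the first differing byte, so response
    # time can leak how much of the tag an attacker has guessed.
    return sign(message) == tag

def verify_effective(message: bytes, tag: bytes) -> bool:
    # A constant-time comparison closes the timing side channel.
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer $100"
tag = sign(msg)
assert verify_effective(msg, tag)
assert not verify_effective(msg, b"\x00" * len(tag))
```

Both verifiers accept the same valid tags; the disparity lies not in the mathematics but in how the check is carried out, which is precisely the distinction between theoretical and effective security.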

One important source of the disparity, though, is the extent to which users can comprehend and make effective use of security mechanisms. Approaches that attempt to make the provision of system security "automatic" or "transparent" essentially remove security from the domain of the end-user. However, in situations where only the end-user can determine the appropriate use of information or the necessary levels of security, this explicit disempowerment becomes problematic. We have been tackling these problems in the Swirl project. Here, rather than regarding the user as a potential security hole to be "routed around," we attempt, instead, to understand how to create systems in which security is a joint production of technical, human, and social resources.

We will begin by discussing some existing work in this area, before introducing our approach. We will briefly summarize the results of our empirical work and the conclusions that we draw from these investigations, before presenting our design approach and an example of a system based on this approach. We will then briefly discuss some early usage feedback.


Previous approaches

It is broadly recognized that one of the major challenges to the effective deployment of information security systems is getting people to use them correctly. Psychological acceptability is one of the design principles that Saltzer and Schroeder (1975) identify. Even beyond the domain of electronic information systems, there are many examples of the fact that overly complex security systems actually reduce effective security. For example, Kahn (1967), cited by Anderson (1993), suggests that

Design approach for effective security

Our goal in undertaking both a broad review of the literature and these empirical investigations has been to understand how best to approach the design of technologies supporting usable security. As we have noted, one design approach involves giving specific attention to the security features of a system, such as those components through which information encryption might be controlled, or through which privacy preferences might be expressed, and tackling the usability problems that typically

Applying the principles

The two principles—visualizing system state, and integrating configuration and action—are broadly applicable. They have informed the design of a number of prototypes, and are part of a developing design “vocabulary” that is the primary focus of our work. In order to show how we have used them, we will spend some time discussing our most recent application design.
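As a rough sketch of how these two principles might be realized in an event-based architecture (our own illustration; the class and event names are hypothetical, not Impromptu's actual implementation): application actions emit events, and the interface subscribes to them so that sharing configuration and ongoing activity are visible in the same place.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: application actions emit events,
    and interface components subscribe so security state stays visible."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, **event):
        for handler in self._subscribers[topic]:
            handler(event)

class SharingDisplay:
    """Visualizing system state: tracks who can currently read each
    shared file, and records accesses against that configuration."""
    def __init__(self, bus):
        self.visibility = {}   # file -> set of readers
        self.accesses = []     # (who, file) pairs, in order
        bus.subscribe("share", self.on_share)
        bus.subscribe("access", self.on_access)

    def on_share(self, event):
        # Integrating configuration and action: changing who can read
        # a file immediately updates what the display shows.
        self.visibility[event["file"]] = set(event["readers"])

    def on_access(self, event):
        self.accesses.append((event["who"], event["file"]))

bus = EventBus()
display = SharingDisplay(bus)
bus.publish("share", file="notes.txt", readers=["alice", "bob"])
bus.publish("access", who="bob", file="notes.txt")
assert display.visibility["notes.txt"] == {"alice", "bob"}
assert display.accesses == [("bob", "notes.txt")]
```

The design choice here is that the visualization is a peer subscriber on the same bus as the application logic, so security state cannot silently drift away from what the user sees.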

Our current testbed for experimentation is an application called Impromptu. Impromptu is a collaborative peer-to-peer file sharing

Initial usage experiences

A formal evaluation exercise is ongoing, but it is useful to reflect on some initial experiences introducing people to the use of Impromptu. We conducted three informal pilots, in which pairs of users drawn from our department worked together on a self-selected, real-world task. The goal was to determine the extent to which the design principles that we had adopted facilitated the interpretation of the system in terms of security concerns and impacts on action. Although very preliminary, our

Conclusion

Computer and communication security has been an important research topic for decades. However, the pressing concern at the moment is not simply with advancing the state of the art in theoretical security, but with being able to incorporate powerful security technology into the kinds of networked computational environments that more and more people rely on every day. We see the problem of creating a trustable infrastructure—one that end-users can see is visibly trustworthy—as a major problem for

Acknowledgements

This work was supported in part by the National Science Foundation under awards 0133749, 0205724 and 0326105, and by a grant from Intel Corporation.

References

  • D.G. Tatar et al.

    Design for conversation: lessons from Cognoter

    International Journal of Man–Machine Studies

    (1991)
  • M.S. Ackerman et al.

Privacy critics: UI components to safeguard users' privacy. CHI '99 Extended Abstracts on Human Factors in Computing Systems

    (1999)
  • M.S. Ackerman et al.

    Privacy in e-commerce: examining user scenarios and privacy preferences. Proceedings of the First ACM Conference on Electronic Commerce

    (1999)
  • A. Adams et al.

    Users are not the enemy: why users compromise security mechanisms and how to take remedial measures

    Communications of the ACM

    (1999)
  • A. Adams et al.

    Making passwords secure and usable. Proceedings of HCI on People and Computers XII

    (1997)
  • S. Ames et al.

    Security Kernel Design and Implementation: An Introduction

    (1983)
  • R. Anderson

Why cryptosystems fail. Proceedings of the First ACM Conference on Computer and Communications Security

    (1993)
  • V. Bellotti et al.

    Design for privacy in ubiquitous environments

  • M. Bernaschi et al.

    Operating system enhancements to prevent the misuse of system calls. Proceedings of the Seventh ACM Conference on Computer and Communications Security

    (2000)
  • M.S. Blumenthal et al.

    Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world

    ACM Transactions on Internet Technology

    (2001)
  • S. Brostoff et al.

Are Passfaces more usable than passwords? A field trial investigation

  • A. Carzaniga et al.

    Design and evaluation of a wide-area event notification service

ACM Transactions on Computer Systems

    (2001)
  • de Paula, R., Ding, X., Dourish, P., Nies, K., Pillet, B., Redmiles, D.F., Ren, J., Rode, J.A., Silva Filho, R., 2005....
  • D.E. Denning

    An intrusion-detection model

    IEEE Transactions on Software Engineering

    (1987)
  • P. Dewan et al.

    Flexible meta access-control for collaborative applications. Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work

    (1998)
  • Dhamija, R., Perrig, A., 2000. Déjà Vu: a user study using images for authentication. Proceedings of the Ninth USENIX...
  • P. Dourish

    Culture and control in a media space. Proceedings of the European Conference on Computer-Supported Cooperative Work ECSCW’93

    (1993)
  • Dourish, P., Anderson, K., 2005. Privacy, security…and risk and danger and secrecy and trust and identity and morality...
  • P. Dourish et al.

    An approach to usable security based on event monitoring and visualization. Proceedings of the 2002 Workshop on New Security Paradigms

    (2002)
  • P. Dourish et al.

    The doctor is in: helping end users understand the health of distributed systems

  • P. Dourish et al.

    Security in the wild: user strategies for managing security as an everyday, practical problem

    Personal Ubiquitous Computing

    (2004)
  • R.A. Finkel

    Pulsar: an extensible tool for monitoring large Unix sites

Software: Practice and Experience

    (1997)
  • Goland, Y., Whitehead, E., Faizi, A., Carter, S., Jensen, D., 1999. HTTP extensions for distributed authoring—WEBDAV....
  • N.S. Good et al.

    Usability and privacy: a study of Kazaa P2P file-sharing. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems

    (2003)
  • S. Greenberg et al.

    Real time groupware as a distributed system: concurrency control and its effect on the interface. Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work

    (1994)
  • R.R. Henning

    Security service level agreements: quantifiable security for the enterprise? Proceedings of the 1999 Workshop on New Security Paradigms

    (2000)