In the eye of the beholder: A visualization-based approach to information system security
Introduction
Networked computer systems are increasingly the site of people's work and activity. Millions of ordinary citizens conduct commercial transactions over the Internet, or manage their finances and pay their bills on-line; companies increasingly use the Internet to connect different offices, or form virtual teams to tackle mission-critical problems through entirely "virtual" interaction; interactions between citizens and local and federal government agencies can increasingly be conducted electronically; and the 2004 national elections in Brazil and (to a much more limited extent) the US saw the introduction of electronic voting, which will no doubt become more widespread.
However, these new opportunities have costs associated with them. Commercial, political and financial transactions involve disclosing sensitive information. The media regularly carry stories about hackers breaking into commercial servers, credit card fraud and identity theft. Many people are nervous about committing personal information to electronic information infrastructures. Even though modern PCs are powerful enough to offer strong cryptographic guarantees and high levels of security, these concerns remain.
The need for secure systems is broadly recognized, but most discussions of the “problem of security” focus on the foundational elements of information systems (such as network transmission and information storage) and the mechanisms available to system developers, integrators, and managers to ensure secure operation and management of data. Security, though, is a broader concern, and a problem for the end-users of information systems as much as for their administrators. Participation in activities such as electronic commerce requires that people be able to trust the infrastructures that will deliver these services to them.
This is not quite the same as saying that we need more secure infrastructures. We believe that it is important to separate theoretical security (the level of secure communication and computation that is technically feasible) from effective security (the level of security that can practically be achieved in everyday settings). Levels of effective security are almost always lower than those of theoretical security. A number of reasons for this disparity have been identified, including poor implementations of key security algorithms (Kelsey et al., 1998), insecure programming techniques (Wagner et al., 2000; Shankar et al., 2002), insecure protocol design (Kemmerer et al., 1994; Schneier and Mudge, 1998), and inadequate operating systems support (Ames et al., 1983; Bernaschi et al., 2000).
One important source of the disparity, though, is the limited extent to which users can comprehend and make effective use of security mechanisms. Approaches that attempt to make the provision of system security "automatic" or "transparent" essentially remove security from the domain of the end-user. However, in situations where only the end-user can determine the appropriate use of information or the necessary levels of security, this explicit disempowerment becomes problematic. We have been tackling these problems in the Swirl project. Here, rather than regarding the user as a potential security hole to be "routed around," we attempt, instead, to understand how to create systems in which security is a joint production of technical, human, and social resources.
We will begin by discussing some existing work in this area, before introducing our approach. We will briefly summarize the results of our empirical work and the conclusions that we draw from these investigations, before presenting our design approach and an example of a system based on this approach. We will then briefly discuss some early usage feedback.
Previous approaches
It is broadly recognized that one of the major challenges to the effective deployment of information security systems is getting people to use them correctly. Psychological acceptability is one of the design principles that Saltzer and Schroeder (1975) identify. Even beyond the domain of electronic information systems, there are many examples showing that overly complex security systems actually reduce effective security. For example, Kahn (1967), cited by Anderson (1993), suggests that
Design approach for effective security
Our goal in undertaking both a broad review of the literature and these empirical investigations has been to understand how best to approach the design of technologies supporting usable security. As we have noted, one design approach involves giving specific attention to the security features of a system, such as those components through which information encryption might be controlled, or through which privacy preferences might be expressed, and tackling the usability problems that typically
Applying the principles
The two principles—visualizing system state, and integrating configuration and action—are broadly applicable. They have informed the design of a number of prototypes, and are part of a developing design “vocabulary” that is the primary focus of our work. In order to show how we have used them, we will spend some time discussing our most recent application design.
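To make the two principles concrete, the following is a minimal, hypothetical sketch (not the Impromptu implementation, whose details are not given here): files are "configured" by the single act of placing them in a named region of a shared space, so the configuration step and the sharing action coincide, and the current exposure of every file is directly queryable as visible system state. The region names and the `SharedSpace` API are illustrative assumptions.

```python
# Hypothetical sketch of the two design principles; the region names and
# this API are assumptions for illustration, not the Impromptu design.
from dataclasses import dataclass, field

# Ordered from least to most exposed. The region a file sits in is
# simultaneously its access-control setting and part of the display.
REGIONS = ("private", "read-only", "editable")

@dataclass
class SharedSpace:
    files: dict = field(default_factory=dict)  # filename -> region

    def place(self, filename: str, region: str) -> None:
        # Integrating configuration and action: one gesture (placing the
        # file) both shares it and sets its security level.
        if region not in REGIONS:
            raise ValueError(f"unknown region: {region}")
        self.files[filename] = region

    def visible_state(self) -> dict:
        # Visualizing system state: group files by current exposure, so
        # the security consequences of past actions stay inspectable.
        state = {r: [] for r in REGIONS}
        for name, region in self.files.items():
            state[region].append(name)
        return state

space = SharedSpace()
space.place("notes.txt", "private")
space.place("draft.doc", "editable")
print(space.visible_state())
# → {'private': ['notes.txt'], 'read-only': [], 'editable': ['draft.doc']}
```

The point of the sketch is that there is no separate preferences dialog: moving a file to a different region is the only way to change its security level, so users never configure security apart from acting on the files themselves.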
Our current testbed for experimentation is an application called Impromptu. Impromptu is a collaborative peer-to-peer file sharing
Initial usage experiences
A formal evaluation exercise is ongoing, but it is useful to reflect on some initial experiences introducing people to the use of Impromptu. We conducted three informal pilots, in which pairs of users drawn from our department worked together on a self-selected, real-world task. The goal was to determine the extent to which the design principles that we had adopted facilitated the interpretation of the system in terms of security concerns and impacts on action. Although very preliminary, our
Conclusion
Computer and communication security has been an important research topic for decades. However, the pressing concern at the moment is not simply with advancing the state of the art in theoretical security, but with being able to incorporate powerful security technology into the kinds of networked computational environments that more and more people rely on every day. We see the problem of creating a trustable infrastructure—one that end-users can see is visibly trustworthy—as a major problem for
Acknowledgements
This work was supported in part by the National Science Foundation under awards 0133749, 0205724 and 0326105, and by a grant from Intel Corporation.
References (53)
- et al. (1991). Design for conversation: lessons from Cognoter. International Journal of Man–Machine Studies.
- et al. (1999). Privacy critics: UI components to safeguard users' privacy. CHI '99 Extended Abstracts on Human Factors in Computing Systems.
- et al. (1999). Privacy in e-commerce: examining user scenarios and privacy preferences. Proceedings of the First ACM Conference on Electronic Commerce.
- et al. (1999). Users are not the enemy: why users compromise security mechanisms and how to take remedial measures. Communications of the ACM.
- et al. (1997). Making passwords secure and usable. Proceedings of HCI on People and Computers XII.
- Ames et al. (1983). Security Kernel Design and Implementation: An Introduction.
- Anderson (1993). Why cryptosystems fail. Proceedings of the First ACM Conference on Computer and Communications Security.
- et al. Design for privacy in ubiquitous environments.
- Bernaschi et al. (2000). Operating system enhancements to prevent the misuse of system calls. Proceedings of the Seventh ACM Conference on Computer and Communications Security.
- et al. (2001). Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Transactions on Internet Technology.