Intrusion-tolerant fine-grained authorization for Internet applications
Introduction
Authentication and authorization are key issues in computer security. The authentication process provides a way for a user to prove their identity (typically by presenting a valid username and password), while the authorization process consists of determining whether a user has permission to execute given operations in the system. This paper deals with authorization. Not a simple issue even in standalone systems, authorization becomes truly intricate when one considers distributed applications, and particularly applications distributed over the Internet.
Most protection models are based on the notion of a reference monitor [1] that controls all interactions in the system to decide whether each access is authorized or denied, using an access matrix that stores all the access rights of the system. For applications distributed across the Internet, a direct application of the standalone-system paradigm would use a central reference monitor, located on one host of the distributed system, that checks all interactions in the whole distributed system. An implementation of this paradigm has been proposed in [2]. A major drawback of this approach is that the security of the entire system relies on just one machine, which is thus a single point of failure. Even if the central reference monitor were implemented as a fault- and intrusion-tolerant server running on several separately administered machines, it would still be a major performance bottleneck.

Another possible solution can be found in the Red Book [3]. In this approach, a local reference monitor on each site of the distributed system checks all accesses from remote entities to local entities. The local reference monitor is part of the site’s Trusted Computing Base (TCB), and each TCB trusts all the other TCBs of the whole system: when a remote entity accesses a local entity, the local TCB trusts the remote TCB to correctly supply the remote entity’s identity, which is used to verify the access rights. Thus, in this approach, if any of the TCBs is corrupted, the security of the whole system is compromised.
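The reference-monitor idea above can be reduced to a very small sketch: every access goes through a single decision point that consults an access matrix. This is only an illustration of the classical model, not the paper's design; the class and method names are ours, and the matrix is stored sparsely as granted (subject, object, operation) triples.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of a classical reference monitor: every access is mediated
// by check(), which consults an access matrix stored as a sparse set of
// granted (subject, object, operation) triples.
public class ReferenceMonitorSketch {
    private final Set<String> accessMatrix = new HashSet<>();

    // Record in the matrix that subject s may perform op on object o.
    public void grant(String s, String o, String op) {
        accessMatrix.add(s + "|" + o + "|" + op);
    }

    // The single decision point: allow only explicitly recorded rights.
    public boolean check(String s, String o, String op) {
        return accessMatrix.contains(s + "|" + o + "|" + op);
    }

    public static void main(String[] args) {
        ReferenceMonitorSketch rm = new ReferenceMonitorSketch();
        rm.grant("alice", "file1", "read");
        System.out.println(rm.check("alice", "file1", "read"));  // true
        System.out.println(rm.check("alice", "file1", "write")); // false
    }
}
```

A centralized deployment would place one such monitor on a single host for the whole system; the Red Book approach would replicate it per site, with each site trusting the others' identity claims.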
Our first objective was thus to design an authorization scheme that is a trade-off between totally centralized and totally distributed systems and that has none of their drawbacks.
Today, most Internet applications are based on the client–server model. This model is not rich enough to cope with composite operations involving more than two participants. For example, an electronic commerce transaction typically requires the cooperation of a customer, a merchant, a credit card company, a bank, a delivery company, etc. Each of these participants has different interests, and may thus distrust the other participants. Moreover, in this model, typically, the server distrusts clients, and grants each client access rights according to the client’s identity. This enables the server to record a lot of personal information about clients: identity, usual IP address, postal address, credit card number, shopping habits, etc. Such a model is thus necessarily privacy intrusive.
Our second objective was thus to design an authorization scheme that is able to distribute proofs of authorization to the participants of a composite operation whilst enforcing the “least privilege principle”: each participant should be granted the minimum set of proofs of authorization necessary to execute the composite operation. For that purpose, our authorization scheme defines a new proof-of-authorization concept and an authorization delegation scheme that is more flexible than the usual “proxy scheme”.
The paper is organised as follows. Section 2 presents the design of the authorization scheme and the security properties of the main entities. In Section 3, we give the definitions of the different proofs of authorization that are used in our scheme and discuss our delegation scheme. Sections 4 and 5 give a detailed presentation of the two main entities of our scheme: the authorization server and the local reference monitor, respectively. Section 6 gives an overview of a prototype implementation of the authorization service, along with some performance measures. Finally, Section 7 presents related work and Section 8 draws conclusions and proposes future work.
Section snippets
Design of the authorization service
This section gives a global presentation of our authorization scheme and the security properties required by the different entities that compose the authorization service.
Concepts and definitions
We build on the generic authorization scheme for distributed object systems that we proposed in [4]. In that scheme, an application is considered to be composed of objects that interact through method invocations. Execution can be viewed at two levels of abstraction: atomic operations and composite operations. An atomic operation is simply the execution of one method of one object. A composite operation corresponds to the coordinated execution of several atomic operations towards a common goal.
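The two abstraction levels can be sketched with a couple of small types. The names and the e-commerce steps below are illustrative only (they are not taken from the paper): an atomic operation pairs one object with one method, and a composite operation is a named, coordinated sequence of atomic operations.

```java
import java.util.List;

// Sketch of the two execution abstraction levels: an atomic operation is
// one method invocation on one object; a composite operation coordinates
// several atomic operations towards a common goal.
public class OperationModel {
    // An atomic operation: the execution of one method of one object.
    record AtomicOp(String object, String method) {}

    // A composite operation: a named, coordinated sequence of atomic ops.
    record CompositeOp(String name, List<AtomicOp> steps) {}

    public static void main(String[] args) {
        CompositeOp purchase = new CompositeOp("purchase", List.of(
                new AtomicOp("Catalog", "reserveItem"),
                new AtomicOp("Bank", "debitAccount"),
                new AtomicOp("Delivery", "scheduleShipment")));
        System.out.println(purchase.name() + " has "
                + purchase.steps().size() + " atomic steps");
    }
}
```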
Authorization server
Since only composite operations are managed by the authorization server, system security is relatively easy to manage: the application security administrators of the realm need only assign the rights to execute composite operations; they do not have to consider all the elementary rights to invoke object methods. Moreover, since only one request has to be checked by the authorization server for each composite operation, the communication overhead scales well.
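The one-check-per-composite-operation idea can be sketched as follows. This is our own illustration, not the paper's protocol: rights are administered at composite-operation granularity only, and a single successful authorization yields one capability per atomic step (the capability format here is a placeholder string).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: administrators assign rights per composite operation; one
// authorization check then yields capabilities for all its atomic steps.
public class AuthorizationServerSketch {
    private final Map<String, Set<String>> rights = new HashMap<>();      // user -> composite ops
    private final Map<String, List<String>> expansion = new HashMap<>();  // composite op -> atomic ops

    public void allow(String user, String compositeOp) {
        rights.computeIfAbsent(user, u -> new HashSet<>()).add(compositeOp);
    }

    public void define(String compositeOp, List<String> atomicOps) {
        expansion.put(compositeOp, atomicOps);
    }

    // One check per composite operation; on success the caller receives
    // one (placeholder) capability per atomic operation it will need.
    public List<String> authorize(String user, String compositeOp) {
        if (!rights.getOrDefault(user, Set.of()).contains(compositeOp))
            throw new SecurityException("denied");
        List<String> caps = new ArrayList<>();
        for (String op : expansion.getOrDefault(compositeOp, List.of()))
            caps.add("cap:" + user + ":" + op);
        return caps;
    }

    public static void main(String[] args) {
        AuthorizationServerSketch srv = new AuthorizationServerSketch();
        srv.define("purchase", List.of("Catalog.reserve", "Bank.debit"));
        srv.allow("alice", "purchase");
        System.out.println(srv.authorize("alice", "purchase"));
    }
}
```

Note that the administrator never touches the expansion into elementary rights when granting access, which is exactly why administration stays simple.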
The authorization server is a
Reference monitor
There is a local reference monitor on each participating host. The reference monitor is responsible for granting or denying atomic operations (i.e., local object method invocations), according to capabilities generated by the authorization server (or by the reference monitor itself, for transient local objects). In the context of wide-area networks (such as the Internet), the implementation of such a reference monitor is quite complex since, due to the heterogeneity of connected hosts, it would
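A minimal sketch of capability checking at the local monitor is given below. It is only an illustration of the principle: we model unforgeability with an HMAC under a key shared with the authorization server, whereas the real scheme's capabilities and its tamperproof (JavaCard-based) checker are more elaborate; all names here are ours.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

// Sketch: the local reference monitor grants an atomic operation (a local
// object method invocation) only if it comes with a capability whose MAC
// verifies. HMAC stands in for the scheme's actual unforgeability mechanism.
public class LocalMonitorSketch {
    private final byte[] key;  // shared with the authorization server

    public LocalMonitorSketch(byte[] key) { this.key = key; }

    // Capability = object.method bound to a MAC over that string.
    public String issueCapability(String object, String method) {
        return object + "." + method + ":" + mac(object + "." + method);
    }

    // Grant the invocation only if the presented capability verifies.
    public boolean checkInvocation(String object, String method, String capability) {
        return issueCapability(object, method).equals(capability);
    }

    private String mac(String data) {
        try {
            Mac h = Mac.getInstance("HmacSHA256");
            h.init(new SecretKeySpec(key, "HmacSHA256"));
            return HexFormat.of().formatHex(h.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        LocalMonitorSketch rm = new LocalMonitorSketch("shared-key".getBytes(StandardCharsets.UTF_8));
        String cap = rm.issueCapability("Account", "debit");
        System.out.println(rm.checkInvocation("Account", "debit", cap));   // true
        System.out.println(rm.checkInvocation("Account", "credit", cap));  // false
    }
}
```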
Prototype: an implementation example
In this prototype, we chose to implement all the objects of a complete distributed application in Java. Invocations between objects are carried out through Java RMIs (Remote Method Invocations). The tamperproof hardware-resident part of the reference monitor is a JavaCard.
Related work
International standard ISO 10181-3 [19] defines a general framework for access control in open interconnected systems. This standard distinguishes two access control functions: the Access Control Decision Function (ADF) and the Access Control Enforcement Function (AEF). The AEF ensures that only allowable accesses, as determined by the ADF, are performed on the target. The explicit separation of the ADF and AEF functions is one of the characteristics of our authorization scheme. In our scheme,
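The ADF/AEF separation can be sketched in a few lines. This is a generic illustration of the ISO 10181-3 split, not the paper's components: the ADF only answers allow/deny, and the AEF guards the target, performing the access only when the ADF allows it.

```java
// Sketch of the ISO 10181-3 separation: the Access Control Decision
// Function (ADF) decides; the Access Control Enforcement Function (AEF)
// guards the target and performs only ADF-approved accesses.
public class AdfAefSketch {
    interface DecisionFunction {                 // ADF: pure allow/deny
        boolean allowed(String initiator, String target, String op);
    }

    static class EnforcementFunction {           // AEF guarding a target
        private final DecisionFunction adf;
        EnforcementFunction(DecisionFunction adf) { this.adf = adf; }

        String access(String initiator, String target, String op) {
            if (!adf.allowed(initiator, target, op)) return "denied";
            return "performed " + op + " on " + target;  // the actual access
        }
    }

    public static void main(String[] args) {
        DecisionFunction adf = (i, t, op) -> i.equals("alice") && op.equals("read");
        EnforcementFunction aef = new EnforcementFunction(adf);
        System.out.println(aef.access("alice", "doc", "read"));
        System.out.println(aef.access("bob", "doc", "read"));
    }
}
```

Keeping the two behind separate interfaces is what lets a scheme place them on different components, e.g. a decision service distinct from the monitor that enforces its verdicts.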
Conclusion
In this paper, we have proposed an architecture of an authorization service designed for composite operations involving many Internet partners. The proposed architecture matches our original objectives: (1) finding a compromise between a totally centralized and a totally distributed authorization scheme (both of which have significant drawbacks), and (2) designing an authorization scheme that is able to distribute proofs of authorization to the participants of a composite operation whilst enforcing the least privilege principle.
References (33)
- et al., Fast reconfigurable systolic hardware for modular multiplication and exponentiation, Journal of Systems Architecture (2003)
- US Department of Defense, Trusted Computer Security Evaluation Criteria (TCSEC), 5200.28-STD, December...
- J. Rushby, B. Randell, A distributed secure system, IEEE Computer (1983)
- Trusted Network Interpretation of the Trusted Computer Security Evaluation Criteria, National Computer Security Center,...
- V. Nicomette, Y. Deswarte, An authorization scheme for distributed object systems, in: Proceedings of the International...
- V. Nicomette, Y. Deswarte, Symbolic rights and vouchers for access control in distributed object systems, in: J....
- Y. Deswarte, N. Abghour, V. Nicomette, D. Powell, An intrusion-tolerant authorization scheme for Internet applications,...
- B. Neuman, Proxy-based authorization and accounting for distributed systems, in: Proceedings of the 13th International...
- J. Tardo, K. Alagappan, SPX: global authentication using public key certificates, in: Proceedings of the IEEE Symposium...
- M. Gasser, E. McDermott, An architecture for practical delegation in a distributed system, in: Proceedings of the IEEE...
Cited by (1)
A framework for the attack tolerance of cloud applications based on web services
2021, Electronics (Switzerland)
Vincent Nicomette is currently a teacher of Institut National des Sciences Appliquées de Toulouse (INSA) and member of the “Dependable Computing and Fault Tolerance” research group at LAAS–CNRS in Toulouse, France. He received his Ph.D. degree from Institut National Polytechnique de Toulouse (France) in 1996 and his Diploma of Computer Engineer from ENSEEIHT (France) in 1992.
His research mainly deals with security in distributed computing systems.
Yves Deswarte is currently a Research Director of CNRS, member of the “Dependable Computing and Fault Tolerance” research group at LAAS–CNRS in Toulouse, France. Successively at CII, CIMSA, INRIA and LAAS, his research work has dealt mainly with fault tolerance and security in distributed computing systems. Recently, his main research interests were in intrusion tolerance, quantitative security evaluation, dependability evaluation criteria, protection of safety-critical systems with multiple levels of integrity, flexible security policies, and privacy-preserving authorization schemes. He is the author or co-author of more than 100 international publications in these areas. He has been consultant for several organizations in France and for SRI-international in the USA. He has been a member of many international conference program committees and has chaired several of them. He is a senior member of SEE, a member of the IEEE TC on Security and Privacy and a member of the ACM SIGSAC. He is representing the IEEE Computer Society at IFIP TC-11 (Technical Committee on Security and Protection in Information Processing Systems).
David Powell is Directeur de Recherche at CNRS. He received his Bachelor of Science degree in Electronic Engineering from the University of Southampton, England in 1972, a Specialty Doctorate degree from the Toulouse Paul Sabatier University in 1975, and his Docteur ès-Sciences degree from the Toulouse National Polytechnic Institute in 1981. He is a member of the Dependable Computing and Fault Tolerance Research Group at LAAS–CNRS, Toulouse, France. His research interests include dependability in the face of accidental and intentional human faults, fault-tolerant distributed systems, and dependability assessment. His current focus is mobile computing and autonomous robot systems. He has authored or co-authored 3 books, 125 papers and 19 book chapters, managed several national and European research contracts, and acted as a consultant for companies in France and for the European Commission. Dr. Powell is a member of IEEE, ACM and the IFIP 10.4 working group on Dependable Computing and Fault Tolerance.
Noreddine Abghour is currently associate professor in the Faculty of Science of Hassan II University, Morocco. He received his Ph.D. degree from Institut National Polytechnique de Toulouse (France) in 2004. His research mainly deals with authorization schemes in distributed computing systems.
Christophe Zanon is currently research engineer at LAAS–CNRS, Toulouse. He received his Masters degree in 2001. He was involved in the European projects MAFTIA (Malicious and Accidental Fault Tolerance for Internet Applications) from 2000 to 2003, and PRIME (Privacy and Identity Management for Europe) from 2004 to 2008.