ABSTRACT
In this paper we investigate the problem of synchronising transactions in an object base. An object base is a collection of objects, much the way a database is a collection of data. An object, for our purposes, consists of a collection of variables (whose values at any point in time comprise the state of that object) and a set of operations, called methods, that are the only means of accessing (sensing or modifying) the object's variables.
There is a certain sense in which a traditional database is an object base. It consists of “objects” (records, tuples or what have you) each of which has a state that can be accessed only through the operations Read and Write. The main difference is that in an object base, each object supplies its own methods and these are arbitrary. In particular, a method for a certain object may call methods of other objects to carry out its task. In contrast to certain models in which objects correspond to “levels of abstraction”, our model is completely general in this respect: for example, it is permissible for a method of object A to call a method of object B which, in turn, may call some other method of object A again.
One implication of this difference between data and object bases is that in the latter the assumption, commonly made in the former, that the operations which manipulate the state of the objects are short enough to be implemented serially (one at a time) is no longer valid. A related implication is that in object bases we are faced with the necessity of dealing with nested transactions, since the invocation of one method may result in further method invocations.
Another, less fundamental, difference between data and object bases is that, in addition to being of uniform type, the “objects” of a database are usually assumed to be of uniform size as well. In an object base one can imagine objects of widely differing sizes. A clock and the New York City telephone directory could be objects differing in size by orders of magnitude, yet co-existing in the same object base.
In spite of these differences it is possible to approach concurrency control in an object base in the following way. Each object is viewed as a database item. Further, each method invocation is treated as a group of Read or Write operations on those data items that were accessed as a result of that method invocation. With these analogies, any conventional database concurrency control method (two-phase locking, timestamp ordering, certification, and the whole lot) can be employed to synchronise concurrent transactions in the object base. This approach has the virtue of simplicity and may be well-suited to certain environments. It is, for example, the approach taken in the GemStone project and product (cf Maier and Stein [1987], Purdy et al [1987]).
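As a concrete illustration of this analogy, the sketch below (in Python; all names are illustrative, not taken from any particular system) treats each data item touched by a method invocation as a lockable database item and applies strict two-phase locking: a transaction holds every lock it acquires until it commits, at which point all of its locks are released together.

```python
import threading

class LockManager:
    """Minimal strict two-phase locking over data items accessed by
    method invocations: a transaction takes an exclusive lock on each
    item before touching it, and releases all locks only at commit."""
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}   # item -> threading.Lock
        self._held = {}    # transaction id -> set of locked items

    def lock(self, txn, item):
        with self._guard:
            lk = self._locks.setdefault(item, threading.Lock())
        lk.acquire()       # blocks while another transaction holds the item
        with self._guard:
            self._held.setdefault(txn, set()).add(item)

    def commit(self, txn):
        with self._guard:
            items = self._held.pop(txn, set())
        for item in items:
            self._locks[item].release()

mgr = LockManager()
mgr.lock("T1", ("A", "x"))   # T1's method on object A accesses variable x
mgr.lock("T1", ("B", "y"))   # a nested call accesses variable y of object B
mgr.commit("T1")             # all of T1's locks are released together
```

Here a data item is named by an (object, variable) pair, so a method invocation simply becomes a group of `lock` calls on the items it reads or writes, exactly as in the mapping described above.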
We are interested in exploring approaches to concurrency control in object bases which take into account their special features and differences from databases. The hope is that this will lead to more efficient techniques. More specifically, we would like to consider mechanisms that:
(a) take into account the nested nature of transactions;
(b) allow methods accessing an object to execute concurrently (but correctly). This seems especially important as multiprocessors become available, since forcing serial access to an object's methods restricts parallelism (bear in mind that each method could be a lengthy procedure);
(c) are modular, in that each object is responsible for synchronising the invocations of its own methods as it sees fit.
The first two of these points have been considered by others as well. For example, Argus (cf Liskov and Scheifler [1983]) uses a synchronisation algorithm which is an adaptation of strict two-phase locking in a nested transaction environment. In addition, Argus allows multiple concurrent invocations of an object's methods.
We believe that the third point is a novel idea, so we elaborate on it a bit. In accordance with (b), multiple invocations of an object's methods may be active simultaneously. As these methods may operate on common data (the object's variables), they must be synchronised. That is, if we view the object's variables as a database, and the simultaneous method invocations as concurrent transactions, we have to solve the serialisability problem within a single object. We call this intra-object synchronisation.
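A minimal sketch of intra-object synchronisation, in Python with illustrative names: the object treats its own variables as a private database and synchronises concurrent invocations of its methods itself, here with a single internal lock (any algorithm suited to the object's semantics could take the lock's place).

```python
import threading

class Account:
    """An object that performs its own intra-object synchronisation:
    its variables are touched only through its methods, and the object
    decides how those concurrent invocations are serialised."""
    def __init__(self):
        self._lock = threading.Lock()
        self._balance = 0            # the object's state (its variables)

    def deposit(self, amount):       # a method: the only access path
        with self._lock:             # invocations serialised by the object
            self._balance += amount

    def balance(self):
        with self._lock:
            return self._balance

acct = Account()
threads = [threading.Thread(target=acct.deposit, args=(1,))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(acct.balance())                # 100: no deposit is lost
```

The point of modularity is that nothing outside the object needs to know it used a lock: a B-tree object, say, could instead run a specialised concurrent B-tree algorithm behind the same method interface.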
It is not difficult to see that simply ensuring serialisability within each object is not, in itself, enough to guarantee serialisability of the overall computation. For instance, there may be two transactions T1 and T2, each accessing objects A and B, so that in object A the concurrent execution of the two transactions' method invocations serialises T1 before T2, while the reverse holds in object B. The effect of such an execution is not the same as running the two transactions serially in either order, and the overall computation is therefore not serialisable, even though the computation at each object is. Thus, in addition to intra-object synchronisation, it is also necessary to exercise some inter-object synchronisation, whose goal will be to ensure the compatibility of the independent decisions made at each object.
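The example above can be checked mechanically: build a global serialisation graph with an edge Ti → Tj whenever some object serialises Ti before Tj, and test it for a cycle. A sketch in Python (the function names are ours, for illustration):

```python
def serialisation_graph(orders):
    """Each object contributes an edge (Ti, Tj) for every pair of
    transactions it serialised with Ti before Tj."""
    edges = set()
    for order in orders.values():
        for i, ti in enumerate(order):
            for tj in order[i + 1:]:
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    """Depth-first search for a cycle among the transactions."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set())
    visiting, done = set(), set()
    def dfs(u):
        visiting.add(u)
        for v in graph[u]:
            if v in visiting or (v not in done and dfs(v)):
                return True
        visiting.discard(u)
        done.add(u)
        return False
    return any(dfs(u) for u in graph if u not in done)

# Object A serialises T1 before T2; object B serialises T2 before T1.
orders = {"A": ["T1", "T2"], "B": ["T2", "T1"]}
print(has_cycle(serialisation_graph(orders)))   # True: not serialisable
```

With A ordering T1 before T2 and B the reverse, the graph contains the cycle T1 → T2 → T1, so no serial order is consistent with both objects' decisions; this is precisely the compatibility that inter-object synchronisation must enforce.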
The potential advantage of separating intra- from inter-object synchronisation is seen if one recalls our previous observation regarding the non-uniformity of objects in an object base. Accordingly, we may be able to allow each object to use, for intra-object synchronisation, the most suitable algorithm, depending on its semantics, the implementation of its methods and so on. For example, an object representing a dictionary (with operations such as Insert, Search and Delete) might be implemented as a B-tree. Thus, one of the many special B-tree algorithms could be used for intra-object synchronisation by this object (cf Bayer and Schkolnick [1977], Ellis [1980], Kung and Lehman [1980], Kwong and Wood [1982], Lehman and Yao [1981], Manber and Ladner [1982], Samadi [1976]). That object would enjoy the efficiency of the special algorithm, even though that algorithm is not applicable to other types of objects. Of course, the viability of such a scheme depends on the existence of efficient inter-object synchronisation schemes that can be used with disparate intra-object synchronisation algorithms. Even though we have no definitive answer to this question, our work so far leaves us hopeful that this may indeed be possible.
The remainder of this paper is organised as follows. In Section 2 we present a formal model for representing executions in object bases; we also define what serialisable (i.e., correct) executions are in this context. In Section 3 we present an extension of the Serialisability Theorem of “classical” concurrency control, which relates the serialisability of an execution to the acyclicity of a graph. We exhibit the utility of this theorem by using it to derive simple proofs of the correctness of Nested Two-Phase Locking (a slight generalisation of the algorithm used in Argus) and Nested Timestamp Ordering (an algorithm proposed by Reed [1978]). We also present a corollary to this theorem that we feel justifies our hopes for modular synchronisation mechanisms. We conclude with a description of further work in Section 4.
This work draws on three main sources: “classical” serialisability theory (e.g. Bernstein et al [1987], Papadimitriou [1986]), the theory of nested transactions (e.g. Beeri et al [1986], Lynch and Merritt [1986]), and object-oriented systems (e.g. Stefik and Bobrow [1986]).
- Bayer, R. and Schkolnick, M. "Concurrency of operations on B-trees". Acta Informatica, 9(1-21), 1977.
- Beeri, C., Bernstein, P.A. and Goodman, N. "A model for concurrency in nested transaction systems". TR-86-03, School of Information Technology, Wang Institute of Graduate Studies, Tyngsboro, MA, March 1986.
- Bernstein, P.A., Hadzilacos, V. and Goodman, N. Concurrency Control and Recovery in Database Systems. Addison-Wesley, Reading, MA, 1987.
- Ellis, C.S. "Concurrent search and insertion in 2-3 trees". Acta Informatica, 14(63-86), 1980.
- Kung, H.T. and Lehman, P.L. "Concurrent manipulation of binary search trees". TODS, 5 3(339-353), 1980.
- Kwong, Y.S. and Wood, D. "A method for concurrency in B-trees". IEEE TSE, SE-8 3(211-223), 1982.
- Lehman, P.L. and Yao, S.B. "Efficient locking for concurrent operations on B-trees". TODS, 6 4(650-670), Dec 1981.
- Liskov, B. and Scheifler, R. "Guardians and actions: Linguistic support for robust, distributed programs". TOPLAS, 5 3(381-404), July 1983.
- Lynch, N.A. and Merritt, M. "Introduction to the theory of nested transactions". TR-367, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA, July 1986.
- Maier, D. and Stein, J. "Development and implementation of an object-oriented DBMS". Manuscript, 1987.
- Manber, U. and Ladner, R.E. "Concurrency control in a dynamic search structure". Proc. of the 1st ACM Symp. on Principles of Database Systems, pp. 268-282, March 1982.
- Moss, J.E.B. Nested Transactions: An Approach to Reliable Distributed Computing. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, 1981.
- Papadimitriou, C.H. The Theory of Database Concurrency Control. Computer Science Press, Rockville, MD, 1986.
- Purdy, A., Schuchardt, B. and Maier, D. "Integrating an object-server with other worlds". TOOIS, 5 1(27-47), Jan 1987.
- Reed, D.P. Naming and Synchronization in a Decentralized Computer System. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, 1978.
- Samadi, B. "B-trees in a system with multiple users". IPL, 5 4(107-112), 1976.
- Shasha, D.E. Concurrent Algorithms for Search Structures. Ph.D. thesis, Harvard University, Cambridge, MA, 1984.
- Stefik, M. and Bobrow, D.G. "Object-oriented programming: Themes and variations". AI Magazine, 6 4(40-62), 1986.