Abstract
We are now facing the prospect of no further increases in computer systems’ performance unless we harness and efficiently exploit the concurrency that comes from multiple cores on a chip. It should be emphasised that the issues in exploiting concurrency are scale invariant and relate to a few simple parameters. These are: the ratio of computation to communication throughput (both local and global), which determines how computation can be distributed; and the cost of concurrency creation relative to computation, which determines the grain size of the computation. Finally, we need virtual concurrency, or parallel slack, together with an efficient data-driven scheduling mechanism, in order to tolerate the latency of any asynchronous activity in the computation, such as access to remote data or resource sharing. The concurrency model used must also be well behaved, i.e. it must provide determinism of the values computed (although not necessarily of the time required to compute them) and support safe composition. Although these concurrency issues are scale invariant, it makes sense to implement them at the lowest scale possible, i.e. at the level of machine instructions, where overheads are measured in single cycles. In this way, all levels of concurrency may be exploited, which is important when dealing with legacy or constrained code. This presentation explores work undertaken at the University of Amsterdam in designing and evaluating micro-grids of micro-threaded processors that meet these requirements. Moreover, the concurrency model developed in this work, SVP, is free of deadlock under composition, and its implementations build in concerns normally considered to belong to the operating system. Namely, it builds in the abstraction of a place, which captures both resources and security, in using places and in controlling the execution of concurrency at a place.
As the implementation of the concurrency model also manages the mapping and scheduling of concurrency, it can truly be said that SVP is an operating system kernel built into the processor’s ISA.
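The latency-tolerance argument above can be illustrated outside SVP with ordinary threads (a Python analogy, not the micro-threaded implementation; the function names and latency figure are hypothetical). Given enough parallel slack, a scheduler can overlap many long-latency asynchronous operations, so total time approaches one latency period rather than their sum:

```python
# Illustration only (not SVP): parallel slack hides the latency of
# asynchronous operations such as remote data access.
import time
from concurrent.futures import ThreadPoolExecutor

REMOTE_LATENCY = 0.05  # hypothetical remote-access latency, in seconds

def remote_read(i):
    """Stand-in for a long-latency remote access returning a value."""
    time.sleep(REMOTE_LATENCY)
    return i * i

start = time.time()
# 16 outstanding operations (the "slack") scheduled concurrently.
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(remote_read, range(16)))
elapsed = time.time() - start

# All 16 reads overlap, so the total time is far less than the
# 16 * REMOTE_LATENCY a sequential execution would need.
assert elapsed < 16 * REMOTE_LATENCY
```

With no slack (one worker), the same loop would serialise and take the full sum of the latencies; SVP makes the analogous trade at the granularity of machine instructions rather than OS threads.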
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Jesshope, C. (2009). Building a Concurrency and Resource Allocation Model into a Processor’s ISA. In: César, E., et al. Euro-Par 2008 Workshops - Parallel Processing. Euro-Par 2008. Lecture Notes in Computer Science, vol 5415. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-00955-6_17
Print ISBN: 978-3-642-00954-9
Online ISBN: 978-3-642-00955-6