It is with great pleasure that we welcome you to the ACM SIGPLAN 2012 International Symposium on Memory Management (ISMM'12). This year continues ISMM's tradition as the top venue for presenting research results on memory management.
This year, ISMM'12 received 30 submissions, of which the program committee selected 12 to appear at the conference. These papers cover diverse and interesting aspects of memory management, including multicore systems, program analysis, and mechanisms such as read and write barriers.
We used a double-blind reviewing process, an external review committee (XRC) to add reviewer expertise, and a rebuttal process, all of which worked smoothly and efficiently. Each program committee (PC) member reviewed seven or eight papers over a four-week period. The authors were then given a three-day rebuttal period, during which they could answer reviewers' questions. Rebuttals were not limited in content, but were limited in length.
The XRC continued the effort started in 2008 to increase the breadth of the reviewer pool and the depth of reviewer expertise. Unlike PC members, XRC reviewers did not attend the PC meeting. The XRC provided expert reviews, but was established ahead of time rather than on an ad-hoc basis. Each XRC member was assigned three to four papers to review. This light reviewing load encouraged XRC members to focus on producing especially careful critiques. All submissions received at least three PC reviews and at least one XRC review. All PC and XRC members had the opportunity to revise their reviews based on the rebuttal and on discussions prior to and during the PC meeting. The XRC played no part in the final decision-making for non-PC submissions.
All non-PC papers were discussed at the PC meeting on March 23, 2012, in Seattle. All PC members attended the entire meeting. PC members who had a conflict with a submission left the room during the discussion of their conflicted papers. The reviewing software also prevented conflicted PC members from reading reviews of, or knowing the reviewers of, their conflicted papers. Only the committee members who reviewed a paper made its acceptance decision. All authors were notified of the decisions by email on March 23.
PC co-authored submissions were allowed; we received four, of which three were accepted. The XRC provided four to five reviews for each of these submissions and met on a conference call on March 22, 2012. These papers were held to the customary higher standard. The General Chair, Martin Vechev, handled the conflicts of interest with the Program Chair: he assigned the reviewers and led the telephone discussions of these papers. Only XRC members who reviewed PC co-authored submissions participated in the call and in the final decision.
We were very happy with double-blind reviewing, the rebuttal process, and the XRC mechanism. In particular, handling PC submissions with non-PC reviewers in a separate meeting worked quite well. In the discussion of each paper, the program chair asked a PC member to summarize the paper, its strengths and weaknesses, and the authors' response. In some cases the authors' response strongly influenced the final decision.
Proceeding Downloads
Why is your web browser using so much memory?
Browsers are the operating systems of the Web. They support a vast universe of applications written in a modern garbage-collected programming language. Browsers expose a rich platform API mostly implemented in C++. Browsers are also consumer software ...
Memory management for many-core processors with software configurable locality policies
As processors evolve towards higher core counts, architects will develop more sophisticated memory systems to satisfy the cores' increasing thirst for memory bandwidth. Early many-core processor designs suggest that future memory systems will likely ...
The myrmics memory allocator: hierarchical, message-passing allocation for global address spaces
- Spyros Lyberis,
- Polyvios Pratikakis,
- Dimitrios S. Nikolopoulos,
- Martin Schulz,
- Todd Gamblin,
- Bronis R. de Supinski
Constantly increasing hardware parallelism poses more and more challenges to programmers and language designers. One approach to harness the massive parallelism is to move to task-based programming models that rely on runtime systems for dependency ...
GPUs as an opportunity for offloading garbage collection
GPUs have become part of most commodity systems. Nonetheless, they are often underutilized when not executing graphics-intensive or special-purpose numerical computations, which are rare in consumer workloads. Emerging architectures, such as integrated ...
Barriers reconsidered, friendlier still!
Read and write barriers mediate access to the heap, allowing the collector to control and monitor mutator actions. For this reason, barriers are a powerful tool in the design of any heap management algorithm, but the prevailing wisdom is that they impose ...
Eliminating read barriers through procrastination and cleanliness
Managed languages typically use read barriers to interpret forwarding pointers introduced to keep track of copied objects. For example, in a multicore environment with thread-local heaps and a global, shared heap, an object initially allocated on a ...
Scalable concurrent and parallel mark
Parallel marking algorithms use multiple threads to walk through the object heap graph and mark each reachable object as live. Parallel marker threads mark an object "live" by atomically setting a bit in a mark-bitmap or a bit in the object header. Most ...
Down for the count? Getting reference counting back in the ring
Reference counting and tracing are the two fundamental approaches that have underpinned garbage collection since 1960. However, despite some compelling advantages, reference counting is almost completely ignored in implementations of high performance ...
The Collie: a wait-free compacting collector
We describe the Collie collector, a fully concurrent compacting collector that uses transactional memory techniques to achieve wait-free compaction. The collector uses compaction as the primary means of reclaiming unused memory, and performs "individual ...
new Scala() instance of Java: a comparison of the memory behaviour of Java and Scala programs
- Andreas Sewe,
- Mira Mezini,
- Aibek Sarimbekov,
- Danilo Ansaloni,
- Walter Binder,
- Nathan Ricci,
- Samuel Z. Guyer
While often designed with a single language in mind, managed runtimes like the Java virtual machine (JVM) have become the target of not one but many languages, all of which benefit from the runtime's services. One of these services is automatic memory ...
A generalized theory of collaborative caching
Collaborative caching allows software to use hints to influence cache management in hardware. Previous theories have shown that such hints observe the inclusion property and can obtain optimal caching if the access sequence and the cache size are known ...
Exploiting the structure of the constraint graph for efficient points-to analysis
Points-to analysis is a key compiler analysis. Several memory related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it ...
Identifying the sources of cache misses in Java programs without relying on hardware counters
Cache miss stalls are one of the major sources of performance bottlenecks for multicore processors. A Hardware Performance Monitor (HPM) in the processor is useful for locating the cache misses, but is rarely used in the real world for various reasons. ...