DOI: 10.1145/2258996
ISMM '12: Proceedings of the 2012 international symposium on Memory Management
ACM 2012 Proceedings
Publisher: Association for Computing Machinery, New York, NY, United States
Conference:
ISMM '12: International Symposium on Memory Management, Beijing, China, June 15-16, 2012
ISBN:
978-1-4503-1350-6
Published:
15 June 2012

Abstract

It is with great pleasure that we welcome you to the ACM SIGPLAN 2012 International Symposium on Memory Management (ISMM'12). This year continues ISMM's tradition as the top venue for presenting research results on memory management.

This year ISMM'12 received 30 submissions, of which the program committee selected 12 to appear at the conference. These papers cover diverse and interesting aspects of memory management, including multicore, program analysis, and mechanisms such as read and write barriers.

We used a double-blind reviewing process, an external review committee (XRC) to add reviewer expertise, and a rebuttal process, all of which worked smoothly and efficiently. Each program committee (PC) member reviewed seven or eight papers in a four-week time period. In turn, the authors were given a rebuttal period of three days, during which they could answer reviewer questions. The rebuttal was not limited in content, but was limited in length.

The XRC followed the effort started in 2008 to increase the breadth of the reviewer pool and the depth of reviewer expertise. Unlike PC members, XRC reviewers did not attend the PC meeting. The XRC provided expert reviews, but was established ahead of time rather than on an ad-hoc basis. Each XRC member was assigned three to four papers to review. This light reviewing load encouraged XRC members to focus on producing especially careful critiques. All submissions received at least three PC reviews and at least one XRC review. All PC and XRC members had the opportunity to revise their reviews based on the rebuttal and on discussions prior to and during the PC meeting. The XRC played no part in the final decision-making for non-PC submissions.

All non-PC papers were discussed at the PC meeting on March 23, 2012, in Seattle. All PC members attended the entire meeting. PC members who had a conflict with a submission left the room during the discussions of their conflict papers. The software also prevented conflicted PC members from reading reviews or learning the reviewers of conflicted papers. Only the committee members who reviewed a paper made its acceptance decision. All authors were notified of the decisions by email on March 23.

PC co-authored submissions were allowed; we received four, of which three were accepted. The XRC provided four to five reviews for each of these submissions and met on a conference call on March 22, 2012. These papers were held to the customary higher standard. The General Chair, Martin Vechev, handled the conflicts of interest with the Program Chair. He assigned the reviewers and led the telephone discussions of these papers. Only XRC members who reviewed PC co-authored submissions participated in the call and in the final decision.

We were very happy with blind reviewing, rebuttal, and the XRC mechanism. In particular, handling PC submissions with non-PC reviewers in a separate meeting worked quite well. In the discussion of each paper, the program chair asked a PC member to summarize the paper, its strengths and weaknesses, and the authors' response. In some cases the authors' response strongly influenced the final decision.

SESSION: Keynote address
keynote
Why is your web browser using so much memory?

Browsers are the operating systems of the Web. They support a vast universe of applications written in a modern garbage-collected programming language. Browsers expose a rich platform API mostly implemented in C++. Browsers are also consumer software ...

SESSION: Parallel memory management
research-article
Memory management for many-core processors with software configurable locality policies

As processors evolve towards higher core counts, architects will develop more sophisticated memory systems to satisfy the cores' increasing thirst for memory bandwidth. Early many-core processor designs suggest that future memory systems will likely ...

research-article
The Myrmics memory allocator: hierarchical, message-passing allocation for global address spaces

Constantly increasing hardware parallelism poses more and more challenges to programmers and language designers. One approach to harness the massive parallelism is to move to task-based programming models that rely on runtime systems for dependency ...

research-article
GPUs as an opportunity for offloading garbage collection

GPUs have become part of most commodity systems. Nonetheless, they are often underutilized when not executing graphics-intensive or special-purpose numerical computations, which are rare in consumer workloads. Emerging architectures, such as integrated ...

SESSION: Memory management mechanisms
research-article
Barriers reconsidered, friendlier still!

Read and write barriers mediate access to the heap allowing the collector to control and monitor mutator actions. For this reason, barriers are a powerful tool in the design of any heap management algorithm, but the prevailing wisdom is that they impose ...
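The truncated abstract above mentions write barriers that let the collector monitor mutator stores. As a generic illustration of the idea (not this paper's design), a card-marking write barrier records which heap region a store touched so the collector can rescan only dirty cards; the card size and flat-heap model here are assumptions for the sketch.

```python
# Minimal card-marking write barrier sketch (illustrative only; the card
# size and dict-based heap/card table are assumptions, not the paper's design).
CARD_SIZE = 512  # bytes of heap covered by one card (assumed)

card_table = {}  # card index -> dirty flag; a real VM uses a flat byte array

def write_barrier(obj_addr, field_offset, new_value, heap):
    """Mark the card covering the stored-to address, then perform the store."""
    card = (obj_addr + field_offset) // CARD_SIZE
    card_table[card] = True                      # record the mutation
    heap[obj_addr + field_offset] = new_value    # the actual store

heap = {}
write_barrier(1024, 8, "ptr_to_young_obj", heap)
```

At collection time, only cards flagged in `card_table` need rescanning, which is the cost/precision trade-off barrier designs tune.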

research-article
Eliminating read barriers through procrastination and cleanliness

Managed languages typically use read barriers to interpret forwarding pointers introduced to keep track of copied objects. For example, in a multicore environment with thread-local heaps and a global, shared heap, an object initially allocated on a ...
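The abstract above describes read barriers that interpret forwarding pointers for copied objects. A minimal sketch of the general mechanism (a Brooks-style indirection field, which is an assumption here, not this paper's implementation) looks like:

```python
# Sketch of a forwarding-pointer read barrier: every object carries a
# forwarding field, and every read dereferences it (hypothetical structure).
class Obj:
    def __init__(self, value):
        self.value = value
        self.forward = self  # initially an object forwards to itself

def read_barrier(obj):
    return obj.forward       # one extra load on every object access

# When the collector copies an object, it installs a forwarding pointer:
old = Obj("payload")
new = Obj("payload")
old.forward = new

# Mutators going through the barrier transparently see the new copy.
moved = read_barrier(old)
```

The per-access indirection is exactly the overhead the paper's techniques aim to eliminate.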

research-article
Scalable concurrent and parallel mark

Parallel marking algorithms use multiple threads to walk through the object heap graph and mark each reachable object as live. Parallel marker threads mark an object "live" by atomically setting a bit in a mark-bitmap or a bit in the object header. Most ...
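The abstract above describes marker threads atomically setting bits in a mark-bitmap. A toy version of that phase, with a lock standing in for the hardware atomic test-and-set a real collector would use (the graph and thread count are invented for illustration), might look like:

```python
import threading
from collections import deque

# Toy parallel mark: workers walk an object graph and mark reachable nodes
# in a shared bitmap. Nodes 5 and 6 are deliberately unreachable.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [], 4: [0], 5: [6], 6: []}
marked = bytearray(len(graph))
bitmap_lock = threading.Lock()
work = deque([0])                  # root set
work_lock = threading.Lock()

def try_mark(obj):
    with bitmap_lock:              # stand-in for atomic test-and-set
        if marked[obj]:
            return False           # another thread already marked it
        marked[obj] = 1
        return True

def marker():
    while True:
        with work_lock:
            if not work:
                return
            obj = work.popleft()
        for child in graph[obj]:
            if try_mark(child):    # only the winning thread enqueues it
                with work_lock:
                    work.append(child)

try_mark(0)                        # mark the root before tracing
threads = [threading.Thread(target=marker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The test-and-set ensures each object is enqueued at most once, which is what keeps concurrent markers from duplicating work.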

SESSION: Reference counting, real time, & memory characteristics
research-article
Down for the count? Getting reference counting back in the ring

Reference counting and tracing are the two fundamental approaches that have underpinned garbage collection since 1960. However, despite some compelling advantages, reference counting is almost completely ignored in implementations of high performance ...
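The abstract above contrasts reference counting with tracing. For context, the naive scheme it improves upon (this sketch is the textbook version, not the paper's optimized design, which the title suggests narrows the performance gap) is:

```python
# Naive reference counting sketch (textbook version; field names assumed).
# Note it cannot reclaim cycles -- one classic weakness the literature addresses.
class RCObject:
    def __init__(self, name):
        self.name = name
        self.rc = 0
        self.children = []   # outgoing references held by this object
        self.freed = False

def incref(obj):
    obj.rc += 1

def decref(obj):
    obj.rc -= 1
    if obj.rc == 0:          # last reference gone: free and recurse
        obj.freed = True
        for child in obj.children:
            decref(child)

a, b = RCObject("a"), RCObject("b")
a.children.append(b); incref(b)   # a -> b
incref(a)                          # one root reference to a
decref(a)                          # dropping the root reclaims a, then b
```

Every pointer store pays an increment/decrement, which is the per-operation overhead high-performance designs try to defer or coalesce.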

research-article
The Collie: a wait-free compacting collector

We describe the Collie collector, a fully concurrent compacting collector that uses transactional memory techniques to achieve wait-free compaction. The collector uses compaction as the primary means of reclaiming unused memory, and performs "individual ...

research-article
new Scala() instance of Java: a comparison of the memory behaviour of Java and Scala programs

While often designed with a single language in mind, managed runtimes like the Java virtual machine (JVM) have become the target of not one but many languages, all of which benefit from the runtime's services. One of these services is automatic memory ...

SESSION: Caches and analysis
research-article
A generalized theory of collaborative caching

Collaborative caching allows software to use hints to influence cache management in hardware. Previous theories have shown that such hints observe the inclusion property and can obtain optimal caching if the access sequence and the cache size are known ...
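The abstract above describes software hints influencing hardware cache management. As a toy model of the concept (the specific "evict-me" hint and LRU policy here are assumptions for illustration, not the paper's theory), a hinted access can be demoted to the eviction end of an LRU cache so streaming data does not pollute it:

```python
from collections import OrderedDict

# Toy collaborative cache: each access may carry an eviction hint.
# Normal accesses go to the MRU end; hinted ones are demoted to LRU.
class HintedCache:
    def __init__(self, size):
        self.size = size
        self.blocks = OrderedDict()   # ordered LRU -> MRU

    def access(self, addr, evict_me=False):
        hit = addr in self.blocks
        if hit:
            del self.blocks[addr]
        elif len(self.blocks) >= self.size:
            self.blocks.popitem(last=False)        # evict the LRU block
        self.blocks[addr] = True                   # insert at MRU end
        if evict_me:
            self.blocks.move_to_end(addr, last=False)  # demote to LRU
        return hit

h = HintedCache(2)
h.access("A")
h.access("B")
h.access("C", evict_me=True)   # hinted: C will be the next victim
h.access("D")                  # evicts hinted C, preserving B
```

With the hint, the frequently reused block B survives the one-time access C, which is the kind of software/hardware collaboration the theory formalizes.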

research-article
Exploiting the structure of the constraint graph for efficient points-to analysis

Points-to analysis is a key compiler analysis. Several memory related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it ...
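The abstract above describes building a constraint graph of pointer variables and updating it during analysis. A minimal inclusion-based (Andersen-style) worklist solver over copy edges (a generic sketch of the analysis family, not this paper's structure-exploiting algorithm) can be written as:

```python
from collections import defaultdict

# Minimal inclusion-based points-to sketch: nodes are pointer variables,
# a copy edge (src, dst) means "dst = src", i.e. pts(src) is a subset of
# pts(dst), and a worklist propagates sets to a fixed point.
def solve(addr_of, copy_edges):
    # addr_of: var -> set of abstract objects it takes the address of
    pts = defaultdict(set)
    succ = defaultdict(set)
    for src, dst in copy_edges:
        succ[src].add(dst)
    worklist = []
    for var, objs in addr_of.items():
        pts[var] |= objs
        worklist.append(var)
    while worklist:
        v = worklist.pop()
        for d in succ[v]:
            before = len(pts[d])
            pts[d] |= pts[v]           # propagate along the copy edge
            if len(pts[d]) != before:  # set grew: reprocess downstream
                worklist.append(d)
    return pts

# p = &o1; q = &o2; r = p; r = q
result = solve({"p": {"o1"}, "q": {"o2"}}, [("p", "r"), ("q", "r")])
```

The cost of such solvers is dominated by repeated propagation over the graph, which is why exploiting the graph's structure matters for efficiency.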

research-article
Identifying the sources of cache misses in Java programs without relying on hardware counters

Cache miss stalls are one of the major sources of performance bottlenecks for multicore processors. A Hardware Performance Monitor (HPM) in the processor is useful for locating the cache misses, but is rarely used in the real world for various reasons. ...

Contributors
  • Swiss Federal Institute of Technology, Zurich
  • Google LLC

    Acceptance Rates

    Overall Acceptance Rate 72 of 156 submissions, 46%
    Year       Submitted   Accepted   Rate
    ISMM '14   22          11         50%
    ISMM '13   22          11         50%
    ISMM '09   32          15         47%
    ISMM '02   41          17         41%
    ISMM '00   39          18         46%
    Overall    156         72         46%