DOI: 10.1145/3302541.3313102
Research article

FAB: Framework for Analyzing Benchmarks

Published: 27 March 2019

ABSTRACT

Performance evaluation is an integral part of computer architecture research. Rigorous evaluation is crucial for assessing novel architectures and is typically carried out using benchmark suites, each comprising a number of workloads with varying behavior and characteristics. Most studies evaluate a novel architecture across all workloads of a single benchmark suite. However, computer architects studying optimizations of specific microarchitectural components need to evaluate their proposals on workloads, drawn from multiple benchmark suites, that stress the component being optimized.

In this paper, we present the design and implementation of FAB, a framework built on Pin with a Python-based workflow. FAB enables user-driven analysis of benchmarks along multiple axes, such as instruction distributions and instruction types, through an interactive Python interface that lets users check for desired characteristics across multiple benchmark suites. FAB aims to provide a toolkit that allows computer architects to (1) select workloads with desired, user-specified behavior, and (2) create synthetic workloads with desired behavior that are grounded in real benchmarks.
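The abstract does not spell out FAB's query API, but the workflow it describes, filtering workloads by instruction-level characteristics across suites, can be illustrated with a short sketch. The profile data and function names below are hypothetical stand-ins, not FAB's actual interface (in FAB, per-workload instruction distributions would be collected by a Pin tool); the example only shows the kind of interactive Python query such a framework supports.

```python
# Minimal sketch of a user-driven benchmark query in the spirit of FAB.
# All data and names here are hypothetical illustrations, not FAB's API.

# Hypothetical per-workload instruction mixes (fractions of dynamic
# instructions), keyed by (suite, workload).
profiles = {
    ("SPEC CPU 2017", "mcf"):     {"load": 0.35, "store": 0.12, "branch": 0.20, "fp": 0.01},
    ("SPEC CPU 2017", "lbm"):     {"load": 0.28, "store": 0.15, "branch": 0.04, "fp": 0.45},
    ("PARSEC",        "canneal"): {"load": 0.33, "store": 0.10, "branch": 0.15, "fp": 0.02},
    ("NPB",           "cg"):      {"load": 0.30, "store": 0.08, "branch": 0.07, "fp": 0.38},
}

def select_workloads(profiles, predicate):
    """Return (suite, workload) pairs whose instruction mix satisfies the predicate."""
    return [key for key, mix in profiles.items() if predicate(mix)]

# Example query: workloads, across suites, that stress the memory subsystem
# (more than 40% of dynamic instructions are loads or stores).
memory_bound = select_workloads(
    profiles, lambda mix: mix["load"] + mix["store"] > 0.40
)
print(memory_bound)
```

Under these assumptions, an architect studying, say, a cache optimization could use a single predicate to pull memory-intensive workloads from SPEC, PARSEC, and NPB in one query, rather than sweeping every workload of one suite.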


Published in

ICPE '19: Companion of the 2019 ACM/SPEC International Conference on Performance Engineering
March 2019, 99 pages
ISBN: 9781450362863
DOI: 10.1145/3302541

        Copyright © 2019 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 252 of 851 submissions, 30%
