DOI: 10.1145/3477314.3507087
Research article

Sparse tensor storage by tensor unfolding

Published: 06 May 2022

ABSTRACT

Large-scale, high-dimensional, multi-aspect data are naturally represented as tensors, or multi-way arrays, in scientific and engineering computations. In practical applications such as signal processing, data mining, computer vision, and graph analysis, however, this tensor data is very sparse, and the degree of sparsity grows with the number of dimensions for large tensors. Storing such highly sparse multidimensional data and applying tensor operations to it is therefore an important research challenge for data scientists. In this paper, we describe a new sparse tensor storage format that provides storage benefits and is independent of the number of dimensions of the tensor. Efficient hash functions are designed to store sparse tensor data: they convert a higher-order tensor to a matrix by tensor unfolding, and using the unfolded matrix, we develop algorithms to store the nonzero data based on sparse fibers. We call our scheme Unfolded Compressed Row/Column Fiber (UCRF/UCCF). Experimental results on standard datasets show that our approach outperforms other important approaches.
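The abstract only sketches how the scheme works, so below is a minimal illustrative sketch, in Python, of the underlying idea as we read it: a mode-n unfolding maps each nonzero's multi-index to a (row, column) position in the unfolded matrix, and the nonzeros are then grouped by row, i.e. by fiber, in a CSR-like layout. The function names (unfold_index, build_row_fibers), the zero-based Kolda-style index mapping, and the dictionary-of-fibers layout are assumptions made for illustration, not the paper's actual UCRF/UCCF implementation or its hash functions.

# Hypothetical sketch: mode-n unfolding of sparse (COO) tensor data,
# with nonzeros grouped by unfolded row (fiber), CSR-style.
# This illustrates the general idea only; it is not the paper's UCRF code.

from collections import defaultdict

def unfold_index(idx, dims, mode):
    # Zero-based Kolda-style unfolding: the row is the mode index; the
    # column linearizes the remaining indices with strides over the
    # other dimensions.
    row = idx[mode]
    col, stride = 0, 1
    for k in range(len(dims)):
        if k == mode:
            continue
        col += idx[k] * stride
        stride *= dims[k]
    return row, col

def build_row_fibers(coords, values, dims, mode):
    # Map each nonzero into the unfolded matrix and bucket it by row,
    # so each bucket holds one sparse row fiber as (col, value) pairs.
    fibers = defaultdict(list)
    for idx, val in zip(coords, values):
        row, col = unfold_index(idx, dims, mode)
        fibers[row].append((col, val))
    return dict(fibers)

# Example: a 2 x 3 x 2 tensor with three nonzeros, unfolded along mode 0.
coords = [(0, 1, 0), (1, 2, 1), (0, 0, 1)]
values = [4.0, 7.0, 1.5]
print(build_row_fibers(coords, values, dims=(2, 3, 2), mode=0))
# -> {0: [(1, 4.0), (3, 1.5)], 1: [(5, 7.0)]}

A compressed variant would replace the dictionary with row-pointer and column-index arrays, as in CSR, and the column-fiber (UCCF) analogue would presumably group by unfolded column instead of row.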


Published in

SAC '22: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing
April 2022, 2099 pages
ISBN: 9781450387132
DOI: 10.1145/3477314
Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 1,650 of 6,669 submissions, 25%