ABSTRACT
Over recent years, tensors have emerged as the preferred data structure for model representation and computation in machine learning. However, current tensor models lack a formal basis: tensors are treated as arbitrary multidimensional data, processed by a large and ever-growing collection of functions added ad hoc. As a result, tensor frameworks degenerate into programming languages with a curiously cumbersome data model. This paper argues that a more formal basis for tensors and their computation brings important benefits. The proposed formalism is based on 1) a strong type system for tensors with named dimensions, 2) a common model of both dense and sparse tensors, and 3) a small, closed set of tensor functions, providing a general mathematical language in which higher-level functions can be expressed.
Together, these features provide ease of use, through static type verification with meaningful dimension names; improved interoperability, through a closed set of just six foundational tensor functions; and better support for performance optimization, since only that small core needs low-level optimization, optimizations can operate on arbitrary combinations of these functions, and named tensor dimensions without inherent order yield better mathematical properties. The proposed model is implemented as the model inference engine in the Vespa big data serving engine, where it runs models expressed directly in this language, as well as models expressed in the TensorFlow or ONNX formats.
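To make the named-dimension idea concrete, the following is a minimal, illustrative sketch (not Vespa's actual API, and the function names are assumptions, not the paper's six functions). It models a tensor as a map from addresses, where an address is a set of (dimension name, label) pairs, to values. This single representation covers both sparse and dense tensors, and because dimensions are identified by name rather than position, operations need no axis ordering: a matrix-vector product becomes a `join` (cell-wise multiply on the shared dimension) followed by a `reduce` (sum away that dimension).

```python
import operator

def cell(**dims):
    """An address: an unordered set of (dimension name, label) pairs."""
    return frozenset(dims.items())

def join(a, b, combine):
    """Combine two tensors cell-wise; cells match when they agree on all
    shared dimension names. The result has the union of both dimension sets."""
    result = {}
    for addr_a, va in a.items():
        for addr_b, vb in b.items():
            da, db = dict(addr_a), dict(addr_b)
            if all(da[d] == db[d] for d in set(da) & set(db)):
                result[frozenset({**da, **db}.items())] = combine(va, vb)
    return result

def reduce_dim(a, dim, aggregate=sum):
    """Aggregate away one named dimension."""
    groups = {}
    for addr, v in a.items():
        rest = frozenset((d, l) for d, l in addr if d != dim)
        groups.setdefault(rest, []).append(v)
    return {addr: aggregate(vs) for addr, vs in groups.items()}

# A 2x2 matrix over dimensions {x, y} and a vector over {x}:
matrix = {cell(x="0", y="0"): 1.0, cell(x="1", y="0"): 2.0,
          cell(x="0", y="1"): 3.0, cell(x="1", y="1"): 4.0}
vector = {cell(x="0"): 5.0, cell(x="1"): 6.0}

# Matrix-vector product: multiply matching cells, then sum over x.
product = reduce_dim(join(matrix, vector, operator.mul), "x")
# product: {y: "0"} -> 1*5 + 2*6 = 17,  {y: "1"} -> 3*5 + 4*6 = 39
```

Note that the same two primitives express inner products, matrix products, and elementwise arithmetic simply by varying the combine function and the reduced dimension, which is the sense in which a small closed function set can serve as a general language for higher-level operations.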