ABSTRACT
Our brain executes very sparse computation, allowing for great speed and energy savings. Deep neural networks can also be made to exhibit high levels of sparsity without significant accuracy loss. As their size grows, it is becoming imperative that we use sparsity to improve their efficiency. This is a challenging task because the memory systems and SIMD operations that dominate today's CPUs and GPUs do not lend themselves easily to the irregular data-access patterns that sparsity introduces. This talk surveys the role of sparsity in neural network computation, and the parallel algorithms and hardware features that nevertheless allow us to make effective use of it.
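To make the "irregular data patterns" concrete, here is a minimal sketch (not from the talk) of a sparse matrix-vector product over a compressed sparse row (CSR) weight matrix; the function and variable names are illustrative. The data-dependent gather `x[col_idx[j]]` is exactly the kind of access that resists straightforward SIMD vectorization:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x, with A stored in CSR form.

    values  -- nonzero entries of A, stored row by row
    col_idx -- column index of each nonzero
    row_ptr -- offset into values/col_idx where each row begins
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            # This gather from x is data-dependent and irregular:
            # its addresses are unknown until runtime.
            y[i] += values[j] * x[col_idx[j]]
    return y

# A mostly-zero weight matrix stores only its nonzeros:
# A = [[0, 2, 0],
#      [0, 0, 0],
#      [5, 0, 3]]
values  = np.array([2.0, 5.0, 3.0])
col_idx = np.array([1, 0, 2])
row_ptr = np.array([0, 1, 1, 3])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(values, col_idx, row_ptr, x))  # [2. 0. 8.]
```

Compared with a dense matrix-vector product, the loop above skips all zero weights, trading regular, vectorizable memory traffic for less work overall; the talk's subject is the algorithms and hardware features that make this trade pay off.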
Index Terms
- Sparsity in Deep Neural Nets (Keynote)