Abstract:
Deep learning models have made great strides in tasks like classification and object detection. However, these models are often computationally intensive, require vast amounts of in-domain data, and typically contain millions or even billions of parameters. They are also relative black boxes when it comes to interpreting and analyzing their behavior on data or evaluating the network's suitability for the data that is available. To address these issues, we investigate off-the-shelf compression techniques that reduce the dimensionality of the parameter space within a Convolutional Neural Network. In this way, compression allows us to interpret and evaluate the network more efficiently, as only important features are propagated through the network.
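The abstract does not name the specific compression techniques investigated; as one common off-the-shelf example of reducing a CNN's parameter space, the sketch below illustrates L1-norm filter pruning, in which convolutional filters with the smallest weight magnitudes are zeroed out so that only the most influential filters continue to propagate features. This is a minimal NumPy illustration, not the paper's method; the function name and keep ratio are assumptions.

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """Rank convolutional filters by L1 norm and zero out the weakest ones,
    keeping only a `keep_ratio` fraction of the filters.

    weights: array of shape (num_filters, in_channels, kH, kW)
    Returns the pruned weight tensor and a boolean mask of surviving filters.
    """
    # L1 norm of each filter (sum of absolute weights)
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    num_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.argsort(norms)[-num_keep:]          # indices of the strongest filters
    mask = np.zeros(weights.shape[0], dtype=bool)
    mask[keep] = True
    # Zero out the pruned filters; shapes stay intact so the layer is drop-in
    pruned = weights * mask[:, None, None, None]
    return pruned, mask

# Example: 8 random 3x3 filters over 3 input channels, prune half of them
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, mask = prune_filters(w, keep_ratio=0.5)
print(mask.sum())  # 4 filters survive
```

In practice the surviving filters would then be fine-tuned, and the zeroed filters can be removed entirely to shrink both memory footprint and inference cost.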
Date of Conference: 09-11 October 2018
Date Added to IEEE Xplore: 09 May 2019