Abstract:
Learning relevant features is important for interpreting data in a machine learning model. Compared with selecting a single relevant feature subset for the entire dataset, instancewise feature selection is more flexible for model interpretation. However, current instancewise feature selection approaches are complex and suffer from high computational cost. We consider instancewise feature selection under a supervised learning framework and design a compact and interpretable neural network to approach the problem. To reduce the computational cost and gain better interpretability, we group relevant features and construct a mixture of neural networks. Using softmax as the activation function for sub-model selection, the model membership can be learned accurately through gradient descent. To the best of our knowledge, our model is the first interpretable deep neural network model for instancewise feature selection trained end to end.
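
The following is a minimal sketch (not the authors' code) of the idea the abstract describes: a gating network produces softmax scores that softly select among a small mixture of sub-networks, so that per-instance sub-model membership is differentiable and can be learned by gradient descent. All class and parameter names below are illustrative assumptions.

# Hedged sketch of a softmax-gated mixture of sub-networks (PyTorch).
# Names (GatedMixture, n_experts, etc.) are assumptions, not from the paper.
import torch
import torch.nn as nn

class GatedMixture(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, n_experts):
        super().__init__()
        # One small sub-network per group of features.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(n_experts)
        )
        # Gating layer: softmax over its outputs gives instance-wise membership.
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)               # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, out_dim)
        # Soft selection of sub-models; the weights can be inspected per instance.
        y = (weights.unsqueeze(-1) * outputs).sum(dim=1)
        return y, weights

Because the softmax gate is differentiable, the whole mixture can be trained end to end with a standard supervised loss, and the gate weights serve as an interpretable, per-instance indication of which sub-model (and hence which feature group) drove the prediction.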
Published in: 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Date of Conference: 24-27 September 2019
Date Added to IEEE Xplore: 05 December 2019