Design interpretable neural network trees through self-organized learning of features


Abstract:

A neural network tree (NNTree) is a modular neural network whose overall structure is a decision tree (DT), with each non-terminal node being an expert neural network (ENN). One advantage of NNTrees is that they are effectively "gray boxes": they can be interpreted easily if the number of inputs to each ENN is limited. To design interpretable NNTrees, we previously proposed a genetic algorithm based on multiple-objective optimization. That algorithm, however, is suitable only for problems with binary inputs. In this paper, we propose a method for problems with continuous inputs. The basic idea is to find a small number of critical points for each continuous input using self-organized learning, and to quantize the input using those critical points. Experimental results on several public databases show that the NNTrees built from the quantized data are much more interpretable and, in most cases, perform as well as those obtained from the original data.
Date of Conference: 25-29 July 2004
Date Added to IEEE Xplore: 17 January 2005
Print ISBN: 0-7803-8359-1
Print ISSN: 1098-7576
Conference Location: Budapest, Hungary
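
The abstract does not give the exact update rule used for self-organized learning, so the following is a minimal Python sketch of the general idea only: a 1-D competitive-learning pass finds a few prototype values ("critical points") for one continuous feature, and each value is then replaced by the index of its nearest prototype. The function names (find_critical_points, quantize) and all parameters (n_points, epochs, lr0) are illustrative assumptions, not the authors' implementation.

import numpy as np

def find_critical_points(values, n_points=3, epochs=20, lr0=0.5, seed=0):
    # Sketch only, not the paper's exact rule: each sample pulls the
    # nearest prototype toward it with a decaying learning rate, so the
    # prototypes settle on dense regions of the feature.
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # Initialize prototypes on randomly chosen samples.
    points = rng.choice(values, size=n_points, replace=False).astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)      # linearly decaying rate
        for v in rng.permutation(values):
            k = np.argmin(np.abs(points - v))  # winning prototype
            points[k] += lr * (v - points[k])  # move winner toward sample
    return np.sort(points)

def quantize(values, points):
    # Replace each continuous value with the index of its nearest
    # critical point, yielding a small discrete alphabet per feature.
    values = np.asarray(values, dtype=float)
    return np.argmin(np.abs(values[:, None] - points[None, :]), axis=1)

# Example: a feature with two natural clusters collapses to two symbols.
x = np.concatenate([np.random.normal(0.0, 0.3, 100),
                    np.random.normal(5.0, 0.3, 100)])
cps = find_critical_points(x, n_points=2)
codes = quantize(x, cps)
print(cps)                 # two prototypes, near 0 and 5
print(np.bincount(codes))  # roughly 100 samples per symbol

Once every feature is reduced to such a small discrete alphabet, each ENN in the tree operates on quantized inputs, which is what makes the resulting NNTree easy to read off as decision rules.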
