
Making a Shallow Network Deep: Conversion of a Boosting Classifier into a Decision Tree by Boolean Optimisation

Published in: International Journal of Computer Vision

Abstract

This paper presents a novel way to speed up the evaluation of a boosting classifier. We make a shallow (flat) network deep (hierarchical) by growing a tree from the decision regions of a given boosting classifier. The tree provides many short paths for fast evaluation while preserving the reasonably smooth decision regions of the boosting classifier for good generalisation. To convert a boosting classifier into a decision tree, we formulate a Boolean optimisation problem, which has previously been studied for circuit design but limited to a small number of binary variables. In this work, a novel optimisation method is proposed, first for several tens of variables, i.e. the weak-learners of a boosting classifier, and then for any larger number of weak-learners by using a two-stage cascade. Experiments on synthetic and face image data sets show that the obtained tree achieves a significant speed-up, at the same accuracy, over both a standard boosting classifier and Fast-exit, a previously described method for speeding up boosting classification. The proposed method, as a general meta-algorithm, is also useful for a boosting cascade, where it speeds up individual stage classifiers by different gains. The proposed method is further demonstrated for fast-moving object tracking and segmentation problems.
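As an illustration of the conversion idea only, the following minimal Python sketch is built on assumptions not taken from the paper: decision-stump weak learners, a toy discrete-AdaBoost trainer, synthetic 2-D data, and a greedy CART tree from scikit-learn standing in for the Boolean-optimised tree. It shows how the weak-learner outputs form binary region codes and how a tree grown over those codes can reproduce the boosted decision while evaluating only the weak learners on a single root-to-leaf path.

```python
# A minimal, illustrative sketch of the core idea, NOT the authors'
# Boolean-optimisation algorithm: the outputs of the m weak learners define a
# binary "region code" for each sample, and a decision tree grown over those
# binary variables can reproduce the boosted decision while evaluating only
# the weak learners on one root-to-leaf path.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_adaboost(X, y, rounds=20):
    """Discrete AdaBoost over axis-aligned decision stumps (a toy stand-in for
    the trained boosting classifier that the conversion takes as input)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    thresholds = np.quantile(X, np.linspace(0.05, 0.95, 19), axis=0)  # (19, d)
    stumps, alphas = [], []
    for _ in range(rounds):
        best = (np.inf, None, None)
        for f in range(d):
            for t in thresholds[:, f]:
                for s in (1, -1):
                    pred = np.where(X[:, f] > t, s, -s)
                    err = w[pred != y].sum()
                    if err < best[0]:
                        best = (err, (f, t, s), pred)
        err, stump, pred = best
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def weak_bits(X, stumps):
    """0/1 output of every weak learner: the binary region code of each sample."""
    return np.stack([(np.where(X[:, f] > t, s, -s) > 0).astype(int)
                     for f, t, s in stumps], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 2))
y = np.where(X[:, 0] + 0.7 * X[:, 1] > 0, 1, -1)      # oblique decision boundary

stumps, alphas = fit_adaboost(X, y)
B = weak_bits(X, stumps)                               # region codes
H = np.sign((2 * B - 1) @ alphas).astype(int)          # flat boosted decision

# Grow a tree over the weak-learner bits so that its leaves reproduce H.
# Each internal node tests one weak learner, so a prediction needs only the
# weak learners along one path instead of all len(stumps) of them.
tree = DecisionTreeClassifier(random_state=0).fit(B, H)
path_len = np.asarray(tree.decision_path(B).sum(axis=1)).ravel() - 1
print("agreement with flat boosting:", (tree.predict(B) == H).mean())
print("mean weak learners evaluated per sample: %.1f (tree) vs %d (flat)"
      % (path_len.mean(), len(stumps)))
```

Note that this greedy CART construction only approximates the region-preserving tree described in the abstract; the paper's Boolean optimisation additionally aims to preserve the boosting classifier's decision regions, which the stand-in above does not guarantee off the training sample.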



Author information

Correspondence to Tae-Kyun Kim.


About this article

Cite this article

Kim, T.-K., Budvytis, I., & Cipolla, R. Making a Shallow Network Deep: Conversion of a Boosting Classifier into a Decision Tree by Boolean Optimisation. Int J Comput Vis 100, 203–215 (2012). https://doi.org/10.1007/s11263-011-0461-z

