Subsampling-based HMC parameter estimation with application to large datasets classification

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

This paper presents a contextual algorithm for approximating Baum’s forward and backward probabilities, which are extensively used for parameter estimation in the framework of hidden Markov chain (HMC) models. The method differs from the original algorithm in that its computations take into account only a neighborhood of limited length, rather than all the data in the chain. This makes it possible to propose a bootstrap subsampling strategy for computing the forward and backward probabilities, which greatly reduces the computation time and memory required for EM-based parameter estimation. Comparative experiments on the neighborhood size and the bootstrap sample size are conducted by means of unsupervised classification error rates. The practical interest of the algorithm is then illustrated through the segmentation of large images; the classification results confirm the validity and accuracy of the proposed algorithm while greatly reducing computation and memory requirements.
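
To make the construction concrete, here is a minimal sketch of a windowed forward–backward recursion for a discrete HMC, in the spirit of the contextual approximation described above. It is an illustration only, not the authors’ implementation: the flat prior at the left window boundary, the function and variable names (windowed_posteriors, pi0, A, B, half_width), and the exact windowing scheme are assumptions.

```python
import numpy as np

def windowed_posteriors(obs, pi0, A, B, half_width):
    """Approximate the posterior marginals p(x_n | y) of a hidden Markov
    chain using only the observations in a window of half-width
    `half_width` around each site n, instead of the whole chain.

    obs : (N,) array of observation indices
    pi0 : (K,) initial state distribution
    A   : (K, K) transition matrix, A[i, j] = p(x_{n+1} = j | x_n = i)
    B   : (K, M) emission matrix,  B[i, m] = p(y_n = m | x_n = i)
    """
    N, K = len(obs), len(pi0)
    post = np.empty((N, K))
    for n in range(N):
        lo, hi = max(0, n - half_width), min(N, n + half_width + 1)
        # Forward recursion restricted to [lo, n]; a flat prior is assumed
        # at the left window boundary when lo > 0 (boundary assumption).
        prior = pi0 if lo == 0 else np.full(K, 1.0 / K)
        alpha = prior * B[:, obs[lo]]
        alpha /= alpha.sum()
        for t in range(lo + 1, n + 1):
            alpha = (alpha @ A) * B[:, obs[t]]
            alpha /= alpha.sum()
        # Backward recursion restricted to (n, hi).
        beta = np.ones(K)
        for t in range(hi - 1, n, -1):
            beta = A @ (B[:, obs[t]] * beta)
            beta /= beta.sum()
        gamma = alpha * beta
        post[n] = gamma / gamma.sum()
    return post
```

Under such an approximation, the posterior marginal at each site depends only on the 2·half_width + 1 surrounding observations, so a bootstrap subsampling strategy can draw sites at random and accumulate the EM statistics from those windows alone, rather than sweeping the full chain at every iteration.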



Author information

Corresponding author

Correspondence to Stéphane Derrode.

About this article

Cite this article

Derrode, S., Benyoussef, L. & Pieczynski, W. Subsampling-based HMC parameter estimation with application to large datasets classification. SIViP 8, 873–882 (2014). https://doi.org/10.1007/s11760-012-0324-2
