Search-Based Class Discretization for Hidden Markov Model for Regression

  • Conference paper
Advances in Artificial Intelligence – SBIA 2004 (SBIA 2004)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3171)

Abstract

The regression-by-discretization approach allows a classification algorithm to be used on a regression task: as a pre-processing step, the numeric target value is discretized into a set of intervals. We previously applied this approach to the Hidden Markov Model for Regression (HMMR), which compared favorably with Naive Bayes for Regression and with two traditional forecasting methods, Box-Jenkins and Winters. In this work, to further improve those results, we apply three discretization methods to HMMR on ten time-series data sets. The experimental results show that one of the discretization methods improved the results on most of the data sets, although each method improved the results on at least one data set. This suggests that a search algorithm to automatically find the optimal number and width of the intervals would be preferable.
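The pre-processing step described above can be sketched as follows. This is a minimal illustration of regression-by-discretization with equal-width intervals, where predicted class labels are mapped back to the midpoint of their interval; it is not the authors' HMMR implementation, and all function names are illustrative.

```python
import numpy as np

def discretize_equal_width(y, n_bins):
    """Equal-width discretization: map each numeric target value to an
    interval index and record each interval's midpoint."""
    lo, hi = float(np.min(y)), float(np.max(y))
    edges = np.linspace(lo, hi, n_bins + 1)
    # np.digitize against the interior edges yields indices in 0..n_bins
    # (clip so the maximum value falls into the last interval)
    labels = np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)
    midpoints = (edges[:-1] + edges[1:]) / 2.0
    return labels, midpoints

def undiscretize(labels, midpoints):
    """Turn predicted class labels back into numeric predictions
    by taking the midpoint of each predicted interval."""
    return midpoints[labels]

# Toy numeric target series (illustrative data, not from the paper)
y = np.array([1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 8.0, 9.5])
labels, mids = discretize_equal_width(y, n_bins=4)
# A classifier (HMMR in the paper) would now be trained on `labels`;
# here we map the true labels straight back to show the reconstruction
# error, which is at most half the interval width.
y_hat = undiscretize(labels, mids)
print(np.max(np.abs(y - y_hat)))
```

The choice of the number of intervals trades off the two error sources the paper is concerned with: fewer intervals make the classification task easier but increase the rounding error introduced by the midpoint mapping.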


References

  1. Box, G.E.P., Jenkins, G.M., Reinsel, G.C.: Time Series Analysis: Forecasting & Control. Prentice-Hall, Englewood Cliffs (1994)

  2. Dietterich, T.G.: The Divide-and-Conquer Manifesto. In: Proceedings of the Eleventh International Conference on Algorithmic Learning Theory, pp. 13–26 (2000)

  3. Domingos, P., Pazzani, M.: On the Optimality of the Simple Bayesian Classifier under Zero-One Loss. Machine Learning 29(2/3), 103–130 (1997)

  4. Frank, E., Trigg, L., Holmes, G., Witten, I.H.: Naive Bayes for Regression. Machine Learning 41(1), 5–25 (1999)

  5. Friedman, N., Goldszmidt, M., Lee, T.J.: Bayesian network classification with continuous attributes: Getting the best of both discretization and parametric fitting. In: 15th Inter. Conf. on Machine Learning (ICML), pp. 179–187 (1998)

  6. Ghahramani, Z.: Learning Dynamic Bayesian Networks. In: Giles, C.L., Gori, M. (eds.) Adaptive Processing of Sequences and Data Structures. LNCS (LNAI), pp. 168–197. Springer, Berlin (1998)

  7. Montgomery, D.C., Johnson, L.A., Gardiner, J.S.: Forecasting and Time Series Analysis. McGraw-Hill Companies, New York (1990)

  8. Roweis, S., Ghahramani, Z.: A Unifying Review of Linear Gaussian Models. Neural Computation 11(2), 305–345 (1999)

  9. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach, 2nd edn. Prentice Hall, Englewood Cliffs (2002)

  10. Teixeira, M.A., Revoredo, K., Zaverucha, G.: Hidden Markov Model for Regression in Electric Load Forecasting. In: Proceedings of the ICANN/ICONIP 2003, Turkey, vol. 1, pp. 374–377 (2003)

  11. Teixeira, M.A., Zaverucha, G.: Fuzzy Bayes and Fuzzy Markov Predictors. Journal of Intelligent and Fuzzy Systems 13(2-4), 155–165 (2003)

  12. Teixeira, M.A., Zaverucha, G.: Fuzzy Markov Predictor in Multi-Step Electric Load Forecasting. In: Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN 2003), Portland, Oregon, vol. 1, pp. 3065–3070 (2003)

  13. Torgo, L., Gama, J.: Regression Using Classification Algorithms. Intelligent Data Analysis 1, 275–292 (1997)

  14. Weiss, S., Indurkhya, N.: Rule-based Regression. In: Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI), pp. 1072–1078 (1993)

  15. Weiss, S., Indurkhya, N.: Rule-based Machine Learning Methods for Functional Prediction. Journal of Artificial Intelligence Research (JAIR) 3, 383–403 (1995)

  16. Urban Hjorth, J.S.: Computer Intensive Statistical Methods: Validation, Model Selection and Bootstrap. Chapman & Hall, Sydney (1994)

  17. Keogh, E., Kasetty, S.: On the Need for Time Series Data Mining Benchmarks: A Survey and Empirical Demonstration. Data Mining and Knowledge Discovery 7, 349–371 (2003)

  18. http://www-psych.stanford.edu/%7Eandreas/Time-Series/SantaFe

  19. ftp://ftp.esat.kuleuven.ac.be/pub/sista/suykens/workshop/datacomp.dat

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Revoredo, K., Zaverucha, G. (2004). Search-Based Class Discretization for Hidden Markov Model for Regression. In: Bazzan, A.L.C., Labidi, S. (eds) Advances in Artificial Intelligence – SBIA 2004. SBIA 2004. Lecture Notes in Computer Science (LNAI), vol 3171. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-28645-5_32

  • DOI: https://doi.org/10.1007/978-3-540-28645-5_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23237-7

  • Online ISBN: 978-3-540-28645-5
