
Boosting Algorithm with Sequence-Loss Cost Function for Structured Prediction

  • Conference paper
Hybrid Artificial Intelligence Systems (HAIS 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6076)

Abstract

The problem of sequence prediction, i.e. annotating sequences, appears in many tasks across a variety of scientific disciplines, notably computational biology, natural language processing, and speech recognition. This paper investigates a boosting approach to structured prediction, AdaBoostSTRUCT, based on a proposed sequence-loss balancing function that combines the advantages of the boosting scheme with the efficiency of dynamic programming. The paper introduces and examines the method’s formalism for modeling and predicting label sequences, demonstrating its validity and competitiveness.
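
The abstract describes the approach only at a high level. The sketch below illustrates, under stated assumptions, how an AdaBoost-style round over label sequences can be combined with dynamic-programming decoding. It is a hypothetical illustration, not the paper’s AdaBoostSTRUCT: the decision-stump weak learner, the per-sequence Hamming loss standing in for the sequence-loss balancing function, the trivial transition matrix, and all function names are assumptions made for this example.

```python
# Hypothetical sketch only; not the authors' AdaBoostSTRUCT algorithm.
import numpy as np

def stump_fit(X, y, w):
    """Weak learner: a single-feature threshold stump for binary labels
    {0, 1}, trained on flattened per-position examples with weights w."""
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            pred = (X[:, f] > thr).astype(int)
            err = np.sum(w * (pred != y))
            if best is None or err < best[0]:
                best = (err, f, thr)
    return best[1], best[2]

def stump_score(stump, x):
    """Per-position score: +1 favours label 1, -1 favours label 0."""
    f, thr = stump
    return 1.0 if x[f] > thr else -1.0

def viterbi(scores, trans):
    """Dynamic-programming decoding: scores[t, y] is the score of label y
    at position t, trans[y_prev, y] rewards label transitions."""
    T, K = scores.shape
    dp = np.empty((T, K))
    bp = np.zeros((T, K), dtype=int)
    dp[0] = scores[0]
    for t in range(1, T):
        for y in range(K):
            cand = dp[t - 1] + trans[:, y] + scores[t, y]
            bp[t, y] = int(np.argmax(cand))
            dp[t, y] = cand[bp[t, y]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(bp[t, path[-1]]))
    return path[::-1]

def boost_sequences(seqs, labels, rounds=10):
    """AdaBoost-style loop with weights kept on whole sequences and
    re-weighted by a per-sequence Hamming loss (an assumed stand-in
    for the paper's sequence-loss balancing function)."""
    n = len(seqs)
    w = np.full(n, 1.0 / n)
    trans = np.zeros((2, 2))                 # assumed trivial transition model
    ensemble = []
    for _ in range(rounds):
        # Flatten sequences into per-position examples; each position
        # inherits the weight of the sequence it came from.
        X = np.vstack(seqs)
        y = np.concatenate(labels)
        wx = np.concatenate([np.full(len(s), w[i]) for i, s in enumerate(seqs)])
        stump = stump_fit(X, y, wx / wx.sum())
        # Sequence-level loss: fraction of mispredicted positions after
        # dynamic-programming decoding of the stump's per-position scores.
        losses = []
        for s, lab in zip(seqs, labels):
            pos = np.array([stump_score(stump, x) for x in s])
            pred = viterbi(np.column_stack([-pos, pos]), trans)
            losses.append(np.mean(np.array(pred) != np.array(lab)))
        losses = np.array(losses)
        eps = float(np.clip(np.sum(w * losses), 1e-12, 1 - 1e-12))
        alpha = 0.5 * np.log((1 - eps) / eps)
        ensemble.append((alpha, stump))
        w *= np.exp(alpha * (2 * losses - 1))  # up-weight badly predicted sequences
        w /= w.sum()
    return ensemble, trans
```

In this sketch each boosting round re-weights whole sequences rather than individual positions, which is the design point a sequence-loss cost function addresses; the actual loss, weak-learner family, and transition model used in the paper may differ.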

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kajdanowicz, T., Kazienko, P., Kraszewski, J. (2010). Boosting Algorithm with Sequence-Loss Cost Function for Structured Prediction. In: Graña Romay, M., Corchado, E., Garcia Sebastian, M.T. (eds) Hybrid Artificial Intelligence Systems. HAIS 2010. Lecture Notes in Computer Science (LNAI), vol. 6076. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13769-3_70

  • DOI: https://doi.org/10.1007/978-3-642-13769-3_70

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13768-6

  • Online ISBN: 978-3-642-13769-3

  • eBook Packages: Computer Science (R0)
