
Learning Low Cost Multi-target Models by Enforcing Sparsity

  • Conference paper

Current Approaches in Applied Artificial Intelligence (IEA/AIE 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9101)


Abstract

We consider how the costs of making predictions for multi-target learning problems can be lowered by enforcing sparsity on the matrix of linear model coefficients. We formalize four types of sparsity patterns, together with a greedy forward selection framework for enforcing these patterns in the coefficients of learned models. We discuss how the patterns relate to costs in different application scenarios, introducing the concepts of extractor and extraction costs of features. We demonstrate experimentally on two real-world data sets that, to achieve the lowest possible prediction costs while maintaining acceptable predictive accuracy, the type of sparsity constraint enforced must be matched to the scenario in which the model is to be applied.
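
To make the setting concrete, here is a minimal sketch (not the authors' implementation) of greedy forward selection under one of the sparsity patterns the abstract describes: a shared pattern in which all targets use the same feature subset, so entire rows of the coefficient matrix are zero. The function name select_features and the budget parameter are illustrative assumptions, not terms from the paper.

```python
# Hypothetical sketch of greedy forward selection enforcing shared row
# sparsity on the coefficient matrix W of a multi-target linear model
# Y ~ X W. Not the paper's algorithm; names and the stopping rule
# (a fixed feature budget) are assumptions made for illustration.
import numpy as np

def select_features(X, Y, budget):
    """Greedily pick up to `budget` features shared by all targets.

    X: (n_samples, n_features) input matrix
    Y: (n_samples, n_targets) target matrix
    Returns the selected feature indices and the coefficient matrix
    fitted on those features only (its other rows are implicitly zero).
    """
    n_features = X.shape[1]
    selected = []
    remaining = set(range(n_features))
    for _ in range(budget):
        best_idx, best_err = None, np.inf
        for j in remaining:
            cols = selected + [j]
            # Least-squares fit restricted to the candidate feature subset.
            W, _, _, _ = np.linalg.lstsq(X[:, cols], Y, rcond=None)
            # Score by squared error summed over all targets, which is what
            # couples the targets and yields a shared (row-sparse) pattern.
            err = np.sum((X[:, cols] @ W - Y) ** 2)
            if err < best_err:
                best_idx, best_err = j, err
        selected.append(best_idx)
        remaining.remove(best_idx)
    W, _, _, _ = np.linalg.lstsq(X[:, selected], Y, rcond=None)
    return selected, W

# Toy usage: 100 samples, 20 features, 3 targets, budget of 5 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
W_true = np.zeros((20, 3))
W_true[[2, 7, 11]] = rng.standard_normal((3, 3))  # only 3 informative rows
Y = X @ W_true + 0.1 * rng.standard_normal((100, 3))
idx, W = select_features(X, Y, budget=5)
print("selected features:", sorted(idx))
```

Other patterns, such as target-specific feature sets, would change only the candidate evaluation step: rather than scoring each feature by the error summed over all targets, one would score (feature, target) pairs independently, producing a coefficient matrix that is sparse entrywise instead of rowwise.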



Author information

Correspondence to Pekka Naula.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Naula, P., Airola, A., Salakoski, T., Pahikkala, T. (2015). Learning Low Cost Multi-target Models by Enforcing Sparsity. In: Ali, M., Kwon, Y., Lee, CH., Kim, J., Kim, Y. (eds) Current Approaches in Applied Artificial Intelligence. IEA/AIE 2015. Lecture Notes in Computer Science, vol 9101. Springer, Cham. https://doi.org/10.1007/978-3-319-19066-2_25


  • DOI: https://doi.org/10.1007/978-3-319-19066-2_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19065-5

  • Online ISBN: 978-3-319-19066-2

  • eBook Packages: Computer Science, Computer Science (R0)
