
Gaussian Opposite Maps for Reduced-Set Relevance Vector Machines

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10305)

Abstract

The Relevance Vector Machine (RVM) is a Bayesian method that represents its decision boundary using a subset of points from the training set, called relevance vectors. Its training algorithm, however, is time-consuming. In this paper, we propose a technique for initializing the training process in classification problems using the points of an opposite map. This solution approximates the relevance points to those of the solutions obtained by Support Vector Machines (SVMs). To assess the performance of our proposal, named GOM-RVM, we carried out experiments on well-known datasets against the original RVM and SVM. GOM-RVM achieved accuracy equivalent or superior to that of SVM and RVM with fewer relevance vectors.
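As a rough illustration of the opposite-map idea described in the abstract, the sketch below keeps, for each class, the few training points closest to the opposite class. Such border points approximate the region where support and relevance vectors tend to lie, and could seed a reduced candidate set for RVM training. The nearest-opposite-point selection rule and the parameter `k` are simplifying assumptions for illustration only, not the paper's exact Gaussian Opposite Map procedure.

```python
import numpy as np

def opposite_map(X, y, k=3):
    """For each class, keep the k points closest to the opposite class.

    Simplified illustration: the retained points lie near the decision
    boundary, so they can serve as an initial reduced candidate set.
    """
    keep = []
    for c in np.unique(y):
        own = np.where(y == c)[0]
        other = X[y != c]
        # distance from each point of class c to its nearest opposite-class point
        d = np.min(np.linalg.norm(X[own, None, :] - other[None, :, :], axis=2), axis=1)
        keep.extend(own[np.argsort(d)[:k]])  # k closest-to-boundary points
    return np.sort(np.array(keep))

# toy two-class data: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
idx = opposite_map(X, y, k=3)
print(idx)  # indices of border points, three per class
```

The full candidate set for a kernel machine normally contains all training points; restricting the Bayesian pruning of the RVM to a small border set like this is the kind of reduction the paper's initialization aims at.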



Author information


Corresponding author

Correspondence to Lucas Silva de Sousa.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

de Sousa, L.S., da Rocha Neto, A.R. (2017). Gaussian Opposite Maps for Reduced-Set Relevance Vector Machines. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2017. Lecture Notes in Computer Science, vol 10305. Springer, Cham. https://doi.org/10.1007/978-3-319-59153-7_40


  • DOI: https://doi.org/10.1007/978-3-319-59153-7_40


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59152-0

  • Online ISBN: 978-3-319-59153-7

  • eBook Packages: Computer Science (R0)
