Hidden Space Neighbourhood Component Analysis for Cancer Classification

  • Conference paper
  • Neural Information Processing (ICONIP 2016)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9950)


Abstract

Neighbourhood component analysis (NCA) is a method for learning a distance metric that maximizes the classification performance of the K-nearest-neighbour (KNN) classifier. However, NCA suffers from the small sample size problem, in which the number of samples is much smaller than the number of features. To remedy this, this paper proposes hidden space neighbourhood component analysis (HSNCA), a nonlinear extension of NCA. HSNCA first maps the data from the original space into a feature space using a set of nonlinear mapping functions, and then performs NCA in that feature space. Notably, the number of features in the feature space equals the number of samples, so HSNCA avoids the small sample size problem. Experimental results on DNA microarray datasets show that HSNCA is feasible and efficient.
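The idea in the abstract can be sketched in a few lines: represent each sample by its nonlinear similarities to all training samples (so the hidden-space representation is n x n), then run NCA there and classify with KNN. The following is a minimal sketch, not the authors' implementation; it assumes an RBF kernel as the set of nonlinear mapping functions and uses scikit-learn's `NeighborhoodComponentsAnalysis` in place of the paper's own NCA solver.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier

# Simulate a small-sample-size problem: far fewer samples than features,
# as in DNA microarray data.
X, y = make_classification(n_samples=60, n_features=500,
                           n_informative=20, random_state=0)

# Hidden-space mapping: each sample becomes its vector of kernel
# similarities to all training samples, giving an n x n feature matrix,
# so the number of features now equals the number of samples.
K = rbf_kernel(X, X, gamma=1e-3)

# Perform NCA in the hidden space, then classify with KNN on the
# learned low-dimensional embedding.
nca = NeighborhoodComponentsAnalysis(n_components=10, random_state=0)
K_nca = nca.fit_transform(K, y)
acc = KNeighborsClassifier(n_neighbors=3).fit(K_nca, y).score(K_nca, y)
```

The kernel width `gamma` and the embedding dimension `n_components` are illustrative choices; in practice both would be selected by cross-validation, and test samples would be mapped via their kernel similarities to the training set only.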



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 61373093, and 61402310, by the Natural Science Foundation of Jiangsu Province of China under Grant No. BK20140008, by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant No. 13KJA520001, and by the Soochow Scholar Project.

Corresponding author

Correspondence to Li Zhang.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Zhang, L., Huang, X., Wang, B., Li, F., Zhang, Z. (2016). Hidden Space Neighbourhood Component Analysis for Cancer Classification. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9950. Springer, Cham. https://doi.org/10.1007/978-3-319-46681-1_6

  • DOI: https://doi.org/10.1007/978-3-319-46681-1_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46680-4

  • Online ISBN: 978-3-319-46681-1

