DOI: 10.1145/2093698.2093804
research-article

Find the intrinsic space for multiclass classification

Published: 26 October 2011

Abstract

Multiclass classification is a core problem in many applications, and high classification accuracy is essential for a classifier to be accepted as a valuable, or even indispensable, tool in the workflow. In the classification problem, each sample is usually represented as a vector of features. In most cases, some of these features are redundant or misleading, so the full high-dimensional representation is not necessary. It is therefore important to find the intrinsically lower-dimensional space that captures the most representative features, i.e., those carrying the best information for classification. In this paper, we propose a novel dimension reduction method for multiclass classification. Using constraints defined on a set of triplets, the proposed method projects the original high-dimensional feature space onto a much lower-dimensional one. The method enables faster computation, reduces storage requirements, and, most importantly, produces more meaningful representations that lead to better classification accuracy.
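
The abstract describes the method only at this high level, and the paper's exact objective is not reproduced on this page. Purely as an illustration of the general idea (not the authors' formulation), the sketch below learns a linear projection from triplet constraints: for each (anchor, same-class, different-class) triplet, the projected anchor should end up closer to the same-class sample than to the different-class sample by a margin. The hinge-style loss, the plain gradient-descent solver, and all function names are assumptions made for this sketch.

```python
# Illustrative sketch only: learn a linear projection L (d_out x d_in) so that,
# for each triplet (anchor, positive, negative), the projected anchor is closer
# to the positive (same class) than to the negative (different class) by a margin.
# This is a generic triplet-constraint formulation, not the paper's exact method.
import numpy as np

def make_triplets(X, y, n_triplets, rng):
    """Sample (anchor, positive, negative) index triplets from labeled data."""
    idx = np.arange(len(y))
    triplets = []
    while len(triplets) < n_triplets:
        a = rng.integers(len(y))
        pos = idx[(y == y[a]) & (idx != a)]   # same class, not the anchor itself
        neg = idx[y != y[a]]                  # any different class
        if len(pos) and len(neg):
            triplets.append((a, rng.choice(pos), rng.choice(neg)))
    return triplets

def fit_projection(X, y, d_out=2, margin=1.0, lr=0.01, epochs=100, seed=0):
    """Gradient descent on a hinge-style triplet loss over a linear map."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    L = rng.normal(scale=0.1, size=(d_out, X.shape[1]))
    triplets = make_triplets(X, y, n_triplets=10 * len(y), rng=rng)
    for _ in range(epochs):
        grad = np.zeros_like(L)
        for a, p, n in triplets:
            dap = L @ (X[a] - X[p])   # projected anchor-positive difference
            dan = L @ (X[a] - X[n])   # projected anchor-negative difference
            # Hinge: penalize triplets where the negative is not farther by `margin`.
            if dap @ dap - dan @ dan + margin > 0:
                grad += 2.0 * (np.outer(dap, X[a] - X[p]) - np.outer(dan, X[a] - X[n]))
        L -= lr * grad / len(triplets)
    return L

# Usage (hypothetical): Z = X @ fit_projection(X, y).T gives the low-dimensional representation.
```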


Cited By

  • Combining Word and Character N-Grams for Detecting Deceptive Opinions. 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), pages 828-833, July 2017. DOI: 10.1109/COMPSAC.2017.90

Information

Published In

ISABEL '11: Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies
October 2011
949 pages
ISBN:9781450309134
DOI:10.1145/2093698
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Sponsors

  • Universitat Pompeu Fabra
  • IEEE
  • Technical University of Catalonia (UPC), Spain
  • River Publishers
  • CTTC: Technological Center for Telecommunications of Catalonia
  • CTIF: Kyranova Ltd, Center for TeleInFrastruktur

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. dimension reduction
  2. multiclass classification
  3. triplet

Qualifiers

  • Research-article

Conference

ISABEL '11
Sponsor:
  • Technical University of Catalonia Spain
  • River Publishers
  • CTTC
  • CTIF


