Abstract
We investigate a method for finding local clusters in low-dimensional subspaces of high-dimensional data, e.g. high-dimensional image descriptions. Using cluster centers instead of the full data set speeds up learning algorithms for object recognition, and may also improve accuracy because overfitting is avoided. On the Graz01 database, our method outperforms a current standard method for feature extraction from high-dimensional image representations.
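The core idea (clustering high-dimensional local image descriptors and feeding only the cluster centers to the learner) can be illustrated with a minimal sketch. This is not the paper's algorithm, which finds clusters in local low-dimensional subspaces; here plain k-means stands in as the clustering step, and the 128-dimensional random vectors are a hypothetical stand-in for SIFT-like descriptors:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; returns the k cluster centers, shape (k, d)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # recompute each center as the mean of its assigned points;
        # keep the old center if a cluster becomes empty
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

# toy stand-in for high-dimensional local image descriptors
rng = np.random.default_rng(1)
descriptors = rng.normal(size=(500, 128))  # e.g. 128-D SIFT-like vectors
centers = kmeans(descriptors, k=10)
print(centers.shape)  # (10, 128): 10 centers replace 500 descriptors
```

A downstream learner (e.g. a boosting classifier over weak hypotheses, as in the paper) would then operate on the 10 centers rather than all 500 descriptors, which is the source of the claimed speed-up.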
This work was presented in a preliminary version at the First Austrian Cognitive Vision Workshop (ACVW 05), Zell an der Pram, January 2005.
© 2006 Springer-Verlag Berlin Heidelberg
Savu-Krohn, C., Auer, P. (2006). A Simple Feature Extraction for High Dimensional Image Representations. In: Saunders, C., Grobelnik, M., Gunn, S., Shawe-Taylor, J. (eds) Subspace, Latent Structure and Feature Selection. SLSFS 2005. Lecture Notes in Computer Science, vol 3940. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11752790_11
Print ISBN: 978-3-540-34137-6
Online ISBN: 978-3-540-34138-3