Abstract
This paper presents a framework named “Classifier Molding” that imitates an arbitrary classifier with a linear regression tree so as to accelerate classification. The framework requires an accurate (but slow) classifier and a large amount of training data. As an example of an accurate classifier, we use the Compound Similarity Method (CSM) for the Industrial Ink Jet Printer (IIJP) character recognition problem. The input-output relationship of the trained CSM is imitated by a linear regression tree trained on a large amount of data. To generate this training data, we developed a character-pattern fluctuation method that simulates the IIJP printing process. The learned linear regression tree can then be used as an accelerated classifier. Based on this classifier, we also developed the Classification-based Character Segmentation (CCS) method, which extracts character patterns from an image so as to maximize the total classification score. Extensive experiments confirmed that the imitated classifiers are 1,500 times faster than the original classifier without lowering the recognition rate, and that the CCS method greatly reduces the segmentation errors of a bottom-up segmentation method.
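The core idea of molding, querying a slow scorer on many inputs and fitting a regression tree with linear models at its leaves to those scores, can be sketched as follows. This is an illustrative one-dimensional approximation, not the authors' implementation: `slow_score` is a synthetic stand-in for an expensive classifier score (such as a CSM similarity), and the tree uses simple median splits rather than the paper's induction procedure.

```python
# Sketch of "molding": replace an expensive scoring function with a
# depth-limited tree whose leaves hold 1-D least-squares linear fits.
import random

def slow_score(x):
    # Stand-in for an expensive classifier score (hypothetical).
    return 1.0 if (x * x - 0.5 * x) > 0.2 else 0.0

def fit_line(xs, ys):
    # Closed-form least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0.0:
        return 0.0, my
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx

def build_tree(xs, ys, depth):
    if depth == 0 or len(xs) < 4:
        a, b = fit_line(xs, ys)
        return ("leaf", a, b)
    # Split at the median x; recurse on each half.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    mid = len(order) // 2
    thr = xs[order[mid]]
    lo, hi = order[:mid], order[mid:]
    return ("node", thr,
            build_tree([xs[i] for i in lo], [ys[i] for i in lo], depth - 1),
            build_tree([xs[i] for i in hi], [ys[i] for i in hi], depth - 1))

def predict(tree, x):
    # Tree traversal plus one linear evaluation: far cheaper than slow_score
    # would be for a real classifier.
    while tree[0] == "node":
        tree = tree[2] if x < tree[1] else tree[3]
    _, a, b = tree
    return a * x + b

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(2000)]
ys = [slow_score(x) for x in xs]       # query the slow classifier once, offline
tree = build_tree(xs, ys, depth=6)     # "mold" its responses into a fast tree
err = sum(abs(predict(tree, x) - y) for x, y in zip(xs, ys)) / len(xs)
```

After training, only `predict` is needed at classification time; the mean absolute error `err` stays small because each leaf's linear model only has to fit a narrow slice of the input range.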
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Ota, T., Wada, T., Nakamura, T. (2011). Classifier Acceleration by Imitation. In: Kimmel, R., Klette, R., Sugimoto, A. (eds) Computer Vision – ACCV 2010. ACCV 2010. Lecture Notes in Computer Science, vol 6495. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-19282-1_52
Print ISBN: 978-3-642-19281-4
Online ISBN: 978-3-642-19282-1