Abstract
In the PAC learning model, the learner must output a good approximation of the target concept knowing only whether each example is "positive" or "negative". This restriction narrows the class of learnable concepts, or increases the number of examples required for successful learning. If, however, the learner receives some additional information about each example beyond its positive/negative label, e.g., a real value corresponding to the degree of positiveness (or negativeness), a larger class may become learnable or the number of necessary examples may be reduced.
In the case of learning geometric concepts, such additional information may indicate, for instance, how close a given positive or negative example lies to the boundary of the target geometric concept. For geometric concepts with complex boundaries, this type of additional information may allow the learner to identify the concept more precisely, or with fewer sampled examples for the required accuracy. In the case of neural networks of threshold functions, some of the weighted sums of inputs computed at nodes, say the value at the output node, may be output instead of simply producing 0 or 1 after comparing those values with the thresholds at the nodes. Such a value represents how close an input is to the threshold boundary.
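As an illustrative sketch (not taken from the paper), the threshold-unit setting above can be pictured as a node that reports, alongside its usual 0/1 output, the raw margin between the weighted sum and the threshold; the function name and parameters here are hypothetical:

```python
def threshold_unit(weights, threshold, inputs):
    """Return (label, margin): the usual 0/1 threshold output, plus the
    signed distance of the weighted sum from the threshold.

    In the standard PAC setting the learner sees only `label`; the
    additional information discussed above corresponds to also
    observing `margin`, i.e. how close the input lies to the
    threshold boundary.
    """
    s = sum(w * x for w, x in zip(weights, inputs))
    label = 1 if s >= threshold else 0
    return label, s - threshold

# Example: weighted sum is 2*1 + (-1)*1 + 3*0 = 1.
label, margin = threshold_unit([2, -1, 3], 1, [1, 1, 0])
```

A learner that observes `margin` learns strictly more per example than one that observes only `label`, which is the intuition behind the reduced sample sizes discussed in this note.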
This note investigates the effect of such additional information in computational learning theory. Two types of networks of threshold-like functions are considered, and it is demonstrated how certain additional information enables these networks to be learned more efficiently.
© 1993 Springer-Verlag Berlin Heidelberg
Kakihara, Ki., Imai, H. (1993). Notes on the PAC learning of geometric concepts with additional information. In: Doshita, S., Furukawa, K., Jantke, K.P., Nishida, T. (eds) Algorithmic Learning Theory. ALT 1992. Lecture Notes in Computer Science, vol 743. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57369-0_44
Print ISBN: 978-3-540-57369-2
Online ISBN: 978-3-540-48093-8