Abstract
We consider exact learning of monotone Boolean functions by membership queries, in the case that only r of the n variables are relevant. The learner proceeds in a number of rounds. In each round he submits to the function oracle a set of queries, which may be chosen depending on the results from previous rounds. In a STOC'98 paper we proved that O(2^r + r log n) queries in O(r) rounds are sufficient. While the query bound is optimal for trivial information-theoretic reasons, it was open whether parallelism can be improved without increasing the number of queries. In the present paper we prove a negative answer: Θ(r) rounds are necessary in the worst case, even for learning a very special type of monotone function. The proof is an adversary argument based on a distance inequality in binary codes. On the other hand, a Las Vegas strategy based on another STOC'98 result can learn monotone functions in 2 log_2 r + O(1) rounds, without using significantly more queries. We also study the constant factors in the deterministic case.
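To make the query model concrete, here is a minimal sketch of a standard building block in this setting (not the paper's algorithm): locating one relevant variable of a monotone function with O(log n) adaptive membership queries, by binary search over prefix points. It assumes the learner already knows a negative point (the all-zeros vector) and a positive point (the all-ones vector); the function name and interface are illustrative.

```python
def find_relevant_variable(f, n):
    """Locate one relevant variable of a monotone Boolean function f
    on n variables, assuming f(0^n) = 0 and f(1^n) = 1.

    Uses O(log n) membership queries, fully adaptive (one query per
    round). Monotonicity guarantees f(prefix(k)) is nondecreasing in k,
    so binary search finds a flip point; the two adjacent prefix points
    differ in exactly one coordinate, which must be relevant.
    """
    def prefix(k):
        # Query point with the first k coordinates set to 1.
        return tuple([1] * k + [0] * (n - k))

    lo, hi = 0, n  # invariant: f(prefix(lo)) = 0, f(prefix(hi)) = 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if f(prefix(mid)):
            hi = mid
        else:
            lo = mid
    # prefix(hi) and prefix(hi - 1) differ only in coordinate hi - 1,
    # and f differs on them, so that coordinate is relevant.
    return hi - 1
```

For example, for f(x) = x[5] OR x[7] on 16 variables, the search returns index 5. Since every query depends on the previous answer, this technique alone costs Θ(log n) rounds per variable; the round-efficient strategies discussed in the paper batch queries to avoid exactly this sequential bottleneck.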
References
D.J. Balding, D.C. Torney: A comparative survey of non-adaptive pooling designs, in: Genetic Mapping and DNA Sequencing (IMA Volumes in Mathematics and Its Applications) (Springer 1995), 133–155
D.J. Balding, D.C. Torney: Optimal pooling designs with error detection, Journal of Comb. Theory A 74 (1996), 131–140
A. Blum, L. Hellerstein, N. Littlestone: Learning in the presence of finitely or infinitely many irrelevant attributes, Journal of Computer and System Sciences 50 (1995), 32–40
N.H. Bshouty, L. Hellerstein: Attribute-efficient learning in query and mistake-bound models, 9th Conf. on Computational Learning Theory COLT'96, 235–243
P. Damaschke: Adaptive versus nonadaptive attribute-efficient learning, 30th ACM Symp. on Theory of Computing STOC’98, 590–596, accepted by Machine Learning
P. Damaschke: Computational aspects of parallel attribute-efficient learning, 9th Int. Workshop on Algorithmic Learning Theory ALT'98, LNAI 1501, 103–111
P. Damaschke: Randomized group testing for mutually obscuring defectives, Info. Proc. Letters 67 (1998), 131–135
A. De Bonis, L. Gargano, U. Vaccaro: Group testing with unreliable tests, Info. Sci. 96 (1997), 1–14
A. De Bonis, U. Vaccaro: Improved algorithms for group testing with inhibitors, Info. Proc. Letters 67 (1998), 57–64
A. Dhagat, L. Hellerstein: PAC learning with irrelevant attributes, 35th IEEE Symp. on Foundations of Computer Science FOCS’94, 64–74
M. Farach, S. Kannan, E. Knill, S. Muthukrishnan: Group testing problems in experimental molecular biology, Compression and Complexity of Sequences '97, 357–367
P. Fischer, N. Klasner, I. Wegener: On the cut-off point for combinatorial group testing, Discrete Applied Math. 91 (1999), 83–92
T. Hofmeister: An application of codes to attribute-efficient learning, 5th European Conf. on Computational Learning Theory EuroCOLT'99, LNAI 1572 (1999), 101–110
J. Kivinen, H. Mannila, E. Ukkonen: Learning hierarchical rule sets, 5th Conf. on Computational Learning Theory COLT’92, 37–44
E. Knill: Lower bounds for identifying subset members with subset queries, 6th ACM-SIAM Symp. on Discrete Algorithms SODA’95, 369–377
N. Littlestone: Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm, Machine Learning 2 (1988), 285–318
M. Plotkin: Binary codes with specified minimum distances, IEEE Trans. Info. Theory 6 (1960), 445–450
R.A. Servedio: Computational sample complexity and attribute-efficient learning, 31st ACM Symp. on Theory of Computing STOC’99
A. Ta-Shma: Classical versus quantum communication complexity, SIGACT News 30(3) (1999), 25–34
R. Uehara, K. Tsuchida, I. Wegener: Optimal attribute-efficient learning of disjunction, parity, and threshold functions, 3rd European Conf. on Computational Learning Theory EuroCOLT'97, LNAI 1208, 171–184
L.G. Valiant: Projection learning, 11th Conf. on Computational Learning Theory COLT’98, 287–293
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Damaschke, P. (2000). Parallel Attribute-Efficient Learning of Monotone Boolean Functions. In: Algorithm Theory - SWAT 2000. SWAT 2000. Lecture Notes in Computer Science, vol 1851. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44985-X_42
Print ISBN: 978-3-540-67690-4
Online ISBN: 978-3-540-44985-0