Abstract
The present paper focuses on some interesting classes of process-control games, where winning essentially means successfully controlling the process. A master for one of these games is an agent who plays a winning strategy. We investigate situations in which even a complete model (given by a program) of a particular game does not provide enough information to synthesize, even in the limit, a winning strategy. However, if in addition to receiving a program, a machine may also watch masters play winning strategies, then the machine is able to learn in the limit a winning strategy for the given game. We study successful learning both from arbitrary masters and from pedagogically useful selected masters, and show that selected masters are strictly more helpful for learning than arbitrary masters. For learning from arbitrary masters and for learning from selected masters alike, there are cases where one can learn programs for winning strategies from masters but cannot learn a program for the master's strategy itself. In both settings, one can learn strictly more watching m + 1 masters than one can learn watching only m. Lastly, a simulation result is presented in which the presence of a selected master reduces the complexity of learning from infinitely many semantic mind changes to finitely many syntactic ones.
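The learning model sketched above can be illustrated by a toy example (purely illustrative, not the paper's construction): a learner watches a master's moves and, in the limit, converges to a hypothesis for the strategy being played. Here the hypothesis space, the candidate strategies, and the game itself are all hypothetical simplifications; the learner simply outputs the first candidate consistent with everything observed so far.

```python
# Toy sketch of identification in the limit from watching a master play.
# A "strategy" maps a game history (tuple of past moves) to the next move.
# All candidates below are invented for illustration.

CANDIDATES = [
    lambda h: 0,                             # always play 0
    lambda h: len(h) % 2,                    # alternate 0, 1, 0, 1, ...
    lambda h: 1 if h and h[-1] == 0 else 0,  # react to the previous move
]

def learner(observations):
    """Return the index of the first candidate strategy consistent with
    all observed (history, move) pairs. Once every wrong candidate has
    been contradicted by some observation, the output never changes
    again -- convergence in the limit."""
    for i, strategy in enumerate(CANDIDATES):
        if all(strategy(h) == m for h, m in observations):
            return i
    return None

# The master plays candidate 1 (the alternating strategy); the learner
# receives an ever-growing record of the master's play.
master = CANDIDATES[1]
observations, guesses = [], []
history = ()
for _ in range(5):
    move = master(history)
    observations.append((history, move))
    guesses.append(learner(observations))
    history += (move,)

print(guesses)  # the guess sequence stabilizes on candidate 1
```

After the second observation the always-0 candidate is contradicted, and the learner's conjecture stabilizes; in the paper's setting the hypothesis space is all programs rather than a finite list, which is what makes the positive and negative results nontrivial.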
Supported by the Deutsche Forschungsgemeinschaft (DFG) Graduiertenkolleg “Beherrschbarkeit komplexer Systeme” (GRK 209/2-96).
Supported by Australian Research Council Grant A49600456.
Supported by the Deutsche Forschungsgemeinschaft (DFG) Grant Am 60/9-1.
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Case, J., Ott, M., Sharma, A., Stephan, F. (1998). Learning to Win Process-Control Games Watching Game-Masters. In: Richter, M.M., Smith, C.H., Wiehagen, R., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1998. Lecture Notes in Computer Science(), vol 1501. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49730-7_3
Print ISBN: 978-3-540-65013-3
Online ISBN: 978-3-540-49730-1