Abstract.
The bandit problem involves two factors: exploration, the collection of information about the environment, and exploitation, the gain obtained by choosing the optimal action in an uncertain environment. For exploitation it is desirable to choose only the optimal action, while exploration requires taking a variety of (nonoptimal) actions as trials. Hence, in order to obtain the maximal cumulative gain, we need to compromise between the exploration and exploitation processes. We treat a situation where our actions change the structure of the environment, a simple example of which is formulated as the lob-pass problem by Abe and Takeuchi. In the usual bandit problem the environment is specified by a finite number of unknown parameters, so that the information-collection part amounts to estimating their true values. This paper treats a more realistic situation of nonparametric estimation of the environment structure, which includes an infinite number (a functional degree) of unknown parameters. A strategy is given under such a circumstance, and it is proved that the cumulative regret can be made of order O(log t), O((log t)^2), or O(t^{1-σ}) (0 < σ < 1), depending on the dynamics of the environment, where t is the number of trials, in contrast with the optimal order O(log t) in the parametric case.
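To illustrate the exploration-exploitation compromise and the O(log t) regret rate of the parametric case mentioned above, the following minimal Python sketch runs the standard UCB1 index rule on a two-armed Bernoulli bandit. It is an illustrative stand-in only, not the nonparametric lob-pass strategy developed in this paper; the reward probabilities, horizon, and function name are assumptions chosen here for demonstration.

    import math
    import random

    def ucb1_regret(reward_probs, horizon, seed=0):
        """Run UCB1 on a Bernoulli bandit; return the expected cumulative regret."""
        rng = random.Random(seed)
        n_arms = len(reward_probs)
        counts = [0] * n_arms        # number of pulls per arm
        means = [0.0] * n_arms       # empirical mean reward per arm
        best = max(reward_probs)
        regret = 0.0
        for t in range(1, horizon + 1):
            if t <= n_arms:
                arm = t - 1          # pull each arm once to initialize the indices
            else:
                # index = empirical mean + exploration bonus that shrinks with more pulls
                arm = max(range(n_arms),
                          key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
            reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
            counts[arm] += 1
            means[arm] += (reward - means[arm]) / counts[arm]
            regret += best - reward_probs[arm]   # expected loss of this choice
        return regret

    # Example: regret grows only logarithmically in the number of trials t.
    print(ucb1_regret([0.4, 0.6], horizon=10000))

The square-root bonus term is what keeps nonoptimal arms being sampled occasionally while the empirically best arm dominates in the long run; this is precisely the compromise between exploration and exploitation described in the abstract.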
Received December 14, 1996; revised June 14, 1997, and July 24, 1997.
Cite this article
Hiraoka, K., Amari, S. Strategy Under the Unknown Stochastic Environment: the Nonparametric Lob-Pass Problem. Algorithmica 22, 138–156 (1998). https://doi.org/10.1007/PL00013826