The role of temporal statistics in the transfer of experience in context-dependent reinforcement learning


Abstract:

Reinforcement learning (RL) is an algorithmic theory of learning optimal action control from experience. Two widely discussed problems in this field are the temporal credit assignment problem and the transfer of experience. The temporal credit assignment problem arises because rewards are delayed, so an action often cannot be judged good or bad at the moment it is taken. The transfer problem asks how experience can be generalized and carried over from the familiar context in which it was acquired to an unfamiliar context where it may nevertheless prove helpful. We propose a controller for modelling such flexibility in a context-dependent reinforcement learning paradigm. The devised controller combines two alternative perfect-learner algorithms. In the first alternative, rewards are predicted from individual objects presented in a temporal sequence. In the second alternative, rewards are predicted on the basis of successive pairs of objects. Simulations run on both deterministic and random temporal sequences show that a previously acquired context could be retrieved only in the case of deterministic sequences. This suggests a role for temporal sequence information in the generalization and transfer of experience.
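The abstract does not specify the learning rule, so the contrast between the two predictor alternatives can only be sketched under assumptions. The toy Python example below uses tabular reward predictors trained with a simple delta rule (both assumptions, not the paper's method): one predictor keyed on the current object alone, the other on the (previous, current) object pair. On a deterministic sequence in which the same object is rewarded in one temporal context but not another, only the pair-based predictor separates the two contexts.

```python
def train_predictors(sequence, rewards, alpha=0.1, epochs=50):
    """Tabular reward predictors updated with a simple delta rule.

    `single` predicts reward from the current object alone;
    `pair` predicts reward from the (previous, current) object pair.
    The tabular form and the delta rule are illustrative assumptions;
    the abstract does not specify the learner.
    """
    single, pair = {}, {}
    for _ in range(epochs):
        prev = None
        for obj, r in zip(sequence, rewards):
            # Delta-rule update toward the observed reward.
            single[obj] = single.get(obj, 0.0) + alpha * (r - single.get(obj, 0.0))
            if prev is not None:
                key = (prev, obj)
                pair[key] = pair.get(key, 0.0) + alpha * (r - pair.get(key, 0.0))
            prev = obj
    return single, pair

# Deterministic sequence: object "B" is rewarded after "A" but not after "C".
seq = ["A", "B", "C", "B"] * 25
rew = [0.0, 1.0, 0.0, 0.0] * 25
single, pair = train_predictors(seq, rew)

# The single-object predictor averages the conflicting outcomes for "B",
# while the pair-based predictor keeps the two temporal contexts apart.
print(single["B"])          # ambiguous estimate between 0 and 1
print(pair[("A", "B")])     # close to 1
print(pair[("C", "B")])     # close to 0
```

On a random sequence the pair keys no longer correspond to stable contexts, which is consistent with the abstract's finding that context retrieval succeeded only for deterministic sequences.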
Date of Conference: 14-16 December 2014
Date Added to IEEE Xplore: 16 April 2015
Conference Location: Kuwait, Kuwait
