Abstract:
In the last few decades, dynamic job scheduling problems (DJSPs) have received increasing attention from researchers and practitioners. However, the potential of reinforcement learning (RL) methods has not been adequately exploited for solving DJSPs. In this work, a deep Q-network (DQN) model is applied to train an agent that learns to schedule jobs dynamically by minimizing the delay time of jobs. The DQN model is trained in a discrete-event simulation experiment and tested by comparing the trained agent against two popular dispatching rules, shortest processing time and earliest due date. The obtained results indicate that the DQN model outperforms both dispatching rules.
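The abstract does not include code; the sketch below is a minimal, hypothetical illustration of the two baseline dispatching rules it names, shortest processing time (SPT) and earliest due date (EDD). The Job structure and field names are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class Job:
    """Hypothetical job record; fields are illustrative, not from the paper."""
    job_id: int
    processing_time: float  # time the job needs on the machine
    due_date: float         # absolute time by which the job should finish


def spt(queue: list[Job]) -> Job:
    """Shortest processing time: dispatch the job with the smallest processing time."""
    return min(queue, key=lambda j: j.processing_time)


def edd(queue: list[Job]) -> Job:
    """Earliest due date: dispatch the job with the earliest due date."""
    return min(queue, key=lambda j: j.due_date)


if __name__ == "__main__":
    queue = [Job(1, 5.0, 20.0), Job(2, 2.0, 30.0), Job(3, 8.0, 10.0)]
    print("SPT picks job", spt(queue).job_id)  # job 2 (processing_time 2.0)
    print("EDD picks job", edd(queue).job_id)  # job 3 (due_date 10.0)
```

Under this reading, each rule is a fixed priority function over the job queue; the paper's contribution is a DQN agent whose dispatching decisions, learned from the simulation, outperform these static priorities on job delay.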
Published in: 2020 Winter Simulation Conference (WSC)
Date of Conference: 14-18 December 2020
Date Added to IEEE Xplore: 29 March 2021