Abstract:
Deep learning (DL)-based recommender systems (RSs) have advanced remarkably over the past five years, reshaping the architectures of traditional RSs by lifting their limitations in handling data sparsity and cold-start issues. Yet the performance of a DL-based RS, like that of many other DL-based intelligent systems, relies heavily on hyperparameter selection. Unfortunately, the most common selection approach is still grid search, which demands substantial computational resources and human effort. Motivated by this, this paper proposes a general hyperparameter-optimization framework, named DE-Opt, which can be applied seamlessly to most existing DL-based RSs. The main idea of DE-Opt is to incorporate differential evolution (DE) into the model-training process of a DL-based RS at a layer-wise granularity, simultaneously auto-learning its two key hyperparameters: the learning rate η and the regularization coefficient λ. As a result, the system's recommendation accuracy and computational efficiency are both uplifted. Experimental results on three benchmark datasets substantiate that: 1) DE-Opt is compatible with the most recent advances in DL-based RSs, automating their hyperparameter-tuning processes; and 2) DE-Opt outperforms its state-of-the-art hyperparameter-optimization competitors in both learning performance and runtime.
Published in: IEEE Transactions on Services Computing ( Volume: 16, Issue: 4, 01 July-Aug. 2023)
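To make the abstract's core idea concrete, below is a minimal sketch of a standard differential-evolution (DE/rand/1/bin) loop searching over the two hyperparameters (η, λ). The quadratic `val_loss` is a hypothetical stand-in for a real model's validation loss, and the function names and settings are illustrative assumptions, not the paper's actual DE-Opt algorithm (which couples DE with layer-wise model training).

```python
import random

def val_loss(eta, lam):
    # Hypothetical stand-in for a DL-based RS's validation loss;
    # its minimum sits at eta=0.01, lam=0.001 for illustration.
    return (eta - 0.01) ** 2 + (lam - 0.001) ** 2

def de_tune(pop_size=10, gens=50, F=0.5, CR=0.9, seed=0):
    """Classic differential evolution over the 2-D vector (eta, lambda)."""
    rng = random.Random(seed)
    # Initialize a population of candidate (eta, lambda) pairs.
    pop = [[rng.uniform(0.0, 0.1), rng.uniform(0.0, 0.01)]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct candidates other than the target.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutation v = a + F*(b - c), then binomial crossover with target i.
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR else pop[i][d]
                     for d in range(2)]
            # Greedy selection: keep whichever candidate has lower loss.
            if val_loss(*trial) < val_loss(*pop[i]):
                pop[i] = trial
    return min(pop, key=lambda p: val_loss(*p))

best_eta, best_lam = de_tune()
```

The population-based search needs no gradients of the loss with respect to η or λ, which is what lets DE-style tuning replace manual grid search over these hyperparameters.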