Improving current interest with item and review sequential patterns for sequential recommendation

https://doi.org/10.1016/j.engappai.2021.104348

Abstract

Sequential recommendation (SR) aims to recommend items based on user information and behavior sequences. Almost all existing SR works construct the short-term preference and the long-term preference from either the user–item interactions or the reviews alone, rather than considering the two types of information simultaneously. In fact, interaction items and reviews both carry the user's semantic information and play significant roles in modeling the user preference. In this paper, we propose a novel model named Parallel Item Sequential Pattern and Review Sequential Pattern (PIRSP) for sequential recommendation. Specifically, PIRSP first learns two kinds of sequential patterns from item and review information, respectively: (1) an item sequential pattern, which uses a gated recurrent unit with an item-attention mechanism to model historical behavior sequences; (2) a review sequential pattern, which uses a convolutional neural network with a target-attention mechanism to model the reviews associated with the interaction items. A fusion gating mechanism then selectively combines the two sequential patterns to learn the short-term preference. Second, we employ a convolutional neural network with aspect information to learn the long-term preference. Finally, we apply a linear fusion to the long-term and short-term preferences to model the user preference and make the final recommendation. Experimental results show that our model outperforms other state-of-the-art methods on the Amazon dataset. Our analysis of PIRSP's recommendation process demonstrates the positive effect of the two types of information and of the fusion gating mechanism on the performance of sequential recommendation.

Introduction

With the explosion of information, the Internet provides various online products and services for users. However, it is difficult for a user to directly select their favorite item from a large number of candidates. To reduce information overload and match the diverse demands of users, personalized recommender systems play an important role in daily life and are widely used on many e-commerce platforms. Such systems can effectively help users select products that satisfy their needs and can increase the revenue of product providers.

In many real application scenarios, users interact with products on online platforms in chronological order, and the next interacted item is strongly related to previously accessed items. For example, after buying baby diapers, a user may be more likely to purchase beer. In addition, there are sequential dependencies between the next interacted item and earlier accessed items. For instance, a user may buy clothes from an online store where they previously had a good experience. Thus, sequential recommendation has become a hot topic in both academic and industrial communities. The task of sequential recommendation is to predict the next item that the user prefers to interact with based on their sequential behaviors (Ren et al., 2020, Wang et al., 2020a). However, users do not always express their preferences explicitly. Thus, accurately modeling the user preference (a latent representation of the user) from the sequential pattern (short-term preference) and the user's general preference (long-term preference) with implicit or explicit feedback becomes the key challenge of sequential recommendation (Tang and Wang, 2018a, Peng et al., 2021).
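For concreteness, the next-item prediction task can be stated in the usual generic form; the notation below is illustrative and not taken from this paper:

```latex
% Generic next-item prediction formulation (illustrative notation).
% Given user u's chronologically ordered interaction sequence
%   S^u = (i^u_1, i^u_2, \dots, i^u_t),
% the recommender scores every candidate item v in the item set and returns the top-N:
\hat{y}_{u,v} \;=\; p\bigl( i^{u}_{t+1} = v \mid S^{u} \bigr), \qquad v \in \mathcal{I}.
```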

Several works have been proposed to capture sequential dynamics in user history interactions. In traditional recommender systems, the Markov chain is extensively used to predict personalized sequential behavior. For example, Factorized Personalized Markov Chains were proposed in Rendle et al. (2010) and later enhanced with similarity-based methods to tackle sparse datasets with sequential dynamics (He and McAuley, 2016). However, the limitation of these methods (Rendle et al., 2010, He and McAuley, 2016) is that they are weak at capturing intricate dynamics in more complex scenarios. Recently, sequential neural networks such as recurrent neural networks (RNNs) (Huang et al., 2018) and their variants, Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), have been introduced into recommendation tasks. These works can characterize sequential user–item interactions and build effective representations of user interactions (Yu et al., 2016, Tang and Wang, 2018b, Ren et al., 2020). In addition, some methods incorporate auxiliary information to enhance the performance of sequential recommendation (Zheng et al., 2020, Ji et al., 2020).

Although existing methods have achieved encouraging performance, we argue that they still have some limitations that may reduce the performance of sequential recommendation. First, previous works (Yan and Zhang, 2019, Cui et al., 2019, Wang et al., 2020b, Hu et al., 2020) learn the user preference with auxiliary information but do not consider the review information that characterizes each item and user. As we know, user reviews (reviews written by any user who purchased the item) often contain rich interest information, and the reviews for an item reveal its different features. Neglecting these meaningful representative features may fail to capture the true user preference and degrade the performance. Second, the user's short-term preference is modeled in a single way, e.g., from the dependency between interaction items or from the relationship between each interaction item and the candidate item. The former methods emphasize that consecutive items are of great significance for building the sequential pattern, while the latter argue that not all interaction items contribute equally to a specific candidate item. We observe that consecutive items and each candidate item play complementary roles in modeling the short-term preference; thus, they should be considered simultaneously.

To resolve these issues, we propose a novel method named Parallel Item Sequential Pattern and Review Sequential Pattern (PIRSP) for sequential recommendation. Incorporating both reviews and interaction items brings rich semantic and sequential information for learning the representation of user preference and strengthens the expressive ability of the model. Specifically, we first encode the reviews written by each user into high-dimensional vectors via a convolutional neural network with multi-aspect information (Aspect-CNN) to build the representation of the long-term preference. Second, we encode the reviews written for each item into high-dimensional vectors via another Aspect-CNN and encode the interaction items into high-dimensional vectors through a gated recurrent unit (GRU). We then form an item sequential pattern and a review sequential pattern from these two kinds of high-dimensional vectors through an item-attention mechanism and a target-attention mechanism, respectively. Additionally, we introduce a fusion gating mechanism over the item and review sequential patterns to build the short-term preference. Finally, a linear fusion is applied to the long-term and short-term preferences to learn the representation of user preference. Experimental results on real-world datasets indicate that our method achieves state-of-the-art performance.
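To make this pipeline concrete, the following is a minimal PyTorch-style sketch of how such a design could be wired together. The hidden sizes, the particular attention forms, the gating parameterization, and the names (PIRSPSketch, review_cnn, etc.) are our own illustrative assumptions, not the authors' implementation; the review and long-term (aspect) encodings are assumed to be pre-computed vectors.

```python
# Minimal sketch of a parallel item/review sequential-pattern model with a
# fusion gate and linear fusion. Illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PIRSPSketch(nn.Module):
    def __init__(self, num_items, d=64, kernel_size=3):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, d, padding_idx=0)
        # Item sequential pattern: GRU over the interaction sequence.
        self.gru = nn.GRU(d, d, batch_first=True)
        # Review sequential pattern: 1-D CNN over (pre-encoded) review vectors.
        self.review_cnn = nn.Conv1d(d, d, kernel_size, padding=kernel_size // 2)
        # Fusion gate and final linear fusion.
        self.gate = nn.Linear(2 * d, d)
        self.fuse = nn.Linear(2 * d, d)

    def forward(self, item_seq, review_seq_vecs, long_term_vec, target_item):
        # item_seq:        (B, L)    item ids of the behavior sequence
        # review_seq_vecs: (B, L, d) review vectors of the interacted items
        # long_term_vec:   (B, d)    long-term preference (e.g. from an Aspect-CNN)
        # target_item:     (B,)      candidate item id
        tgt = self.item_emb(target_item)                      # (B, d)

        # Item sequential pattern: item-attention over GRU states,
        # using the last hidden state as the query (an assumption).
        h, _ = self.gru(self.item_emb(item_seq))              # (B, L, d)
        attn_i = F.softmax((h * h[:, -1:, :]).sum(-1), dim=-1)
        item_pattern = (attn_i.unsqueeze(-1) * h).sum(1)      # (B, d)

        # Review sequential pattern: target-attention over CNN features.
        c = self.review_cnn(review_seq_vecs.transpose(1, 2)).transpose(1, 2)
        attn_r = F.softmax((c * tgt.unsqueeze(1)).sum(-1), dim=-1)
        review_pattern = (attn_r.unsqueeze(-1) * c).sum(1)    # (B, d)

        # Fusion gating: adaptively mix the two sequential patterns.
        g = torch.sigmoid(self.gate(torch.cat([item_pattern, review_pattern], -1)))
        short_term = g * item_pattern + (1 - g) * review_pattern

        # Linear fusion of long-term and short-term preferences, then scoring.
        user_pref = self.fuse(torch.cat([long_term_vec, short_term], -1))
        return (user_pref * tgt).sum(-1)                      # preference score
```

The sigmoid gate lets the model lean on whichever sequential pattern is more informative for a given sequence, which mirrors the role the fusion gating mechanism plays in the description above.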

We summarize our contributions as follows:

  • We propose a novel neural model that exploits interaction items and reviews for sequential recommendation. To the best of our knowledge, this is the first attempt to jointly encode sequential and semantic information for this task with an end-to-end neural model.

  • We introduce a new structure for the short-term preference encoder built on an item sequential pattern and a review sequential pattern. It forms a more precise and expressive representation of the sequential pattern with the help of the fusion gating mechanism.

  • We carry out extensive experiments on three real-world datasets. PIRSP significantly outperforms baselines in terms of Precision, Recall, NDCG, and HR for the sequential recommendation task.

The rest of this paper is structured as follows. Section 2 discusses related work on sequential recommendation. Section 3 presents the PIRSP model in detail. Section 4 describes the experimental setup and reports the corresponding results. Finally, we conclude the paper and outline future work in Section 5.

Section snippets

Related work

We survey related works on recommender systems in two areas: sequential recommendation and review-based recommendation.

The proposed algorithm

In this section, we first present the problem statement. Then, we give an overview of the proposed parallel item sequential pattern and review sequential pattern (PIRSP) model for sequential recommendation. Finally, we formally present the details of PIRSP.

Experimental settings

We conduct experiments on the Amazon dataset, which contains 142.8 million reviews spanning May 1996 to July 2014. It is a public dataset of item reviews from Amazon and is regularly used as a benchmark for recommender systems. The review file describes the interaction information, i.e., user, item, review text, product rating, and review time, where the review text is the review written by the user for the item; we refer to it as the user review in this manuscript. We select three
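As a rough illustration of how such review files are typically turned into behavior sequences, here is a hypothetical preprocessing sketch; the file name is a placeholder, and the field names follow the public per-category Amazon review dumps rather than anything specified in this paper.

```python
# Hypothetical preprocessing sketch for per-category Amazon review files
# (one JSON object per line, gzip-compressed).
import gzip
import json
from collections import defaultdict


def build_user_sequences(path="reviews_Example_Category.json.gz"):
    """Group (time, item, rating, review text) records per user, ordered by time."""
    interactions = defaultdict(list)
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            r = json.loads(line)
            interactions[r["reviewerID"]].append(
                (r["unixReviewTime"], r["asin"], r["overall"], r["reviewText"])
            )
    # Sort each user's interactions chronologically to obtain behavior sequences.
    return {u: sorted(recs) for u, recs in interactions.items()}
```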

Conclusion

In this paper, we propose a novel framework that incorporates purchased items and review information for sequential recommendation. Specifically, we first apply a CNN-based neural network to build the user's long-term preference. Second, we use an RNN-based neural network and a CNN-based neural network separately to learn two kinds of sequential patterns. At the same time, we introduce a fusion gating mechanism to adaptively balance the importance of the two sequential patterns for modeling the user's

CRediT authorship contribution statement

Jinjin Zhang: Conceptualization, Methodology, Writing - original draft, Software, Validation, Formal analysis, Investigation. Xiaodong Mu: Writing - review & editing, Project administration, Investigation. Peng Zhao: Supervision, Project administration. Kai Kang: Data curation, Investigation. Chenhui Ma: Validation, Formal analysis, Writing - review & editing, Visualization, Resources.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (32)

  • Cui, Q., et al., 2019. A hierarchical contextual attention-based network for sequential recommendation. Neurocomputing.

  • Yan, C., et al., 2019. Merging visual features and temporal dynamics in sequential recommendation. Neurocomputing.

  • Chen, X., et al. Sequential recommendation with user memory networks.

  • Chin, J.Y., et al. ANR: Aspect-based neural recommender.

  • Guo, G., et al. Dynamic item block and prediction enhancing block for sequential recommendation.

  • He, X., et al. Neural collaborative filtering.

  • He, R., et al., 2016. Fusing similarity models with Markov chains for sparse sequential recommendation.

  • Hidasi, B., et al. Recurrent neural networks with top-k gains for session-based recommendations.

  • Hidasi, B., Karatzoglou, A., Baltrunas, L., Tikk, D., 2016. Session-based recommendations with recurrent neural...

  • Hu, H., et al., 2020. Modeling personalized item frequency information for next-basket recommendation.

  • Huang, X., et al., 2018. CSAN: Contextual self-attention network for user sequential recommendation.

  • Ji, M., Joo, W., Song, K., Kim, Y., Moon, I., 2020. Sequential recommendation with relation-aware kernelized...

  • Kang, W., et al. Self-attentive sequential recommendation.

  • Kim, D., et al. Convolutional matrix factorization for document context-aware recommendation.

  • Li, C., et al. A review-driven neural model for sequential recommendation.

  • Ma, C., Ma, L., Zhang, Y., Sun, J., Liu, X., Coates, M., 2020. Memory augmented graph neural networks for sequential...