
Enhancing personalized modeling via weighted and adversarial learning

  • Regular Paper

International Journal of Data Science and Analytics

Abstract

Data generation sources such as mobile devices, embedded sensors, and other intelligent equipment have proliferated in recent years, pushing the deployment of deep learning models in a distributed manner. However, traditional distributed deep learning builds a single global model over all collected data and may overlook components that are of vital importance to individual users. In this paper, we propose an adversarial learning framework that allows an individual user to build a personalized model. Our framework consists of two stages: efficient selection of similar data from other users, followed by adversarial training. Instead of selecting similar data by computing hand-designed similarity metrics, we train an auto-encoder and a generative adversarial network (GAN) on the individual user's data and use them to request similar data from other users. To further improve the personalized model's performance, we develop two approaches that combine the requested data with the user's own data. The first applies weighted learning to capture the varying importance of the requested data; the second applies adversarial training to minimize the distribution discrepancy between the requested data and the user's own data. Experimental results demonstrate the effectiveness of the proposed framework.
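The two-stage idea in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's method: a one-component PCA stands in for the learned deep auto-encoder (the paper additionally uses a GAN), the toy data, the 25% selection quantile, and the `exp(-error)` weighting are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: the user's own samples, and a pool of samples
# held by other users drawn from a shifted, wider distribution to
# mimic user-to-user heterogeneity.
own_X = rng.normal(0.0, 1.0, size=(50, 2))
pool_X = rng.normal(0.5, 1.2, size=(200, 2))

def recon_error(X, mean, components):
    """Reconstruction error of a linear auto-encoder (encode, decode, compare)."""
    Z = (X - mean) @ components.T      # encode into the latent space
    X_hat = Z @ components + mean      # decode back to the input space
    return np.linalg.norm(X - X_hat, axis=1)

# "Train" the auto-encoder on the user's own data only; a rank-1 PCA
# plays the role of the encoder/decoder pair.
mean = own_X.mean(axis=0)
_, _, Vt = np.linalg.svd(own_X - mean, full_matrices=False)
components = Vt[:1]

# Stage 1 (data selection): request only the pool samples that the
# user's auto-encoder reconstructs well, i.e. those closest to the
# user's own data distribution. The 25% cutoff is arbitrary.
err = recon_error(pool_X, mean, components)
selected = pool_X[err <= np.quantile(err, 0.25)]

# Stage 2 (weighted learning): down-weight each requested sample by its
# reconstruction error, so less-similar data contributes less to the
# personalized model; exp(-error) maps errors into (0, 1].
weights = np.exp(-recon_error(selected, mean, components))
```

In the paper's second approach, this weighting stage is replaced by adversarial training, where a discriminator is trained to distinguish requested data from the user's own data and the feature extractor is trained to fool it, shrinking the distribution discrepancy instead of merely discounting dissimilar samples.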


References

  1. He, K., Zhang, X., Ren, S., and Sun, J.: “Deep residual learning for image recognition,” in IEEE CVPR, (2016)

  2. Lai, S., Xu, L., Liu, K., and Zhao, J.: “Recurrent convolutional neural networks for text classification,” in AAAI, (2015)

  3. Dong, X., Yu, L., Wu, Z., Sun, Y., Yuan, L., and Zhang, F.: “A hybrid collaborative filtering model with deep structure for recommender systems,” in AAAI, (2017)

  4. Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Senior, A., Tucker, P., Yang, K., Le, Q.V. et al.: “Large scale distributed deep networks,” in NeurIPS, (2012)

  5. Park, D.H., Kim, H.K., Choi, I.Y., and Kim, J.K.: “A literature review and classification of recommender systems research,” Expert Systems with Applications, (2012)

  6. Cheng, Y., Wang, F., Zhang, P., and Hu, J.: “Risk prediction with electronic health records: A deep learning approach,” in SDM, (2016)

  7. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y.: “Generative adversarial nets,” in NeurIPS, (2014)

  8. Chilimbi, T., Suzue, Y., Apacible, J., and Kalyanaraman, K.: “Project adam: Building an efficient and scalable deep learning training system,” in OSDI, (2014)

  9. Wen, W., Xu, C., Yan, F., Wu, C., Wang, Y., Chen, Y., and Li, H.: “Terngrad: Ternary gradients to reduce communication in distributed deep learning,” in NeurIPS, (2017)

  10. Chen, C.-Y., Choi, J., Brand, D., Agrawal, A., Zhang, W., and Gopalakrishnan, K.: “Adacomp: Adaptive residual gradient compression for data-parallel distributed training,” in AAAI, (2018)

  11. Wang, S., Pi, A., Zhao, X., and Zhou, X.: “Scalable distributed dl training: Batching communication and computation,” in AAAI, (2019)

  12. Wangni, J., Wang, J., Liu, J., and Zhang, T.: “Gradient sparsification for communication-efficient distributed optimization,” in NeurIPS, (2018)

  13. McMahan, H.B., Moore, E., Ramage, D., Hampson, S. et al.: “Communication-efficient learning of deep networks from decentralized data,” in AISTATS, (2016)

  14. Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H.B., Patel, S., Ramage, D., Segal, A., and Seth, K.: “Practical secure aggregation for privacy-preserving machine learning,” in ACM CCS, (2017)

  15. Smith, V., Chiang, C.-K., Sanjabi, M., and Talwalkar, A.S.: “Federated multi-task learning,” in NeurIPS, (2017)

  16. Che, C., Xiao, C., Liang, J., Jin, B., Zho, J., and Wang, F.: “An rnn architecture with dynamic temporal matching for personalized predictions of parkinson’s disease,” in SDM, (2017)

  17. Suo, Q., Ma, F., Yuan, Y., Huai, M., Zhong, W., Zhang, A., and Gao, J.: “Personalized disease prediction using a cnn-based similarity learning method,” in IEEE BIBM, (2017)

  18. Choi, E., Bahadori, M.T., Searles, E., Coffey, C., Thompson, M., Bost, J., Tejedor-Sojo, J., and Sun, J.: “Multi-layer representation learning for medical concepts,” in ACM KDD, (2016)

  19. Huai, M., Miao, C., Suo, Q., Li, Y., Gao, J. and Zhang, A.: “Uncorrelated patient similarity learning,” in SDM, (2018)

  20. Wang, F., Sun, J., Ebadollahi, S.: Composite distance metric integration by leveraging multiple experts’ inputs and its application in patient similarity assessment. Stat Anal Data Mining ASA Data Sci J 5(1), 54–69 (2012)


  21. Li, M., and Wang, L.: “A survey on personalized news recommendation technology,” IEEE Access, (2019)

  22. Luo, F., Ranzi, G., Wang, X., Dong, Z.Y.: Social information filtering-based electricity retail plan recommender system for smart grid end users. IEEE Trans Smart Grid 10(1), 95–104 (2017)


  23. Kouki, P., Fakhraei, S., Foulds, J., Eirinaki, M., and Getoor, L.: “Hyper: A flexible and extensible probabilistic framework for hybrid recommender systems,” in ACM RecSys, (2015)

  24. Hu, L., Cao, L., Wang, S., Xu, G., Cao, J., and Gu, Z.: “Diversifying personalized recommendation with user-session context.” in IJCAI, (2017), pp. 1858–1864

  25. Yu, Z., Lian, J., Mahmoody, A., Liu, G., and Xie, X.: “Adaptive user modeling with long and short-term preferences for personalized recommendation.” in IJCAI, (2019), pp. 4213–4219

  26. Bengio, Y., Courville, A., and Vincent, P.: “Representation learning: A review and new perspectives,” IEEE TPAMI, (2013)

  27. Tzeng, E., Hoffman, J., Darrell, T., and Saenko, K.: “Simultaneous deep transfer across domains and tasks,” in IEEE CVPR, (2015)

  28. Liu, A.H., Liu, Y.-C., Yeh, Y.-Y., and Wang, Y.-C.F.: “A unified feature disentangler for multi-domain image translation and manipulation,” in NeurIPS, (2018)

  29. Gupta, A., Devin, C., Liu, Y., Abbeel, P., and Levine, S.: “Learning invariant feature spaces to transfer skills with reinforcement learning,” in ICLR, (2017)

  30. Misra, I., Shrivastava, A., Gupta, A., and Hebert, M.: “Cross-stitch networks for multi-task learning,” in IEEE CVPR, (2016)

  31. Bouchacourt, D., Tomioka, R., and Nowozin, S.: “Multi-level variational autoencoder: Learning disentangled representations from grouped observations,” in AAAI, (2018)

  32. Narayanaswamy, S., Paige, T.B., van de Meent, J.-W., Desmaison, A., Goodman, N., Kohli, P., Wood, F., and Torr, P.: “Learning disentangled representations with semi-supervised deep generative models,” in NeurIPS, (2017)

  33. Zadrozny, B.: “Learning and evaluating classifiers under sample selection bias,” in Machine Learning, Proceedings of the Twenty-first International Conference (ICML 2004), Banff, Alberta, Canada, July 4-8, 2004, ser. ACM International Conference Proceeding Series, C.E. Brodley, Ed., vol. 69. ACM, 2004. [Online]. Available: https://doi.org/10.1145/1015330.1015425

  34. Wen, J., Yu, C.-N., and Greiner, R.: “Robust learning under uncertain test distributions: Relating covariate shift to model misspecification.” in ICML, (2014), pp. 631–639

  35. Khodabandeh, M., Vahdat, A., Ranjbar, M., and Macready, W.G.: “A robust learning approach to domain adaptive object detection,” in Proceedings of the IEEE International Conference on Computer Vision, (2019), pp. 480–490

  36. Wang, X., and Schneider, J.: “Flexible transfer learning under support and model shift,” in Advances in Neural Information Processing Systems, 2014, pp. 1898–1906

  37. Huang, J., Gretton, A., Borgwardt, K., Schölkopf, B., and Smola, A.J.: “Correcting sample selection bias by unlabeled data,” in Advances in neural information processing systems, (2007), pp. 601–608

  38. Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., and Smola, A.J.: “A kernel method for the two-sample-problem,” in Advances in neural information processing systems, (2007), pp. 513–520

  39. Schonlau, M., DuMouchel, W., Ju, W.-H., Karr, A.F., Theusan, M., Vardi, Y., et al.: “Computer intrusion: Detecting masquerades,” Statistical Science, (2001)

  40. Phan, N., Ebrahimi, J., Kil, D., Piniewski, B., and Dou, D.: “Topic-aware physical activity propagation in a health social network,” IEEE Intelligent Systems, (2015)

  41. Ruder, S.: “An overview of multi-task learning in deep neural networks,” arXiv preprint arXiv:1706.05098, (2017)

  42. Song, C., Ristenpart, T., and Shmatikov, V.: “Machine learning models that remember too much,” in ACM CCS, (2017)

  43. Phan, N., Wang, Y., Wu, X., and Dou, D.: “Differential privacy preservation for deep auto-encoders: an application of human behavior prediction,” in AAAI, (2016)

  44. Xie, L., Lin, K., Wang, S., Wang, F., and Zhou, J.: “Differentially private generative adversarial network,” CoRR, (2018)

  45. Duchi, J.C., Jordan, M.I., and Wainwright, M.J.: “Local privacy and statistical minimax rates,” in IEEE FOCS, (2013)

  46. Settles, B.: Active learning literature survey. University of Wisconsin-Madison Department of Computer Sciences, Tech. Rep. (2009)

  47. Du, W., and Wu, X.: “Advpl: Adversarial personalized learning,” in DSAA, (2020)


Acknowledgements

This work was supported in part by NSF grants 1920920, 1937010, 1940093, and 1946391. This article is an extension of the conference version [47]. We thank the anonymous reviewers for their constructive comments.

Author information

Corresponding author

Correspondence to Xintao Wu.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Du, W., Wu, X. Enhancing personalized modeling via weighted and adversarial learning. Int J Data Sci Anal 12, 1–14 (2021). https://doi.org/10.1007/s41060-021-00263-3
