Research on Data Generation Model Based on Improved SeqGAN

Published: 30 July 2021

ABSTRACT

With the growing demand from integrated energy metering services and the rise of artificial intelligence, data generation models for digital equipment have become a focus of attention. Implicit generative methods based on GANs, the most widely used approach in image generation, have strong development potential and extend well to other domains. Incorporating reinforcement learning makes GAN-based algorithms applicable to the generation of discrete data. This paper proposes an improved SeqGAN model: it restructures the original SeqGAN, improving its roll-out module by using model parameters that lag behind the generator, which increases the stability of reinforcement learning on long sequences. Compared with several popular existing algorithms, the proposed model performs significantly better once training runs long enough (more than 150 training rounds), laying a foundation for its application to data generation for digital equipment.
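The abstract's central modification is a roll-out module whose parameters lag behind the generator's. As a purely illustrative aid, the following is a minimal PyTorch-style sketch of how such a module could be structured: a Monte Carlo roll-out policy kept as a delayed copy of the generator and used to estimate discriminator-based rewards for partial sequences. The class name, the `complete` method, and the keep rate are all assumptions made here for illustration; the paper's actual implementation is not reproduced in this abstract.

```python
import copy
import torch

class LaggedRollout:
    """Roll-out policy held as a delayed copy of the generator (sketch)."""

    def __init__(self, generator, keep_rate=0.8):
        # Start as an exact copy of the generator's current weights.
        self.policy = copy.deepcopy(generator)
        self.keep_rate = keep_rate  # fraction of the old (lagging) weights kept

    def sync(self, generator):
        # Soft update: blend a small share of the new generator weights into
        # the roll-out policy, so its parameters trail the generator. This is
        # one plausible reading of "parameters lagging behind the generator".
        with torch.no_grad():
            for p_lag, p_gen in zip(self.policy.parameters(),
                                    generator.parameters()):
                p_lag.mul_(self.keep_rate).add_(p_gen, alpha=1.0 - self.keep_rate)

    def reward(self, partial_seqs, seq_len, discriminator, n_rollouts=16):
        # SeqGAN-style Monte Carlo reward: complete each partial sequence
        # several times with the lagged policy and average the discriminator's
        # scores. `complete` is a hypothetical method of the generator class.
        total = torch.zeros(partial_seqs.size(0))
        with torch.no_grad():
            for _ in range(n_rollouts):
                full = self.policy.complete(partial_seqs, seq_len)
                total += discriminator(full).squeeze(-1)
        return total / n_rollouts
```

Called after each generator update, `sync` lets the roll-out policy change more slowly than the generator, which is one way to realize the stabilizing effect on long-sequence reinforcement learning that the abstract claims.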


Published in

ICSCA '21: Proceedings of the 2021 10th International Conference on Software and Computer Applications
February 2021, 325 pages
ISBN: 9781450388825
DOI: 10.1145/3457784
Copyright © 2021 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
