Invention Concept Latent Spaces for Analogical Ideation

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 647)

Abstract

Analogy is a powerful form of ideation; an automated or semi-automated analogical method is therefore a potentially useful way to develop, or at least inspire, new and possibly patentable ideas.

The last few years have seen significant developments in the training and use of latent spaces for text generation with Variational Autoencoders (VAEs). Many problems remain, however, including preventing ‘collapse’ of the latent space during training and successfully disentangling the latent variables, in particular separating syntax from semantics.
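
A common mitigation for this collapse is to anneal the weight on the KL term of the VAE objective, forcing the decoder to use the latent code before the posterior is pulled fully toward the prior. The sketch below illustrates the idea in PyTorch; it is a generic formulation, not the objective used in this paper, and anneal_steps is an illustrative hyperparameter.

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_logits, targets, mu, logvar, step, anneal_steps=10_000):
    """VAE objective with linear KL annealing to mitigate posterior collapse.

    A minimal sketch; `anneal_steps` is an assumed hyperparameter, not a
    value taken from the paper.
    """
    # Token-level reconstruction loss (cross-entropy over the vocabulary).
    recon = F.cross_entropy(
        recon_logits.reshape(-1, recon_logits.size(-1)),
        targets.reshape(-1),
        reduction="mean",
    )
    # KL divergence between the diagonal-Gaussian posterior and a unit Gaussian.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Ramp the KL weight from 0 to 1 so the decoder cannot simply ignore
    # the latent code early in training.
    beta = min(1.0, step / anneal_steps)
    return recon + beta * kl
```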

A hierarchical sentence-and-document variational denoising autoencoder architecture is presented, in which encoded sentence vectors are generated first, and the sequence of these sentence vectors within a document is then itself encoded and decoded. The latent vectors for both sentences and documents are structured into ‘syntactic’ and ‘semantic’ subsections based on their use in auxiliary training tasks. A large dataset of patent titles and abstracts, along with their IPC6 codes, is used to train the VAE networks.
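
As a rough illustration of such a two-level scheme, the following PyTorch sketch encodes token sequences into sentence vectors and then encodes the sequence of sentence vectors into a document-level posterior. The GRU encoders, layer sizes, and the single mu/logvar head are assumptions for illustration only; the paper's actual architecture, decoders, and auxiliary training tasks are not reproduced here.

```python
import torch
import torch.nn as nn

class HierarchicalDocEncoder(nn.Module):
    """Illustrative two-level encoder: tokens -> sentence vectors -> document posterior.

    Layer types and sizes are assumptions for illustration, not the paper's
    architecture.
    """

    def __init__(self, vocab_size=30_000, emb=256, sent_dim=128, doc_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.sent_enc = nn.GRU(emb, sent_dim, batch_first=True)
        self.doc_enc = nn.GRU(sent_dim, doc_dim, batch_first=True)
        # Separate heads give the mean and log-variance of the document
        # posterior; the latent vector could be partitioned into 'syntactic'
        # and 'semantic' subsections trained with different auxiliary losses.
        self.mu = nn.Linear(doc_dim, doc_dim)
        self.logvar = nn.Linear(doc_dim, doc_dim)

    def forward(self, doc_tokens):
        # doc_tokens: (batch, n_sentences, n_tokens) of token ids.
        b, s, t = doc_tokens.shape
        emb = self.embed(doc_tokens.reshape(b * s, t))
        _, sent_h = self.sent_enc(emb)                  # (1, b*s, sent_dim)
        sent_vecs = sent_h.squeeze(0).reshape(b, s, -1) # (b, s, sent_dim)
        _, doc_h = self.doc_enc(sent_vecs)              # (1, b, doc_dim)
        doc_h = doc_h.squeeze(0)
        return self.mu(doc_h), self.logvar(doc_h)
```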

The resulting document latent space is used to perform analogy transforms intended to generate, or at least inspire, useful and potentially novel patent concepts.
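
A minimal sketch of such an analogy transform, assuming the classic vector-offset formulation (a is to b as c is to ?) over document latent vectors; the random vectors in the usage example are stand-ins for outputs of a trained encoder, and the decoding step is only indicated in a comment.

```python
import numpy as np

def analogy_transform(z_a, z_b, z_c, alpha=1.0):
    """Vector-offset analogy in latent space: a is to b as c is to ?"""
    # The offset (z_b - z_a) captures the relation between two known
    # concepts; alpha scales how strongly it is applied to z_c.
    return z_c + alpha * (z_b - z_a)

# Toy usage with random stand-ins for encoded document vectors; in practice
# z_a, z_b, z_c would come from the trained document encoder, and z_new
# would be passed through the decoder to produce candidate patent text.
rng = np.random.default_rng(0)
z_a, z_b, z_c = (rng.normal(size=128) for _ in range(3))
z_new = analogy_transform(z_a, z_b, z_c)
```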

Author information

Correspondence to Nicholas Walker.

Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

Walker, N. (2022). Invention Concept Latent Spaces for Analogical Ideation. In: Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P. (eds) Artificial Intelligence Applications and Innovations. AIAI 2022. IFIP Advances in Information and Communication Technology, vol 647. Springer, Cham. https://doi.org/10.1007/978-3-031-08337-2_26

  • DOI: https://doi.org/10.1007/978-3-031-08337-2_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08336-5

  • Online ISBN: 978-3-031-08337-2

  • eBook Packages: Computer Science, Computer Science (R0)
