Discrete Denoising Diffusion Approach to Integer Factorization

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14254)

Abstract

Integer factorization is a famous computational problem for which it is not known whether it can be solved in polynomial time. With the rise of deep neural networks, it is natural to ask whether they can facilitate faster factorization. We present an approach to factorization that utilizes deep neural networks and discrete denoising diffusion, and that works by iteratively correcting errors in a partially correct solution. To this end, we develop a new seq2seq neural network architecture, employ a relaxed categorical distribution, and adapt the reverse diffusion process to better cope with inaccuracies in the denoising step. The approach is able to find factors of integers up to 56 bits long. Our analysis indicates that investment in training leads to an exponential decrease in the number of sampling steps required at inference to reach a given success rate, thus counteracting the exponential growth of run time with bit length.

Supported by Latvian Council of Science, project “Smart Materials, Photonics, Technologies and Engineering Ecosystem”, project No. VPP-EM-FOTONIKA-2022/1-0001; Latvian Quantum Initiative under European Union Recovery and Resilience Facility project no. 2.3.1.1.i.0/1/22/I/CFLA/001; the Latvian Council of Science project lzp-2021/1-0479; Google Research Grants; NVIDIA Academic Grant “Deep Learning of Algorithms”.
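
To make the procedure in the abstract concrete, the sketch below shows the general shape of a discrete-diffusion reverse sampler for factor bits. It is a minimal illustration, not the authors' implementation: DenoisingNet is a hypothetical stand-in for the paper's seq2seq architecture, and the bit encoding and linear noise schedule are assumptions made for the example.

```python
# Minimal sketch (NOT the paper's implementation) of reverse-diffusion
# sampling for factor bits: start from random bits, then repeatedly let
# a denoising model propose corrections conditioned on the integer N.
import torch

BITS = 56  # the paper reports factoring integers of up to 56 bits

class DenoisingNet(torch.nn.Module):
    """Hypothetical stand-in for the paper's seq2seq model: maps
    (noisy factor bits, N's bits) to per-bit logits over {0, 1}."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * BITS, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, 2 * BITS))  # 2 logits per bit

    def forward(self, x_bits, n_bits):
        logits = self.net(torch.cat([x_bits, n_bits], dim=-1))
        return logits.view(-1, BITS, 2)

def reverse_diffusion(model, n_bits, steps=100):
    x = torch.randint(0, 2, (1, BITS)).float()           # fully noised x_T
    for t in reversed(range(steps)):
        probs = torch.softmax(model(x, n_bits), dim=-1)  # predicted x_0
        keep = t / steps                                 # noise fraction kept
        # Mix the prediction with uniform noise, mimicking the discrete
        # forward process run back one step (schedule is illustrative).
        probs = (1 - keep) * probs + keep * 0.5
        x = torch.distributions.Categorical(probs=probs).sample().float()
    return x  # candidate factor bits

# Usage: decode the bits to an integer and test it by trial division;
# an untrained net yields random candidates, so training does the work.
N = 229 * 233                                            # toy semiprime
n_bits = torch.tensor([[float(b) for b in format(N, f"0{BITS}b")]])
candidate = reverse_diffusion(DenoisingNet(), n_bits)
```

In this framing, each reverse step spends compute to repair some of the remaining incorrect bits, which matches the abstract's observation that more training buys fewer sampling steps for the same success rate.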

Notes

  1. Hoogeboom et al. [10] parametrize the neural network with t instead. This is equivalent once the noise schedule is fixed.
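
To see why the footnote's equivalence holds: a fixed noise schedule is a monotone map from timestep t to noise level α_t, so either quantity determines the other. The toy sketch below makes this concrete; the cosine schedule and bisection inversion are illustrative assumptions, not taken from the paper.

```python
# Toy illustration: with a fixed monotone schedule, t and the noise
# level alpha_t carry the same information, since the map is invertible.
import math

T = 1000

def alpha(t):
    # Illustrative cosine schedule: strictly decreasing from 1 to ~0.
    return math.cos(0.5 * math.pi * t / T) ** 2

def t_from_alpha(a, tol=1e-9):
    # Invert the schedule by bisection on [0, T].
    lo, hi = 0.0, float(T)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if alpha(mid) > a:   # alpha decreases in t: target lies above mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t = 137.0
assert abs(t_from_alpha(alpha(t)) - t) < 1e-3  # t recovered from alpha_t
```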

References

  1. Austin, J., Johnson, D.D., Ho, J., Tarlow, D., van den Berg, R.: Structured denoising diffusion models in discrete state-spaces. Adv. Neural Inform. Process. Syst. 34, 17981–17993 (2021)

  2. Bachlechner, T., Majumder, B.P., Mao, H., Cottrell, G., McAuley, J.: ReZero is all you need: fast convergence at large depth. arXiv preprint arXiv:2003.04887 (2020)

  3. Buhler, J.P., Lenstra, H.W., Pomerance, C.: Factoring integers with the number field sieve. In: Lenstra, A.K., Lenstra, H.W. (eds.) The development of the number field sieve. LNM, vol. 1554, pp. 50–94. Springer, Heidelberg (1993). https://doi.org/10.1007/BFb0091539

  4. Draguns, A., Ozolinš, E., Šostaks, A., Apinis, M., Freivalds, K.: Residual shuffle-exchange networks for fast processing of long sequences. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 7245–7253 (2021)

  5. Freivalds, K., Liepins, R.: Improving the neural GPU architecture for algorithm learning. In: The ICML Workshop Neural Abstract Machines and Program Induction v2 (NAMPI 2018) (2018)

  6. Freivalds, K., Ozolinš, E., Šostaks, A.: Neural shuffle-exchange networks - sequence processing in O(n log n) time. Adv. Neural Inform. Process. Syst. 32, 6626–6637. Curran Associates Inc. (2019)

  7. Gaile, E., Draguns, A., Ozolinš, E., Freivalds, K.: Unsupervised training for neural TSP solver. In: Learning and Intelligent Optimization: 16th International Conference, LION 16, Milos Island, Greece, June 5–10, 2022, Revised Selected Papers, pp. 334–346. Springer (2023). https://doi.org/10.1007/978-3-031-24866-5_25

  8. Hendrycks, D., Gimpel, K.: Gaussian Error Linear Units (GELUs). arXiv preprint arXiv:1606.08415 (2016)

  9. Ho, J., Saharia, C., Chan, W., Fleet, D.J., Norouzi, M., Salimans, T.: Cascaded diffusion models for high fidelity image generation. J. Mach. Learn. Res. 23(47), 1–33 (2022)

  10. Hoogeboom, E., Nielsen, D., Jaini, P., Forré, P., Welling, M.: Argmax flows and multinomial diffusion: Learning categorical distributions. Adv. Neural Inform. Process. Syst. 34, 12454–12465 (2021)

  11. Jang, E., Gu, S., Poole, B.: Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144 (2016)

  12. Jansen, B., Nakayama, K.: Neural networks following a binary approach applied to the integer prime-factorization problem. In: Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005, vol. 4, pp. 2577–2582. IEEE (2005)

  13. Kaiser, Ł., Sutskever, I.: Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228 (2015)

  14. Kong, Z., Ping, W., Huang, J., Zhao, K., Catanzaro, B.: DiffWave: a versatile diffusion model for audio synthesis. In: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3–7, 2021. OpenReview.net (2021)

  15. Lenstra, A.K.: Integer factoring. Designs, Codes and Cryptography 19(2/3), 31–58 (2000). Special issue: Towards a Quarter-Century of Public Key Cryptography

  16. Meletiou, G., Tasoulis, D.K., Vrahatis, M.N.: A first study of the neural network approach to the RSA cryptosystem. In: IASTED 2002 Conference on Artificial Intelligence, pp. 483–488 (2002)

  17. Ozolins, E., Freivalds, K., Draguns, A., Gaile, E., Zakovskis, R., Kozlovics, S.: Goal-aware neural SAT solver. In: 2022 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2022)

  18. Pollard, J.M.: Monte Carlo methods for index computation. Math. Comput. 32(143), 918–924 (1978)

  19. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. arXiv preprint arXiv:2112.10752 (2021)

  20. Shor, P.W.: Algorithms for quantum computation: discrete logarithms and factoring. In: Proceedings 35th Annual Symposium on Foundations of Computer Science, pp. 124–134 (1994)

  21. Silver, D., et al.: A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science 362(6419), 1140–1144 (2018)

  22. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N.: Deep unsupervised learning using nonequilibrium thermodynamics. In: International Conference on Machine Learning, pp. 2256–2265. PMLR (2015)

  23. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)

  24. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022 (2016)

  25. Vahdat, A., Kreis, K., Kautz, J.: Score-based generative modeling in latent space. Adv. Neural Inform. Process. Syst. 34, 11287–11302 (2021)

  26. Vaswani, A., et al.: Attention is all you need. In: Guyon, I., Luxburg, U.V., et al. (eds.) Adv. Neural Inform. Process. Syst. 30, 5998–6008. Curran Associates Inc. (2017)

  27. Zakovskis, R., Draguns, A., Gaile, E., Ozolins, E., Freivalds, K.: Gates are not what you need in RNNs. arXiv preprint arXiv:2108.00527 (2021)

  28. Zhuang, J., et al.: Adabelief optimizer: adapting stepsizes by the belief in observed gradients. Adv. Neural Inform. Process. Syst. 33, 18795–18806 (2020)

Author information

Correspondence to Kārlis Freivalds.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Freivalds, K., Ozoliņš, E., Bārzdiņš, G. (2023). Discrete Denoising Diffusion Approach to Integer Factorization. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14254. Springer, Cham. https://doi.org/10.1007/978-3-031-44207-0_11

  • DOI: https://doi.org/10.1007/978-3-031-44207-0_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44206-3

  • Online ISBN: 978-3-031-44207-0
