
Multivariate multifractal texture DCGAN synthesis: How well does it work? How does one know?

Published in the Journal of Signal Processing Systems.

Abstract

Deep learning has recently become a standard tool in image processing, involved in numerous and varied tasks, and it is increasingly popular for image synthesis in applications of very different natures. However, research efforts have focused massively on designing new and ever more complex architectures to achieve better performance, often at the price of overlooking the difficult question of assessing the quality of the synthesized images. Focusing on the specific context of pure textures, i.e., images with no geometrical content, the present work proposes a methodology to quantify the quality of images synthesized by deep learning. It makes use of Deep Convolutional Generative Adversarial Networks (DCGANs), a class of trained neural networks commonly used for image synthesis. Because they provide versatile and well-documented texture models, multivariate multifractal fields with rich multiscale cross-statistics (scale-free and multifractal textures) are used as references. A posteriori synthesis quality indices are defined from the statistics of multiscale (wavelet) representations computed on the deep-learning-generated multivariate textures and compared to those of the models. These comparisons objectively quantify both the quality of deep learning texture synthesis and the reproducibility of the training and learning procedures, an approach that departs from reporting only the best-performing training run. The methodology further quantifies objectively how the quality of deep-learning-generated multivariate textures varies with the complexity of the network architecture. Moreover, a priori indices, constructed directly from the loss functions and hence much cheaper to compute, are also proposed and shown to correlate significantly with the costly a posteriori multiscale-representation quality indices.
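To make the idea of an a posteriori index concrete, here is a minimal NumPy sketch comparing log2 wavelet-coefficient variances across dyadic scales between a reference texture and a generated one, and summarizing the discrepancy as a single number. The Haar decomposition, the function names, and the RMS distance used here are illustrative assumptions only; the paper's actual indices rely on full multivariate multifractal (wavelet) statistics, not on this simplified second-order spectrum.

```python
import numpy as np

def haar_scale_variances(img, n_scales=4):
    """Variance of 2D Haar diagonal-detail coefficients at each dyadic scale.

    A crude stand-in for the multiscale (wavelet) statistics discussed
    in the abstract; image sides must be divisible by 2**n_scales.
    """
    x = np.asarray(img, dtype=float)
    variances = []
    for _ in range(n_scales):
        # One Haar step: 2x2 block averages (approximation) and
        # alternating-sign combinations (diagonal detail).
        a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2.0
        d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2.0
        variances.append(d.var())
        x = a  # recurse on the coarser approximation
    return np.array(variances)

def scaling_mismatch_index(tex_model, tex_gan, n_scales=4):
    """Toy quality index: RMS distance between log2 wavelet spectra.

    Zero means the two textures have identical scale-by-scale
    detail variances; larger values mean a worse scaling match.
    """
    s_model = np.log2(haar_scale_variances(tex_model, n_scales))
    s_gan = np.log2(haar_scale_variances(tex_gan, n_scales))
    return float(np.sqrt(np.mean((s_model - s_gan) ** 2)))
```

With such an index in hand, synthesis quality and training reproducibility can both be assessed by computing it over many generated samples and many independent training runs, rather than reporting a single best case.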





Corresponding author

Correspondence to Vincent Mauduit.


Work supported by CBP (Blaise Pascal Center) with the use of SIDUS (Single Instance Distributing Universal System), by ACADEMICS Grant, under IDEXLYON project, within PIA ANR-16-IDEX-0005 and by ANR-16-CE33-0020 MultiFracs Grant.


About this article


Cite this article

Abry, P., Mauduit, V., Quemener, E., et al. Multivariate multifractal texture DCGAN synthesis: How well does it work? How does one know? J Sign Process Syst 94, 179–195 (2022). https://doi.org/10.1007/s11265-021-01701-y

