Texture-Based Data Augmentation for Small Datasets

  • Conference paper
  • First Online:
Advanced Concepts for Intelligent Vision Systems (ACIVS 2023)

Abstract

This paper proposes a texture-based, domain-specific data augmentation technique for training deep learning classifiers on small datasets. Our method focuses on label preservation to improve generalization and optimization robustness over data-dependent augmentation methods. We generate a small perturbation in an image based on a randomly sampled texture image. The textures we use are naturally occurring and domain-independent of the training dataset, spanning the regular, near-regular, irregular, near-stochastic, and stochastic classes. Our method uses these textures to apply sparse, patterned occlusion to images, together with a penalty regularization term during training that helps ensure label preservation. We evaluate our method against the competitive soft-label Mixup and RICAP data augmentation methods with the ResNet-50 architecture on the unambiguous “Bird or Bicycle” and Oxford-IIIT Pet datasets, as well as a random sample of the Open Images dataset. Using out-of-distribution examples, we experimentally validate the importance of label preservation and improved generalization, and show that our method outperforms the competing methods.
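As a rough illustration of the idea described above — not the authors' exact procedure — sparse, patterned occlusion can be sketched by thresholding a randomly cropped texture patch into a binary mask and blending the image toward the texture only where the mask is active. The function name, the `threshold`, and the blend weight `alpha` are hypothetical choices for this sketch:

```python
import numpy as np

def texture_occlusion(image, texture, threshold=0.6, alpha=0.5, rng=None):
    """Apply a sparse, texture-patterned occlusion to an image.

    A binary mask is obtained by thresholding the grayscale intensity of a
    randomly cropped texture patch; masked pixels are blended toward the
    texture, giving a small, structured perturbation rather than a solid
    cutout. Assumes H x W x C float arrays with values in [0, 255] and a
    texture at least as large as the image.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    th, tw = texture.shape[:2]
    # Random crop of the texture to the image size.
    y = rng.integers(0, th - h + 1)
    x = rng.integers(0, tw - w + 1)
    patch = texture[y:y + h, x:x + w]
    # Sparse mask: occlude only where the texture is bright.
    gray = patch.mean(axis=-1, keepdims=True) / 255.0
    mask = (gray > threshold).astype(image.dtype)
    # Blend occluded pixels toward the texture patch.
    out = image * (1 - alpha * mask) + patch * (alpha * mask)
    return out.astype(image.dtype)
```

Because the mask follows the texture's own pattern, the perturbation is sparse for regular textures and scattered for stochastic ones; the label-preservation penalty used during training is a separate component not sketched here.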

Notes

  1. pytorch.org.

  2. numpy.org.

References

  1. Bergstra, J., et al.: Making a science of model search: hyperparameter optimization in hundreds of dimensions for vision architectures. In: ICML, pp. 115–123. PMLR (2013)

  2. Brown, T.B., et al.: Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352 (2018)

  3. Cimpoi, M., et al.: Describing textures in the wild. In: CVPR, pp. 3606–3613 (2014)

  4. DeVries, T., Taylor, G.W.: Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552 (2017)

  5. Guo, H., et al.: MixUp as locally linear out-of-manifold regularization. In: AAAI, vol. 33, pp. 3714–3722 (2019)

  6. He, K., et al.: Deep residual learning for image recognition. In: CVPR, pp. 770–778 (2016)

  7. Hernández-García, A., König, P.: Data augmentation instead of explicit regularization. arXiv preprint arXiv:1806.03852 (2018)

  8. Hinton, G., et al.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)

  9. Krizhevsky, A.: Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto (2009)

  10. Kuznetsova, A., et al.: The Open Images Dataset V4: unified image classification, object detection, and visual relationship detection at scale. Int. J. Comput. Vis. 128(7), 1956–1981 (2020)

  11. Kylberg, G.: Kylberg texture dataset v. 1.0. Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University (2011)

  12. Lazebnik, S., et al.: A sparse texture representation using local affine regions. IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1265–1278 (2005)

  13. Nesterov, Y.: A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady 27(2) (1983)

  14. Parkhi, O.M., et al.: Cats and dogs. In: CVPR, pp. 3498–3505. IEEE (2012)

  15. Rahaman, N., et al.: On the spectral bias of neural networks. In: ICML, pp. 5301–5310. PMLR (2019)

  16. Rolnick, D., et al.: Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694 (2017)

  17. Rudin, L.I., et al.: Nonlinear total variation based noise removal algorithms. Phys. D: Nonlinear Phenom. 60(1–4), 259–268 (1992)

  18. Shafahi, A., et al.: Are adversarial examples inevitable? In: ICLR (2019)

  19. Sharif, M., et al.: On the suitability of Lp-norms for creating and preventing adversarial examples. In: CVPRW, pp. 1605–1613 (2018)

  20. Summers, C., Dinneen, M.J.: Improved mixed-example data augmentation. In: WACV, pp. 1262–1270. IEEE (2019)

  21. Summers, C., Dinneen, M.J.: Nondeterminism and instability in neural network optimization. In: ICML, pp. 9913–9922. PMLR (2021)

  22. Takahashi, R., et al.: Data augmentation using random image cropping and patching for deep CNNs. IEEE Trans. Circuits Syst. Video Technol. 30(9), 2917–2931 (2019)

  23. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: ICCV, pp. 839–846. IEEE (1998)

  24. Zagoruyko, S., Komodakis, N.: Wide residual networks. In: BMVC. BMVA (2016)

  25. Zhang, C., et al.: Understanding deep learning (still) requires rethinking generalization. Commun. ACM 64(3), 107–115 (2021)

  26. Zhang, H., et al.: MixUp: beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 (2017)

  27. Zhong, Z., et al.: Random erasing data augmentation. In: AAAI, vol. 34, pp. 13001–13008 (2020)

  28. Zoph, B., et al.: Rethinking pre-training and self-training. In: NeurIPS, pp. 3833–3845. ACM (2020)

Author information

Correspondence to Amanda Dash.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dash, A., Albu, A.B. (2023). Texture-Based Data Augmentation for Small Datasets. In: Blanc-Talon, J., Delmas, P., Philips, W., Scheunders, P. (eds) Advanced Concepts for Intelligent Vision Systems. ACIVS 2023. Lecture Notes in Computer Science, vol 14124. Springer, Cham. https://doi.org/10.1007/978-3-031-45382-3_29

  • DOI: https://doi.org/10.1007/978-3-031-45382-3_29

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45381-6

  • Online ISBN: 978-3-031-45382-3

  • eBook Packages: Computer Science, Computer Science (R0)
