
Light-Weight Permutation Generator for Efficient Convolutional Neural Network Data Augmentation

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13569)

Abstract

Permutation is a fundamental form of data augmentation. However, it is rarely used in image-based systems with hardware acceleration, because it distorts spatial correlation and is costly to generate. This paper proposes the Restricted Permutation Network (RPN), a scalable architecture that automatically generates a restricted subset of local permutations, preserving the features of the dataset while simplifying generation to improve scalability. RPN reduces the spatial complexity from \(O(N\log N)\) to \(O(N)\), making it easily scalable to 64 inputs and beyond, with a 21-times speed-up in generation and a significant reduction in data storage and transfer, while maintaining the same level of accuracy as the original dataset for deep learning training. Experiments show that Convolutional Neural Networks (CNNs) trained on the augmented dataset can be as accurate as those trained on the original one. Combining three to five networks generally improves accuracy by 5%. Network training can be accelerated by training multiple sub-networks in parallel with a reduced training dataset and fewer epochs, yielding up to a 5-times speed-up with negligible loss in accuracy. This opens up the opportunity to split a long iterative training process into independent parallelizable processes, facilitating the trade-off between resources and run time.
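The idea of a restricted local permutation can be illustrated in software. The sketch below is a minimal, assumed interpretation (not the paper's hardware architecture): a single stage of pairwise swap switches conditionally exchanges adjacent image patches, so each pixel stays near its original position (preserving local spatial correlation) and only O(N) switch decisions are needed, in contrast to the O(N log N) switches of a full Beneš permutation network. The function name and parameters are illustrative only.

```python
import numpy as np

def restricted_local_permutation(image, block=4, rng=None):
    """Augment an image with one stage of pairwise patch-swap switches.

    Horizontally adjacent block x block patches are conditionally swapped,
    driven by one random control bit per patch pair. This realises a
    restricted subset of local permutations with O(N) switch decisions.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(0, h, block):
        for x in range(0, w - block, 2 * block):
            if rng.random() < 0.5:  # control bit of this swap switch
                left = out[y:y + block, x:x + block].copy()
                out[y:y + block, x:x + block] = out[y:y + block, x + block:x + 2 * block]
                out[y:y + block, x + block:x + 2 * block] = left
    return out
```

Because every switch only exchanges two fixed patches, the output is always a permutation of the input pixels, and the augmented images can be regenerated on the fly from the random control bits rather than stored, which is the storage/transfer saving the abstract refers to.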



Acknowledgement

The support of the Croucher Foundation, the UK EPSRC (grant number EP/V028251/1, EP/L016796/1, EP/S030069/1 and EP/N031768/1) and Xilinx is gratefully acknowledged.

Author information

Correspondence to Bowen P. Y. Kwan.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kwan, B.P.Y., Guo, C., Luk, W., Jiang, P. (2022). Light-Weight Permutation Generator for Efficient Convolutional Neural Network Data Augmentation. In: Gan, L., Wang, Y., Xue, W., Chau, T. (eds) Applied Reconfigurable Computing. Architectures, Tools, and Applications. ARC 2022. Lecture Notes in Computer Science, vol 13569. Springer, Cham. https://doi.org/10.1007/978-3-031-19983-7_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19982-0

  • Online ISBN: 978-3-031-19983-7

  • eBook Packages: Computer Science, Computer Science (R0)
