
Multi-scale Fractional-Order Sparse Representation for Image Denoising

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9491)

Abstract

Sparse representation models code image patches as a linear combination of a few atoms selected from a given dictionary. Sparse representation-based image denoising (SRID) models, which learn an adaptive dictionary directly from the noisy image itself, have shown promising results for image denoising. However, because of the noise in the observed image, these conventional models cannot obtain good estimates of the sparse coefficients and the dictionary. To improve the performance of SRID models, we propose a multi-scale fractional-order sparse representation (MFSR) model for image denoising. First, a novel sample space is re-estimated by correcting the singular values with a non-linear fractional-order technique in the wavelet domain. Then, the denoised image is reconstructed with the accurate sparse coefficients and the optimal dictionary in this novel sample space. Experimental results show that the proposed MFSR model performs much better than conventional SRID models and other state-of-the-art image denoising algorithms in terms of accuracy, efficiency and robustness.
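The abstract does not give the exact formulation, but the core step it describes, correcting the singular values of the noisy sample space with a non-linear fractional-order operation in the wavelet domain before sparse coding, can be sketched as follows. This is a minimal illustrative sketch in Python: the fractional order alpha, the wavelet choice, the patch size and all function names are assumptions made for illustration, not the authors' actual algorithm.

```python
# Hedged sketch of the idea described in the abstract: raise the singular
# values of a noisy patch-sample matrix to a fractional power before the
# sparse-coding stage. All parameters and names below are assumptions.
import numpy as np
import pywt  # assumed available for the wavelet decomposition


def fractional_sv_correction(sample_matrix, alpha=0.9):
    """Re-estimate a (num_patches x num_pixels) sample matrix by raising
    its singular values to a fractional order alpha in (0, 1]."""
    U, s, Vt = np.linalg.svd(sample_matrix, full_matrices=False)
    s_corrected = np.power(s, alpha)          # non-linear fractional-order correction
    return (U * s_corrected) @ Vt


def build_sample_space(noisy_image, alpha=0.9, wavelet="db4", patch=8):
    """Apply the correction to the approximation subband of a one-level
    wavelet decomposition and return the re-estimated sample matrix."""
    cA, details = pywt.dwt2(noisy_image, wavelet)
    # Collect non-overlapping patches from the approximation subband as rows.
    rows = []
    H, W = cA.shape
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            rows.append(cA[i:i + patch, j:j + patch].ravel())
    samples = np.stack(rows)
    return fractional_sv_correction(samples, alpha)
```

Under this reading, the corrected sample matrix would then feed a dictionary-learning and sparse-coding stage (for example K-SVD with OMP) to reconstruct the denoised image.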



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants No. 61273251, No. 61402203 and No. 61401209.

Author information

Corresponding author

Correspondence to Leilei Geng.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Geng, L., Sun, Q., Fu, P., Yuan, Y. (2015). Multi-scale Fractional-Order Sparse Representation for Image Denoising. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol 9491. Springer, Cham. https://doi.org/10.1007/978-3-319-26555-1_52

  • DOI: https://doi.org/10.1007/978-3-319-26555-1_52

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26554-4

  • Online ISBN: 978-3-319-26555-1
