
Defocus blur detection using novel local directional mean patterns (LDMP) and segmentation via KNN matting

  • Research Article
  • Published in Frontiers of Computer Science

Abstract

Detection and segmentation of defocus blur is a challenging task in digital imaging applications, as blurry images comprise blur and sharp regions that carry significant information and require effective methods for information extraction. Existing defocus blur detection and segmentation methods have several limitations, i.e., difficulty discriminating between sharp smooth and blurred smooth regions, low recognition rates in noisy images, and high computational cost in the absence of prior knowledge about the images, i.e., blur degree and camera configuration. Hence, there is a dire need for an effective method for defocus blur detection and segmentation that is robust to the above-mentioned limitations. This paper presents a novel feature descriptor, local directional mean patterns (LDMP), for defocus blur detection, and employs KNN matting over the detected LDMP-Trimap for robust segmentation of sharp and blur regions. We hypothesize that most image fields located in blurry regions exhibit significantly fewer distinctive local patterns than those in sharp regions; therefore, the proposed LDMP feature descriptor should reliably detect defocus blurred regions. The fusion of LDMP features with KNN matting provides superior performance in terms of obtaining high-quality segmented regions in the image. Additionally, the proposed LDMP feature descriptor is robust to noise and successfully detects defocus blur in highly noisy images. Experimental results on the Shi and Zhao datasets demonstrate the effectiveness of the proposed method for defocus blur detection. Evaluation and comparative analysis show that our method achieves superior segmentation performance at a low computational cost of 15 seconds.
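The abstract does not reproduce the exact LDMP formulation, but the underlying idea of the descriptor family it belongs to (LBP-style local pattern encoding over directional neighborhood means) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name `directional_mean_code` and the parameter `r` are hypothetical, and the thresholding scheme is the generic one used by local-binary-pattern descriptors.

```python
import numpy as np

def directional_mean_code(img, r=2):
    """Illustrative LBP-style code: for each pixel, set one bit per compass
    direction, indicating whether the mean intensity of the r samples along
    that direction exceeds the centre pixel's intensity.
    NOTE: a hypothetical sketch of the local-directional-pattern idea,
    not the paper's exact LDMP definition."""
    img = img.astype(np.float64)
    h, w = img.shape
    # 8 compass directions as (dy, dx) steps
    dirs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    pad = np.pad(img, r, mode='edge')  # replicate borders so edges are defined
    code = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(dirs):
        # accumulate the r samples along this direction for every pixel at once
        acc = np.zeros((h, w))
        for k in range(1, r + 1):
            acc += pad[r + k * dy : r + k * dy + h, r + k * dx : r + k * dx + w]
        mean_dir = acc / r
        code |= (mean_dir > img).astype(np.uint8) << bit
    return code
```

The motivation stated in the abstract maps directly onto such a code: sharp regions produce many distinct pattern codes in a local window, while defocused (smoothed) regions collapse to few codes, so local code histograms can separate the two before the trimap/matting stage.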


References

  1. Krishnamurthy B, Sarkar M. Deep-learning network architecture for object detection. U.S. Patent 10,019,655, 2018

  2. Price B L, Schiller S, Cohen S, Xu N. Image matting using deep learning. Google Patents, 2019

  3. Liu C, Liu W, Xing W. A weighted edge-based level set method based on multi-local statistical information for noisy image segmentation. Journal of Visual Communication and Image Representation, 2019, 59: 89–107


  4. Gast J, Roth S. Deep video deblurring: the devil is in the details. In: Proceedings of the IEEE International Conference on Computer Vision Workshops. 2019

  5. Gvozden G, Grgic S, Grgic M. Blind image sharpness assessment based on local contrast map statistics. Journal of Visual Communication and Image Representation, 2018, 50: 145–158


  6. Shi J, Xu L, Jia J. Discriminative blur detection features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014, 2965–2972

  7. Vu C T, Phan T D, Chandler D M. S3: a spectral and spatial measure of local perceived sharpness in natural images. IEEE Transactions on Image Processing, 2011, 21(3): 934–945


  8. Su B, Lu S, Tan C L. Blurred image region detection and classification. In: Proceedings of the 19th ACM International Conference on Multimedia, Scottsdale, Arizona. 2011

  9. Zhuo S, Sim T. Defocus map estimation from a single image. Pattern Recognition, 2011, 44(9): 1852–1858


  10. Zhu X, Cohen S, Schiller S, Milanfar P. Estimating spatially varying defocus blur from a single image. IEEE Transactions on Image Processing, 2013, 22(12): 4879–4891


  11. Tang C, Hou C, Song Z. Defocus map estimation from a single image via spectrum contrast. Optics Letters, 2013, 38(10): 1706–1708


  12. Zhang X, Wang R, Jiang X, Wang W, Gao W. Spatially variant defocus blur map estimation and deblurring from a single image. Journal of Visual Communication and Image Representation, 2016, 35: 257–264


  13. Tai Y W, Brown M S. Single image defocus map estimation using local contrast prior. In: Proceedings of the 16th IEEE International Conference on Image Processing. 2009, 1797–1800

  14. Shan Q, Jia J, Agarwala A. High-quality motion deblurring from a single image. ACM Transactions on Graphics (Tog), 2008, 27(3): 1–10


  15. Rajabzadeh T, Vahedian A, Pourreza H. Static object depth estimation using defocus blur levels features. In: Proceedings of the 6th International Conference on Wireless Communications Networking and Mobile Computing. 2010, 1–4

  16. Mavridaki E, Mezaris V. No-reference blur assessment in natural images using Fourier transform and spatial pyramids. In: Proceedings of IEEE International Conference on Image Processing (ICIP). 2014, 566–570

  17. Lin J, Ji X, Xu W, Dai Q. Absolute depth estimation from a single defocused image. IEEE Transactions on Image Processing, 2013, 22(11): 4545–4550


  18. Zhou C, Lin S, Nayar S K. Coded aperture pairs for depth from defocus and defocus deblurring. International Journal of Computer Vision, 2011, 93(1): 53–72


  19. Liu R, Li Z, Jia J. Image partial blur detection and classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2008, 1–8

  20. Tang C, Wu J, Hou Y, Wang P, Li W. A spectral and spatial approach of coarse-to-fine blurred image region detection. IEEE Signal Processing Letters, 2016, 23(11): 1652–1656


  21. Yi X, Eramian M. LBP-based segmentation of defocus blur. IEEE Transactions on Image Processing, 2016, 25(4): 1626–1638


  22. Hassen R, Wang Z, Salama M M. Image sharpness assessment based on local phase coherence. IEEE Transactions on Image Processing, 2013, 22(7): 2798–2810


  23. Xiao H, Lu W, Li R, Zhong N, Yeung Y, Chen J. Defocus blur detection based on multiscale SVD fusion in gradient domain. Journal of Visual Communication and Image Representation, 2019, 59: 52–61


  24. Chakrabarti A, Zickler T, Freeman W T. Analyzing spatially-varying blur. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2010

  25. Golestaneh S A, Karam L J. Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017, 5800–5809

  26. Zhao W, Zheng B, Lin Q, Lu H. Enhancing diversity of defocus blur detectors via cross-ensemble network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, 8905–8913

  27. Zhang Y, Hirakawa K. Blur processing using double discrete wavelet transform. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2013, 1091–1098

  28. Shi J, Xu L, Jia J. Just noticeable defocus blur detection and estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015, 657–665

  29. Pang Y, Zhu H, Li X, Li X. Classifying discriminative features for blur detection. IEEE Transactions on Cybernetics, 2015, 46(10): 2220–2227


  30. Kim B, Son H, Park S J, Cho S, Lee S. Defocus and motion blur detection with deep contextual features. Computer Graphics Forum, 2018, 277–288

  31. Park J, Tai Y W, Cho D, Kweon I S. A unified approach of multi-scale deep and hand-crafted features for defocus estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017, 1736–1745

  32. Tang C, Zhu X, Liu X, Wang L, Zomaya A. DeFusionNET: defocus blur detection via recurrently fusing and refining multi-scale deep features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, 2700–2709

  33. Nigam S, Singh R, Misra A. Local binary patterns based facial expression recognition for efficient smart applications. In: Hassanien A, Elhoseny M, Ahmed S, Singh A, eds. Security in Smart Cities: Models, Applications and Challenges. Springer, Cham, 2019, 297–322


  34. Kumar G S, Mohan P K. Local mean differential excitation pattern for content based image retrieval. SN Applied Sciences, 2019, 1(1): 1–10


  35. Zhao W, Zhao F, Wang D, Lu H. Defocus blur detection via multi-stream bottom-top-bottom fully convolutional network. In: Proceedings of the IEEE Conference on Computer vision and Pattern Recognition. 2018, 3080–3088


Acknowledgements

This work was supported and funded by the Directorate ASR&TD of UET-Taxila.

Author information


Corresponding author

Correspondence to Ali Javed.

Additional information

Awais Khan is currently working toward the MS degree as a full-time research scholar at the Department of Computer Science, University of Engineering and Technology, Pakistan. He graduated from the University of Wah, Pakistan in 2017 with a Bachelor of Science degree in Computer Science. His research interests lie in computer vision, neural networks, machine learning and data science.

Aun Irtaza completed his PhD in 2016 at FAST-NU, Islamabad, Pakistan. During his PhD he worked as a research scientist at the Gwangju Institute of Science and Technology (GIST), South Korea. He became an Associate Professor in 2017 and chair of the Department of Computer Science in 2018 at the University of Engineering and Technology (UET) Taxila, Pakistan. He is currently a visiting Associate Professor at the University of Michigan-Dearborn. His research areas include computer vision, multimedia forensics and big data analytics. He has more than 40 publications in IEEE, Springer, and Elsevier journals.

Ali Javed received the BSc degree in Software Engineering with honors, securing 3rd position, from UET Taxila, Pakistan in 2007. He received his MS and PhD degrees in Computer Engineering from UET Taxila, Pakistan in 2010 and 2016, respectively. He received the Chancellor’s Gold Medal for his MS Computer Engineering degree. He is serving as an Assistant Professor in the Software Engineering Department at UET Taxila, Pakistan. He served as a Postdoctoral Scholar in the SMILES lab at Oakland University, USA in 2019 and as a visiting PhD scholar in the ISSF Lab at the University of Michigan, USA in 2015. His areas of interest are image processing, computer vision, medical image processing, video content analysis, machine learning and multimedia forensics.

Tahira Nazir is currently working toward the PhD degree at the Department of Computer Science, University of Engineering and Technology, Pakistan. She received her MS (CS) from the Department of Computer Science, UET Taxila, Pakistan in 2016. Her research interests are computer vision, medical imaging, machine learning and data science.

Hafiz Malik is an Associate Professor in the Electrical and Computer Engineering (ECE) Department at the University of Michigan — Dearborn, USA. His research in the areas of automotive cybersecurity, IoT security, sensor security, multimedia forensics, steganography/steganalysis, information hiding, pattern recognition, and information fusion is funded by the National Science Foundation, National Academies, Ford Motor Company, and other agencies. He has published more than 100 papers in leading journals, conferences, and workshops. He is a founding member of the Cybersecurity Center for Research, Education, and Outreach at UM-Dearborn and a member of the leadership circle of the Dearborn Artificial Intelligence Research Center at UM-Dearborn. He is also a member of the Scientific and Industrial Advisory Board (SIAB) of the National Center of Cyber Security Pakistan.

Khalid Mahmood Malik (Senior Member, IEEE) is currently an Assistant Professor with the School of Engineering and Computer Science, Oakland University, USA. His research interests include multimedia forensics, development of intelligent decision support systems using analysis of medical imaging and clinical text, secure multicast protocols for intelligent transportation systems, and automated ontology and knowledge graph generation. His research is supported by the National Science Foundation (NSF), Brain Aneurysm Foundation, and Oakland University.

Muhammad Ammar Khan recently completed his MS degree in Computer Science at the University of Engineering and Technology, Pakistan. He completed his undergraduate degree in Computer Science at the University of Wah, Pakistan in 2017. His research interests are computer vision, machine learning, data science and neural networks.


About this article


Cite this article

Khan, A., Irtaza, A., Javed, A. et al. Defocus blur detection using novel local directional mean patterns (LDMP) and segmentation via KNN matting. Front. Comput. Sci. 16, 162702 (2022). https://doi.org/10.1007/s11704-020-9526-x

