
Focused-Region Segmentation for Refocusing Images from Light Fields

  • Published in: Journal of Signal Processing Systems

Abstract

Since the focused regions of a light-field refocusing image contain depth cues, focused-region segmentation is a fundamental step in depth estimation, 3D measurement, and visual measurement. However, in the emerging field of light-field image processing, recent research has emphasized focus evaluation rather than a systematic method for focused-region segmentation. Segmentation algorithms designed for low depth-of-field images are relevant to this problem, but their high time complexity makes them unsuitable for the computationally intensive applications of light-field imaging. Therefore, based on the pulse-synchronization mechanism of the pulse-coupled neural network (PCNN), we establish a model of the neural firing sequence and criteria for pixel classification. We further design a focused-region segmentation algorithm and its parameter settings. Experimental results show that the proposed method segments refocusing images faster than alternative methods and meets the needs of light-field image processing and related applications.
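As a rough illustration of the pulse-synchronization idea, the sketch below (not the authors' model) builds a simplified PCNN whose neurons are driven by a per-pixel focus measure, records the iteration at which each neuron first fires, and thresholds that firing-order map to label focused pixels. The focus stimulus (local Laplacian energy), the linking kernel, and all parameter values (beta, alpha_e, v_e, the cutoff) are illustrative assumptions, not values taken from the paper.

# Minimal sketch of PCNN-based focused-region labeling (illustrative only).
import numpy as np
from scipy.ndimage import laplace, uniform_filter, convolve

def focus_stimulus(gray):
    # Local energy of the Laplacian response; focused pixels give high values.
    energy = uniform_filter(laplace(gray.astype(np.float64)) ** 2, size=7)
    return energy / (energy.max() + 1e-12)

def spcnn_firing_order(S, beta=0.4, alpha_e=0.3, v_e=20.0, n_iter=30):
    # Simplified PCNN: record the iteration at which each neuron first fires.
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])          # linking weights to 8 neighbours
    Y = np.zeros_like(S)                          # pulse outputs
    E = np.ones_like(S)                           # dynamic thresholds, start at stimulus max
    first_fire = np.full(S.shape, n_iter, dtype=int)
    for n in range(n_iter):
        L = convolve(Y, kernel, mode='constant')  # linking input from firing neighbours
        U = S * (1.0 + beta * L)                  # internal activity
        Y = (U > E).astype(np.float64)            # pulse when activity exceeds threshold
        E = np.exp(-alpha_e) * E + v_e * Y        # threshold decays, jumps after a pulse
        newly_fired = (Y > 0) & (first_fire == n_iter)
        first_fire[newly_fired] = n
    return first_fire

# Usage (illustrative): strongly focused pixels fire earliest.
# gray = ...                                      # 2-D grayscale refocusing image
# order = spcnn_firing_order(focus_stimulus(gray))
# focused_mask = order <= 5                       # cutoff chosen for illustration only

In the paper, the pixel-classification criteria are derived from the firing-sequence model itself; the fixed cutoff above only stands in for that step.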



Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (No. 51709083, No. 61671201), the Fundamental Research Funds for the Central Universities (No. 2018B16514), and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 17KJB520010).

Author information


Corresponding author

Correspondence to Jie Shen.


Cite this article

Shen, J., Han, L., Xu, M. et al. Focused-Region Segmentation for Refocusing Images from Light Fields. J Sign Process Syst 90, 1281–1293 (2018). https://doi.org/10.1007/s11265-018-1379-2
