
A zoom tracking algorithm based on defocus difference

  • Original Research Paper
  • Published in: Journal of Real-Time Image Processing

Abstract

Autofocus algorithms, such as peak-search and zoom tracking algorithms, commonly rely on an image clarity evaluation function to assess image sharpness. This paper proposes an Improved Feedback Zoom Tracking (IFZT) method based on defocus difference, which uses the amount of defocus difference as the measure of image blur. The IFZT algorithm modifies the revision criterion for the feedback revision point and removes the relatively complex PID algorithm. IFZT determines the direction toward the in-focus motor position according to the amount of defocus difference and uses the depth-from-defocus method to estimate the ideal focus position. The calculation formulas for the defocus difference and the ideal focus position are derived. Finally, the algorithm was tested on an integrated camera; the experimental results show that the IFZT algorithm, using the amount of defocus, achieves good tracking accuracy and a clear improvement over other zoom tracking algorithms, and that its overall performance meets the requirements of a zoom tracking algorithm.
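The full text is not included here, so the sketch below is only a hypothetical illustration of the feedback step described in the abstract: the sign of the defocus difference between consecutive frames selects the direction toward the in-focus motor position, and a depth-from-defocus style estimate of the remaining blur sets the step size. The function name, the gain constant, and the motor-step logic are assumptions, not the authors' implementation.

# Hypothetical sketch of one feedback step of a defocus-difference-driven
# zoom tracking loop (illustrative only; not the authors' code).

def tracking_step(sigma_prev: float, sigma_curr: float,
                  focus_pos: int, last_move: int, gain: float = 100.0):
    """Return (next focus-motor position, signed move just commanded).

    sigma_prev, sigma_curr -- estimated defocus (blur) before/after the last move
    focus_pos              -- current focus-motor position (steps)
    last_move              -- signed size of the previous motor move (steps)
    gain                   -- placeholder scale from blur units to motor steps
    """
    defocus_diff = sigma_curr - sigma_prev
    # If defocus grew after the last move, the in-focus position lies in the
    # opposite direction; otherwise keep moving the same way.
    direction = 1 if last_move >= 0 else -1
    if defocus_diff > 0:
        direction = -direction
    # Step size proportional to the remaining defocus, standing in for the
    # depth-from-defocus estimate of the ideal focus position.
    step = max(1, int(round(abs(sigma_curr) * gain)))
    move = direction * step
    return focus_pos + move, move

# Example: defocus shrank after moving forward, so keep moving forward.
pos, move = tracking_step(sigma_prev=1.8, sigma_curr=1.2, focus_pos=500, last_move=20)
print(pos, move)  # 620 120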



Acknowledgements

This project is supported by the National Natural Science Foundation of China (Grant No. 52075483).

Author information

Corresponding author

Correspondence to Jiayu Ji.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Derivation of formula (9): \(I^{\prime}\left(x,y\right)=I\left(x,y\right)+\frac{1}{4}{\sigma}^{2}{\nabla}^{2}I^{\prime}\left(x,y\right)\)

In our application, we usually approximate the image by a third-order polynomial:

$$I\left(x,y\right)=\sum_{0\le m+n\le 3} a_{m,n}\,x^{m}y^{n},\qquad A1\le x\le A2,\;\; B1\le y\le B2,$$
(24)

where A1, A2, B1, and B2 are the image boundaries.

Applying the forward S transform, the blurred image is

$$I^{\prime}\left(x,y\right)=I\left(x,y\right)*h_{\sigma}\left(x,y\right)=\sum_{0\le m+n\le 3}\frac{\left(-1\right)^{m+n}}{m!\,n!}\,h_{m,n}\,I^{m,n}.$$
(25)
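Here \(h_{m,n}\) denotes the (m, n)-th moment of the point spread function and \(I^{m,n}\) the corresponding partial derivative of the image; the definitions below follow the standard S-transform notation and are added here for readability:

$$h_{m,n}=\iint u^{m}v^{n}\,h\left(u,v\right)\,{\mathrm{d}}u\,{\mathrm{d}}v,\qquad I^{m,n}\left(x,y\right)=\frac{{\partial }^{m+n}I\left(x,y\right)}{\partial {x}^{m}\,\partial {y}^{n}}.$$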

Assuming h(x, y) is circularly symmetric, it can be shown that

$$h_{m,n}=0\quad\text{for }m\text{ odd or }n\text{ odd},\qquad\text{and}\qquad h_{m,n}=h_{n,m}.$$
(26)

For any point spread function,

$${h}_{{0,0}}=1.$$
(27)

From Eqs. (25)–(27), \(I^{\prime}\left(x,y\right)\) becomes

$$I^{\prime}\left(x,y\right)=I^{0,0}+\frac{1}{2!}h_{2,0}I^{2,0}+\frac{1}{2!}h_{0,2}I^{0,2}.$$
(28)

From Eq. (26) we have \(h_{0,1}=h_{1,0}=h_{1,1}=0\) and \(h_{0,2}=h_{2,0}\); applying the inverse S transform, we get:

$$I^{\prime } \left( {x,y} \right) = I\left( {x,y} \right) + \frac{{h_{0,2} }}{2}\left\{ {I^{\prime 2,0} \left( {x,y} \right) + I^{\prime 0,2} \left( {x,y} \right)} \right\}.$$
(29)

From the definition of the moments and of \(\sigma\) (the spread parameter satisfies \({\sigma}^{2}={h}_{2,0}+{h}_{0,2}\), split equally between the two terms by symmetry), we have \({h}_{2,0}={h}_{0,2}={\sigma}^{2}/2\), so

$$I^{\prime } \left( {x,y} \right) = I\left( {x,y} \right) + \frac{{\sigma^{2} }}{4}\left\{ {I^{\prime 2,0} \left( {x,y} \right) + I^{\prime 0,2} \left( {x,y} \right)} \right\}.$$
(30)

Note that \(I^{\prime 2,0} + I^{\prime 0,2} = \nabla^{2} I^{\prime}\) is the Laplacian of the image \(I^{\prime}\left(x,y\right)\); thus, we obtain:

$$I^{\prime } \left( {x,y} \right) = I\left( {x,y} \right) + \frac{1}{4}\sigma^{2} \nabla^{2} I^{\prime } \left( {x,y} \right).$$
(31)
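Equation (31) can be checked numerically by blurring a cubic test image with a Gaussian point spread function and inverting the relation pointwise to recover \({\sigma}^{2}\). The NumPy/SciPy sketch below is illustrative only: the grid size, the polynomial coefficients, and the per-axis standard deviation s are arbitrary choices, and scipy.ndimage.gaussian_filter merely stands in for the defocus PSF. With the moment definition used above, a Gaussian of per-axis standard deviation s has \({\sigma}^{2}=2{s}^{2}\).

# Numerical sanity check of Eq. (31) on a synthetic cubic image
# (illustrative sketch only; not code from the paper).

import numpy as np
from scipy.ndimage import gaussian_filter

# Cubic test image I(x, y) on a pixel grid with unit spacing.
y, x = np.mgrid[0:256, 0:256].astype(float)
I = 1e-3 * (x**3 + y**3) + 5e-3 * x * y**2 + 0.1 * x + 50.0

s = 2.0                                # per-axis Gaussian std (pixels)
I_blur = gaussian_filter(I, sigma=s)   # blurred image I'

# Discrete Laplacian of the blurred image (exact for polynomials up to cubic).
lap = (np.roll(I_blur, 1, 0) + np.roll(I_blur, -1, 0) +
       np.roll(I_blur, 1, 1) + np.roll(I_blur, -1, 1) - 4.0 * I_blur)

# Invert Eq. (31) pointwise, away from the borders and from tiny Laplacian values.
inner = (slice(20, -20), slice(20, -20))
mask = np.abs(lap[inner]) > 1e-3
sigma2_est = 4.0 * (I_blur[inner] - I[inner])[mask] / lap[inner][mask]

print(f"median estimated sigma^2: {np.median(sigma2_est):.3f}")
print(f"expected 2*s^2          : {2.0 * s * s:.3f}")

For s = 2 the median estimate should come out very close to 8, up to the discretization of the sampled Gaussian kernel and floating-point error.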


About this article

Cite this article

Wang, X., Zhu, Y. & Ji, J. A zoom tracking algorithm based on defocus difference. J Real-Time Image Proc 18, 2417–2428 (2021). https://doi.org/10.1007/s11554-021-01133-8
