
Algorithm and code optimizations for real-time passive ranging by imaging detection on single DSP

Original Paper
Signal, Image and Video Processing

Abstract

Monocular vision-based passive ranging systems are attractive for potential applications in navigation, transportation and traffic control, robotics, and air defense. Conventionally, such a system estimates the target distance from the imaging direction together with distance-related features extracted from the image sequence. Porting a traditional feature extraction algorithm to a real-time system is challenging both for the DSP hardware and for the programming optimization, owing to the algorithm's complexity and computational cost. In this paper, a simplified scale-invariant feature transform (SIFT) image matching algorithm and a code optimization procedure are developed, and the resulting improvement in operation speed is demonstrated clearly. For an image sequence of \(256\times 256\) pixels, the operation speed was improved from 4 s per frame to 25–27 frames per second on a single C64x+ DSP core, which fully meets the requirements of a real-time ranging algorithm while the ranging error remains below \(7\,\%\).


Abbreviations

\(n\), \(n+1\) : Subscripts identifying the sampling times

\(\alpha \) : Target azimuth

\(\beta \) : Target pitch

\((x, y, z)\) : Camera coordinates

\(L\) : The target's rotation-invariant feature linearity in the image frames

\(R\) : Distance between target and camera

\(F\) : Camera focal length

\((l, m, n)\) : Direction vector to the target

\(\rho \) : Distance ratio between adjacent sampling times

\(T, S\) : The target and the surveyor (camera), respectively

\(\varphi \) : Angle between the target's trace and the camera's sightline


Acknowledgments

This work has been supported by grants from the National Natural Science Foundation of China (No. 60872136) and the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2011JM8002). We would like to thank GAO Wen-jing and HE Tian-xiang for their support in carrying out the experiments.

Author information

Corresponding author

Correspondence to Xiao-ning Fu.

Appendix

1.1 Ranging model

In this paper, we take the geographic coordinate system o-xyz as the host coordinate system, with north, west, and up as the positive directions of the \(x\)-, \(y\)-, and \(z\)-axes, respectively. It is reasonable to assume that the state of the measurement platform on which the camera is fixed is known from GPS or another onboard system. The known state of the platform includes the azimuth, the pitch, the radial distance to the point \(o\), the velocity, the acceleration, and the attitude.

The platform itself defines another coordinate system O-XYZ, the platform coordinate system. If we take an aircraft as the platform, then the nose, the right wing, and the top of the cabin point along the positive \(Y\)-, \(X\)-, and \(Z\)-axes, in turn; it is also a right-handed coordinate system. The airborne measurement platform in the geographic coordinate system is shown in Fig. 3.

At the \(n\)-th sampling, the platform origin \(O\) is located at \((x_{n}, y_{n}, z_{n})\) in the o-xyz system, and the position of the moving target in the platform coordinate system is expressed in spherical form as \((r_{n},\alpha _{n},\beta _{n})\), where \(\alpha _{n}\) and \(\beta _{n}\) are the target's azimuth and pitch in the platform coordinate system. The sightline from the camera to the target can then be expressed in the geographic coordinate system by the direction vector \((l_{n}, m_{n}, n_{n})\) as follows.

$$\begin{aligned} \begin{pmatrix} l_n \\ m_n \\ n_n \end{pmatrix} = \begin{pmatrix} t_{11}^n & t_{12}^n & t_{13}^n \\ t_{21}^n & t_{22}^n & t_{23}^n \\ t_{31}^n & t_{32}^n & t_{33}^n \end{pmatrix} \begin{pmatrix} \cos \alpha _n \cos \beta _n \\ \sin \alpha _n \cos \beta _n \\ \sin \beta _n \end{pmatrix} \end{aligned}$$
(8)

Here, \(\begin{pmatrix} t_{11}^n & t_{12}^n & t_{13}^n \\ t_{21}^n & t_{22}^n & t_{23}^n \\ t_{31}^n & t_{32}^n & t_{33}^n \end{pmatrix}\) is the transpose of the matrix formed by the direction vectors of the platform's \(X\)-, \(Y\)-, and \(Z\)-axes expressed in o-xyz coordinates.
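In practice, Eq. 8 is one 3×3 matrix–vector product per sampling time. The following C sketch illustrates it in floating point under our own naming; the paper's actual DSP implementation is not shown and would typically use fixed-point arithmetic on the C64x+.

```c
#include <math.h>

/* Sightline direction vector in geographic coordinates, per Eq. 8.
 * T is the 3x3 transform matrix (t_ij^n), row-major; alpha and beta are
 * the target's azimuth and pitch in the platform coordinate system.
 * Names are illustrative, not from the paper's code. */
void sightline(const double T[3][3], double alpha, double beta,
               double dir[3])
{
    /* Unit sightline vector in platform coordinates */
    double u[3] = { cos(alpha) * cos(beta),
                    sin(alpha) * cos(beta),
                    sin(beta) };
    for (int i = 0; i < 3; ++i)
        dir[i] = T[i][0] * u[0] + T[i][1] * u[1] + T[i][2] * u[2];
}
```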

Suppose the target contains a one-dimensional scale \(x_{0}\) that is invariant to the rotation of the camera between two adjacent sampling times. The projection of this scale onto the focal plane is called the target's feature linearity. Now consider the general situation in which both the target and the measurement platform are moving. Figure 4 illustrates the recursive ranging model based on the feature linearity.

Fig. 3  The airborne measurement platform in geographic coordinates

In Fig. 4, \(T\) and \(S\) are the target and the surveyor (camera), respectively, and the subscript \(n\) or \(n+1\) denotes the sampling time. \(T_{n}T_{n+1}\) and \(S_{n}S_{n+1}\) are the traces of the target and the platform between the \(n\)-th and \((n+1)\)-th sampling times, while \(\varphi _{n}\) and \(\varphi _{n+1}\) are the angles between the target's trace and the camera's sightline at each sampling time.

Fig. 4  The ranging model based on feature linearity

Let the focal lengths of the camera's optical system be \(f_{n}\) and \(f_{n+1}\) at the \(n\)-th and \((n+1)\)-th samplings, and let the lengths of the feature linearity in the camera's focal plane be \(L_{n}\) and \(L_{n+1}\) at the same times. From the geometric imaging principle, it follows that

$$\begin{aligned} \frac{r_{n+1}}{r_n }=\frac{f_{n+1} }{f_n }\frac{L_n }{L_{n+1}}\frac{\sin \varphi _{n+1} }{\sin \varphi _n } \end{aligned}$$
(9)
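Equation 9 translates directly into a single expression; a minimal floating-point sketch (function and parameter names are ours) returning the ratio \(r_{n+1}/r_{n}\) is given below.

```c
#include <math.h>

/* Distance ratio r_{n+1}/r_n from Eq. 9: the product of the focal length
 * ratio, the inverse feature-linearity ratio, and the ratio of the sines
 * of the trace-to-sightline angles. Names are illustrative. */
double range_ratio(double f_n, double f_n1, double L_n, double L_n1,
                   double phi_n, double phi_n1)
{
    return (f_n1 / f_n) * (L_n / L_n1) * (sin(phi_n1) / sin(phi_n));
}
```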

1.2 Ranging algorithm

Based on Eq. 9, the following recursive passive ranging equation is derived.

$$\begin{aligned} C_4 r_{n+1}^4 +C_3 r_{n+1}^3 +C_2 r_{n+1}^2 +C_1 r_{n+1} +C_0 =0 \end{aligned}$$
(10)

where

$$\begin{aligned} C_4&= H[1-(l_{n+1} l_n +m_{n+1} m_n +n_{n+1} n_n )^{2}]\end{aligned}$$
(11)
$$\begin{aligned} C_3&= 2H\{l_{n+1} (x_{n+1} -x_n )+m_{n+1} (y_{n+1} -y_n )\nonumber \\&+n_{n+1} (z_{n+1} -z_n )-(l_{n+1} l_n +m_{n+1} m_n +n_{n+1} n_n )\nonumber \\&\times [l_n (x_{n+1} -x_n )+m_n (y_{n+1} -y_n )+n_n (z_{n+1} -z_n )]\}\nonumber \\ \end{aligned}$$
(12)
$$\begin{aligned} C_2&= H\{[l_n (x_{n+1} -x_n )+m_n (y_{n+1} -y_n ) \nonumber \\&+n_n (z_{n+1} -z_n )]^{2}+(x_{n+1} -x_n )^{2}+(y_{n+1} -y_n )^{2}\nonumber \\&+(z_{n+1} -z_n )^{2}\} \end{aligned}$$
(13)
$$\begin{aligned} C_1&= 0\end{aligned}$$
(14)
$$\begin{aligned} C_0&= k_2 r_n^2 +k_1 r_n+k_0 \end{aligned}$$
(15)
$$\begin{aligned} H&= \left( \frac{f_n }{f_{n+1}}\right) ^{2}\left( \frac{L_{n+1} }{L_n }\right) ^{2}\frac{1}{r_n^2 } \end{aligned}$$
(16)

In Eq. 15

$$\begin{aligned} k_2&= (l_{n+1} l_n +m_{n+1} m_n +n_{n+1} n_n )^{2}-1\end{aligned}$$
(17)
$$\begin{aligned} k_1&= 2\{l_n (x_{n+1} -x_n )+m_n (y_{n+1} -y_n ) \nonumber \\&+n_n (z_{n+1} -z_n )-(l_{n+1} l_n +m_{n+1} m_n +n_{n+1} n_n )\nonumber \\&[l_{n+1} (x_{n+1} -x_n )+m_{n+1} (y_{n+1} -y_n ) \nonumber \\&+n_{n+1} (z_{n+1} -z_n )]\} \end{aligned}$$
(18)
$$\begin{aligned} k_0&= [l_{n+1} (x_{n+1} -x_n )+m_{n+1} (y_{n+1} -y_n ) \nonumber \\&+n_{n+1} (z_{n+1} -z_n )]^{2}-(x_{n+1} -x_n )^{2} \nonumber \\&-(y_{n+1}-y_n )^{2}-(z_{n+1} -z_n )^{2} \end{aligned}$$
(19)

Multiplying both sides of Eq. 10 by \(r_{n}^{2}\) gives

$$\begin{aligned}&C_4 r_n^2 r_{n+1}^4 +C_3 r_n^2 r_{n+1}^3 +C_2 r_n^2 r_{n+1}^2\nonumber \\&\quad +C_1 r_n^2 r_{n+1} +C_0 r_n^2 =0 \end{aligned}$$
(20)

Here, we denote

$$\begin{aligned} \left\{ {\begin{array}{l} C_{40} =C_4 r_n^2 \\ C_{30} =C_3 r_n^2 \\ C_{20} =C_2 r_n^2 \\ C_{10} =C_1 r_n^2 =0\quad (C_1 =0) \\ C_{00} =C_0 r_n^2 =k_2 r_n^4 +k_1 r_n^3 +k_0 r_n^2 \\ \end{array}} \right. \end{aligned}$$
(21)

Then Eq. 20 becomes

$$\begin{aligned}&C_{40} r_{n+1}^4 +C_{30} r_{n+1}^3 +C_{20} r_{n+1}^2 +k_2 r_n^4\nonumber \\&\quad +k_1 r_n^3 +k_0 r_n^2 =0 \end{aligned}$$
(22)

Substituting the distance ratio between adjacent sampling times, \(\rho =r_{n}/r_{n+1}\), into Eq. 22, we get

$$\begin{aligned}&(C_{40} +k_2 \rho ^{4})r_{n+1}^4 +(C_{30} +k_1 \rho ^{3})r_{n+1}^3 +(C_{20}\nonumber \\&\quad +k_0 \rho ^{2})r_{n+1}^2 =0 \end{aligned}$$
(23)

Collecting the coefficients in Eq. 23, we get

$$\begin{aligned} D_2 r_{n+1}^4 +D_1 r_{n+1}^3 +D_0 r_{n+1}^2 =0 \end{aligned}$$
(24)

Since the target's distance \(r_{n+1}\ne 0\), Eq. 24 can be divided by \(r_{n+1}^{2}\) and written as

$$\begin{aligned} D_2 r_{n+1}^2 +D_1 r_{n+1}+D_0 =0 \end{aligned}$$
(25)

From Eqs. 10–19 and 21, the coefficients of the ranging equation can be determined, where

$$\begin{aligned} D_2&= (\rho ^{4}-{H}^{\prime })[(l_{n+1} l_n +m_{n+1} m_n +n_{n+1} n_n )^{2}-1]\end{aligned}$$
(26)
$$\begin{aligned} D_1&= 2{H}^{\prime }\{l_{n+1} (x_{n+1} -x_n )+m_{n+1} (y_{n+1} -y_n ) \nonumber \\&+\,n_{n+1} (z_{n+1} -z_n )-(l_{n+1} l_n +m_{n+1} m_n +n_{n+1} n_n ) \nonumber \\&\times \, [l_n (x_{n+1} -x_n )+m_n (y_{n+1} -y_n ) \nonumber \\&+\,n_n (z_{n+1}-z_n )]\}+2\rho ^{3}\{l_n (x_{n+1} -x_n ) \nonumber \\&+\,m_n (y_{n+1} -y_n )+n_n (z_{n+1} -z_n ) \nonumber \\&-\,(l_{n+1} l_n +m_{n+1} m_n +n_{n+1} n_n )\times [l_{n+1} (x_{n+1} -x_n ) \nonumber \\&+\,m_{n+1} (y_{n+1} -y_n )+n_{n+1}\times (z_{n+1} -z_n )]\} \end{aligned}$$
(27)
$$\begin{aligned} D_0&= {H}^{\prime }\{[l_n (x_{n+1} -x_n )+m_n (y_{n+1} -y_n ) \nonumber \\&+\,n_n (z_{n+1} -z_n )]^{2}+(x_{n+1} -x_n )^{2}+(y_{n+1}-y_n )^{2} \nonumber \\&+\,(z_{n+1} -z_n )^{2}\}+\rho ^{2}\{[l_{n+1} (x_{n+1}-x_n ) \nonumber \\&+\,m_{n+1} (y_{n+1} -y_n )+n_{n+1} (z_{n+1} -z_n )]^{2} \nonumber \\&-\,(x_{n+1} -x_n )^{2}-(y_{n+1} -y_n )^{2}-(z_{n+1} -z_n )^{2}\}\nonumber \\\end{aligned}$$
(28)
$$\begin{aligned} {H}^{\prime }&= Hr_n^2 =\left( \frac{f_n }{f_{n+1} }\right) ^{2}\left( \frac{L_{n+1} }{L_n }\right) ^{2} \end{aligned}$$
(29)
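Putting Eqs. 25–29 together, one recursion step reduces to a few dot products followed by the quadratic formula. The C sketch below is a minimal floating-point illustration under assumed names (s_n, s_n1 for the sightline vectors, p_n, p_n1 for the platform positions); it is not the paper's fixed-point, hand-optimized DSP implementation.

```c
#include <math.h>

static double dot3(const double a[3], const double b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

/* One recursion step of the ranging equation, Eqs. 25-29.
 * s_n, s_n1 : unit sightline vectors (l,m,n) at samples n and n+1
 * p_n, p_n1 : platform positions (x,y,z) at samples n and n+1
 * Returns r_{n+1} (> 0) on success, or -1.0 if no valid root exists. */
double range_step(const double s_n[3], const double s_n1[3],
                  const double p_n[3], const double p_n1[3],
                  double f_n, double f_n1, double L_n, double L_n1)
{
    double d[3] = { p_n1[0] - p_n[0], p_n1[1] - p_n[1], p_n1[2] - p_n[2] };
    double rho  = L_n1 / L_n;           /* distance ratio (Sect. 1.2) */
    double Hp   = (f_n / f_n1) * (f_n / f_n1) * rho * rho;  /* H', Eq. 29 */
    double c    = dot3(s_n1, s_n);      /* cosine between the sightlines */
    double an   = dot3(s_n, d);
    double an1  = dot3(s_n1, d);
    double dd   = dot3(d, d);
    double rho2 = rho * rho, rho3 = rho2 * rho, rho4 = rho2 * rho2;

    double D2 = (rho4 - Hp) * (c * c - 1.0);                    /* Eq. 26 */
    double D1 = 2.0 * Hp * (an1 - c * an)
              + 2.0 * rho3 * (an - c * an1);                    /* Eq. 27 */
    double D0 = Hp * (an * an + dd) + rho2 * (an1 * an1 - dd);  /* Eq. 28 */

    /* Degenerate quadratic: fall back to the linear case */
    if (fabs(D2) < 1e-12)
        return (fabs(D1) > 1e-12 && -D0 / D1 > 0.0) ? -D0 / D1 : -1.0;

    double disc = D1 * D1 - 4.0 * D2 * D0;  /* quadratic in r_{n+1}, Eq. 25 */
    if (disc < 0.0)
        return -1.0;
    double r1 = (-D1 + sqrt(disc)) / (2.0 * D2);
    double r2 = (-D1 - sqrt(disc)) / (2.0 * D2);
    /* In practice the physically valid positive root would be chosen,
     * e.g. by continuity with the previous estimate r_n. */
    if (r1 > 0.0) return r1;
    if (r2 > 0.0) return r2;
    return -1.0;
}
```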

In Eqs. 23–28, \(\rho =L_{n+1}/L_{n}\) can be obtained from the target image matching. Between the \(n\)-th and the \((n+1)\)-th frames of the image sequence, we can obtain three matched SIFT key points, which is the minimum requirement for image tracking. In our experiment, \(L_{n}\) (or \(L_{n+1}\)) is taken as the radius of the circumcircle of the three matched points.
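As a sketch of that last step (under the same illustrative naming as above), the circumradius of the triangle through three matched keypoints is \(R = abc/(4K)\), where \(a\), \(b\), \(c\) are the side lengths and \(K\) is the triangle area; \(\rho\) then follows as the ratio of the radii computed from the same three matches in consecutive frames.

```c
#include <math.h>

/* Circumradius R = abc / (4K) of the triangle through three matched
 * keypoints (x1,y1), (x2,y2), (x3,y3); K is the triangle area obtained
 * from the 2-D cross product. Returns -1.0 for near-collinear points. */
double circumradius(double x1, double y1, double x2, double y2,
                    double x3, double y3)
{
    double a = hypot(x2 - x1, y2 - y1);
    double b = hypot(x3 - x2, y3 - y2);
    double c = hypot(x1 - x3, y1 - y3);
    double twiceK = fabs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1));
    if (twiceK < 1e-9)
        return -1.0;              /* degenerate keypoint configuration */
    return (a * b * c) / (2.0 * twiceK);   /* abc / (4K), K = twiceK/2 */
}

/* rho = L_{n+1}/L_n, e.g. circumradius at frame n+1 over frame n */
```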

About this article

Cite this article

Fu, X.-n., Wang, J. Algorithm and code optimizations for real-time passive ranging by imaging detection on single DSP. SIViP 9, 1377–1386 (2015). https://doi.org/10.1007/s11760-013-0590-7


Keywords

Navigation