Onboard FPGA-based fast estimation of point object coordinates for linear IR-sensor

https://doi.org/10.1016/j.micpro.2017.04.009

Abstract

This paper describes a relatively simple algorithm, and its implementation in FPGA-based onboard hardware, for quickly computing estimates of the coordinates of a point source within an image formed by an onboard IR sensor. The algorithm, based on the iterative least-squares method, uses a model that covers the cases when the light spot falls on either three or two photoelement squares, and accounts for the small number of samples per channel impulse, voltage shifts, and correlated white noise in each photoelement. The paper presents modeling results and describes the FPGA-based hardware, which makes routine processing of the whole image unnecessary.

Introduction

Infrared imagery is widely used in many applications, including civil tasks such as remote detection of forest fires, light signals from lost people, gas or oil leakage from pipelines, etc. [1]. In these cases, infrared signals appear in the form of so-called point objects (point sources). The diameter of the corresponding light spot formed at the output of the IR-sensor objective is usually comparable to the size of a sensitive photoelement square. The sensor photoelements can be arranged in lines or matrices, sometimes containing up to several thousand elements, which makes it possible to realize complex image processing algorithms that capture a few frames [1], [2], [3].

Nevertheless, some tasks (such as those mentioned above) cannot be solved effectively by onboard hardware because of the limitations typical of civil applications. In pursuit of the cheapest solutions, many air and space vehicles carry several lines of photoelements with relatively slow switchers. This forces routine processing procedures that cannot be implemented in onboard hardware and must instead be performed by on-ground technologies [3]. In some cases, even the on-ground processing algorithms cannot support a decision, because the low switcher sampling rate yields few samples and low magnitudes for the impulses produced by the light spot of a point object. Moreover, any IR sensor exhibits so-called fixed pattern noise (FPN), including voltage-shift FPN, sensitivity FPN (i.e., the problem of floating transfer coefficients), defective-element FPN, and high-frequency channel noise, which can be summarized as correlated white noise. While sensitivity FPN and defects can be eliminated at the stage of preliminary sensor calibration, voltage shifts and high-frequency noise still hamper the detection of small signals [2].
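As a rough, self-contained illustration (not from the paper; the model and all parameter values are hypothetical), the combined effect of a voltage offset, a floating transfer coefficient and correlated channel noise on one photoelement channel can be sketched as:

```python
import random

def simulate_channel(signal, offset, gain, noise_sigma, corr=0.5, seed=0):
    """Apply illustrative FPN components to one photoelement channel:
    a fixed voltage offset, a floating transfer coefficient (gain),
    and correlated noise modeled here as a simple AR(1) process."""
    rng = random.Random(seed)
    out, prev = [], 0.0
    for s in signal:
        prev = corr * prev + rng.gauss(0.0, noise_sigma)  # correlated noise sample
        out.append(gain * s + offset + prev)
    return out

# A short, weak impulse (few samples), as described for slow switchers
samples = simulate_channel([0.0, 0.2, 1.0, 0.2, 0.0],
                           offset=0.05, gain=1.1, noise_sigma=0.01)
```

Per-channel calibration can remove the gain term, but the offset and correlated noise remain in the data, which is why they must be part of the estimation model itself.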

Therefore, there are two problems to be solved simultaneously:

  • 1) we need a relatively simple (hence, fast) algorithm to detect a point object and estimate its coordinates during frame forming (in other words, to avoid data redundancy);

  • 2) we need the corresponding onboard hardware implementation of the algorithm to perform a significant part of the frame processing instead of routine on-ground image processing.

Onboard vision systems can contain different types of computation elements [4]. The structure of an onboard vision system depends on the carrier type and the tasks to be solved. For example, in the case of a small-sized carrier, there are strong restrictions on the size, weight, power consumption and thermal emission of the vision system. The systems most suitable for our task, both powerful and flexible, use FPGAs as computing units. FPGAs perform most of the "heavy" operations, such as spatial and temporal image filtering, geometric and spectral transformations, template matching and thresholding, binary image labeling and solving systems of linear algebraic equations (SLAE). These systems can also contain a CPU/DSP core as a control unit; the core can execute unique operations on small amounts of data and handle FPGA dispatching and internal control.
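The SLAE solving mentioned above typically involves small, fixed-size systems. A sketch of the arithmetic involved (plain Gauss-Jordan elimination, illustrative only and not the authors' actual hardware datapath) is:

```python
def solve_3x3(a, b):
    """Gauss-Jordan elimination for a fixed-size 3x3 system -- the kind of
    small SLAE that can be fully unrolled in FPGA logic.
    Illustrative sketch: no pivoting, assumes a well-conditioned matrix."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix [A | b]
    n = 3
    for col in range(n):
        piv = m[col][col]
        for j in range(col, n + 1):               # normalize the pivot row
            m[col][j] /= piv
        for r in range(n):
            if r != col:                          # eliminate column in other rows
                f = m[r][col]
                for j in range(col, n + 1):
                    m[r][j] -= f * m[col][j]
    return [m[i][n] for i in range(n)]

x = solve_3x3([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]],
              [3.0, 5.0, 3.0])  # solution is approximately [1.0, 1.0, 1.0]
```

A fixed system size matters here: it lets the elimination be expressed as a fixed sequence of multiply-subtract operations, with no data-dependent control flow.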

FPGA-based systems have some advantages over CPU-based systems. Because an FPGA can perform a set of operations simultaneously, some algorithms can be realized in so-called pipeline mode [5]. This means that once the acquisition of the incoming video stream is completed, the result can be obtained within a few microseconds, which makes FPGA-based systems well suited to two-step video processing. In the first step, we solve various preliminary processing tasks, e.g. object detection, extraction and estimation of basic object parameters. Using the results of the first step, we can then solve more complicated tasks such as object selection and recognition. Moreover, the FPGA structure is well suited to digital signal processing routines, e.g. signal filtering. On the other hand, the algorithms used in FPGA-based systems face serious restrictions: they cannot use recursion and are limited in branching and in the number of iterations. To cope with this, we have used the unfolding (loop unrolling) technique in the current implementation.
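The unfolding technique replaces a data-dependent loop by a fixed chain of identical stages. A toy sketch of the idea (the `refine` step is placeholder arithmetic, not the paper's actual correction formula):

```python
def refine(estimate):
    """One correction step; placeholder arithmetic standing in for a real
    linearized LSM update."""
    return estimate + 0.5 * (2.0 - estimate)

def estimate_iterative(x0, n_iter=3):
    """Iterative form: a loop, which maps poorly onto an FPGA pipeline."""
    x = x0
    for _ in range(n_iter):
        x = refine(x)
    return x

def estimate_unfolded(x0):
    """Unfolded form: three explicit stages, each of which can become one
    pipeline stage in hardware."""
    x1 = refine(x0)   # stage 1
    x2 = refine(x1)   # stage 2
    x3 = refine(x2)   # stage 3
    return x3
```

Both forms compute the same value; the unfolded one simply fixes the iteration count in the structure of the computation, which is exactly the restriction described above.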

The remainder of the paper has the following structure. Section 2 describes the model of point object signals for an IR sensor with two lines of photoelements whose squares are interlaced. The algorithm using the iterative least-squares method (LSM) is described in Section 3. The results of computer modeling, showing the relatively fast convergence of the suggested algorithm, are given in Section 4. Section 5 contains a description of the FPGA-based system and some of its features.


The model of point object signal

We consider the case of an infrared sensor having a line of photoelements whose photosensitive squares of size d are interlaced with zones ɛ, where 0 < ɛ < d/3, as shown in Fig. 1. Since the position of the light spot from the point source is random with respect to the squares, we should consider two cases: the spot falls on three squares (Fig. 1a) or on two squares (Fig. 1b). Therefore, we estimate the point object coordinates by spatially processing 2D arrays of size M × L
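A minimal numerical sketch of such a spot-on-squares model, with illustrative values for d, ɛ and the spot size rather than the paper's actual parameters, might look like:

```python
import math

def spot_signal(center, d=1.0, eps=0.2, u0=1.0):
    """Amplitudes induced on three adjacent photoelements (pitch d + eps)
    by a spot at position 'center'; the bell-shaped response is modeled
    with sin^2, as in the paper's signal model. Illustrative sketch only."""
    pitch = d + eps
    radius = 1.5 * d   # spot radius chosen so the spot can reach three squares
    amps = []
    for k in (-1, 0, 1):                  # left, central, right elements
        x = k * pitch - center            # element offset from the spot
        if abs(x) >= radius:
            amps.append(0.0)              # spot does not reach this element
        else:
            amps.append(u0 * math.sin(math.pi * (x + radius) / (2 * radius)) ** 2)
    return amps

amps3 = spot_signal(center=0.0)   # spot reaches all three squares
amps2 = spot_signal(center=0.8)   # spot reaches only two squares
```

Shifting `center` moves energy between neighboring elements, reproducing the two cases above: three non-zero amplitudes or only two.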

The iterative LSM-based algorithm

Because the model (4) is non-linear with respect to the information parameters, it is useful to apply an iterative procedure of the least-squares method (LSM) to estimate the magnitude U0 and the spot position Φ^T = [φ, β(0), …, β(M−1)]. This assumes some linearization of the model, which we achieve by introducing the new variables W = U0·φ and V(i) = U0·β(i), i = 0, …, M−1. Then we pass to the differential representation: ΔZn^(p) = Zn − Z̃n^(p) = (∂H(X)/∂X)·ΔX + Vn = H1(Φ̂^(p))·ΔX1^(p) + Vn, where ΔX1^T = [ΔU0, ΔW, ΔV(0), …, ΔV(M−1), Δc(
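As a numerical illustration of such an iterative linearized LSM (Gauss-Newton) scheme, reduced to a single position parameter and a toy sin² channel model rather than the paper's full model (4), consider:

```python
import math

def model(phi, k):
    """Toy bell-shaped channel response for element k and spot position phi."""
    return math.sin(math.pi * (k - phi) / 4.0) ** 2 if abs(k - phi) < 4.0 else 0.0

def d_model(phi, k, h=1e-6):
    """Numerical derivative of the model with respect to phi."""
    return (model(phi + h, k) - model(phi - h, k)) / (2.0 * h)

def gauss_newton(z, phi0, n_iter=3):
    """Refine the spot position from samples z[k] with a fixed number of
    linearized least-squares (Gauss-Newton) steps."""
    phi = phi0
    for _ in range(n_iter):
        num = den = 0.0
        for k, zk in enumerate(z):
            r = zk - model(phi, k)   # residual, analogous to the dZ above
            j = d_model(phi, k)      # Jacobian element
            num += j * r
            den += j * j
        phi += num / den             # least-squares correction
    return phi

true_phi = 2.3
z = [model(true_phi, k) for k in range(6)]     # noiseless channel samples
est = gauss_newton(z, phi0=2.0)
```

The fixed iteration count in this sketch mirrors the 2-3 iteration limit adopted for the FPGA implementation.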

Experimental results

The effectiveness of the suggested algorithm has been proven by statistical computer modeling in Matlab, including custom C++ dynamic libraries. We assumed that the point object has uniformly distributed brightness within its light spot; hence, the channel impulses have nearly bell-shaped forms. Based on this assumption, we used the function sin²(·) to model the signal (1). The useful signal was formed in the three channels (registers) in accordance with (3), and for the

The implementation in FPGA-based onboard vision system

The iterative nature of the suggested algorithm prevents a quick and simple FPGA implementation. Fortunately, due to the fast convergence of the algorithm (Fig. 3), we can limit the number of iterations to 2 or 3, depending on the required accuracy of the point source coordinates. This allowed us to implement the algorithm in the FPGA-based real-time onboard vision system. Therefore, the following implementation scheme is proposed (Fig. 4).

The input data from the detector arrives at the

Conclusion

The suggested solution contains an effective algorithm for detecting a point object and estimating its rough coordinates at the first iteration. Because the channel impulses have non-linear forms, different variants of linearization can be used.

The last remark concerns the implementation of the suggested algorithm with some of the modifications mentioned above. We included the algorithm as part of the software for several IR-based vision systems [8]. There were two variants on

Acknowledgment

This work is dedicated to the memory of the outstanding Russian scientist Yury Korshunov (1920–2011), who was at the forefront of this work.


References (8)

  • P.R. Norton et al., "Third generation infrared imagers."
  • A.F. Milton et al., "Influence of non-uniformity on infrared focal plane array performance," Opt. Eng. (1985).
  • R.C. Gonzalez et al., Digital Image Processing (1992).
  • B.A. Alpatov et al., "The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system."



Yury S. Bekhtin is a professor at the Ryazan State Radio Engineering University (RSREU). He received his diplomas of engineer, engineer-researcher and degree of candidate of technical science (PhD in computer science) from the RSREU in 1983, 1990 and 1993, respectively. He was a visiting professor at the Norwegian University of Science and Technology (NTNU) and the Danish Technical University in 1996–1997 and 2001, respectively. In 2009, he successfully defended his second dissertation for the degree of doctor of technical science (habilitation) at RSREU. He is the author of more than 150 journal papers and has written two books. His current research interests include wavelet encoding of noisy still images and video, object detection and pattern recognition, and wavelet-based fusion of hyper- and multispectral images.

Pavel V. Babayan is a head of the department of Automation and information technologies in control at the RSREU. He received his BS degree in mathematics and PhD from the Ryazan State Radio Engineering University in 2000 and 2005, respectively. He is the author of more than 30 journal papers and has written four book chapters. His current research interests include image registration, machine vision, object recognition and parameters estimation. He is a member of SPIE.

Valery V. Strotov has held the position of associate professor at the Department of Automation and Information Technologies in Control of RSREU since 2009. He received his B.S. degree, engineer diploma in automation and control systems, and candidate of technical science degree (equal to Ph.D.) from RSREU, Ryazan, in 2001, 2002 and 2009, respectively. He is the author of more than 15 journal papers. His technical interests include image processing (image registration and stabilization, object tracking) in industrial and onboard vision systems. He is a member of SPIE.

This work has been supported by the grant for the Leading Scientific Schools of the Russian Federation (NSh-7116.2016.8).
