Determining star-image location: A new sub-pixel interpolation technique to process image centroids

https://doi.org/10.1016/j.cpc.2007.06.007

Abstract

We develop a theoretical methodology to estimate the location of star centroids in images recorded by CCD and active-pixel sensors. The approach may be generalized to other applications where point sources must be located with high accuracy. In contrast with other approaches, our technique is suitable for use with sensors whose fill ratio is below 100%. The approach is applied experimentally to two camera systems employing sensors with fill ratios of approximately 50%. We describe experimental procedures that implement the new paradigm and characterize centroid performance using laboratory targets and real night-sky images. Applied to a conventional CCD camera, a centroid resolution 11.6 times finer than the raw pixel resolution is achieved; applied to a camera employing an active-pixel sensor, a factor of 12.8 is demonstrated. The approach enables the rapid development of autonomous star-camera systems without the extensive characterization required to derive the polynomial fitting coefficients employed by traditional centroid algorithms.

Introduction

High-accuracy spacecraft attitude control is most easily attained using star cameras that record an image of the star field. Images may be processed locally or remotely to recover attitude. To improve star-location resolution, star cameras often employ algorithms that interpolate star centres to sub-pixel precision. Rather than locating a star solely by the brightest pixel, starlight is deliberately defocused by the camera so that it falls over a grid of neighboring pixels. By correctly weighting the intensities over the pixel grid, the light-centroid estimate is improved by up to an order of magnitude. The improved location data provide enhanced spacecraft stability and simplify star identification, allowing easier discrimination of similar star groups. With accurate location data, autonomous star identification can be performed in real time using minimal processing (see [11]). The general approach is applicable to the image localization of sources in other applications.

The defocusing approach has a number of other advantages over a tightly focused image plane. While digital cameras quantize the image into pixel bins, stars may appear in any location in the image. No matter how tightly the image is focused, starlight may fall on the division between two pixels or even between four pixels. Fig. 1 shows three responses for the same star in slightly different locations as imaged by a CCD. In each case, the intensity and apparent spread of each response look remarkably different, although the images were made with the same optical settings only minutes apart.

The problem is further complicated by some of the physical characteristics of digital cameras. Some CCD chips have channels on their surface to prevent blooming effects (where a high reading in one pixel triggers a response in another) and to reduce noise. Other CCD chips used by current star-camera systems typically have 100% fill ratios (the surface being completely covered by active pixel wells) but still suffer from edge effects (non-linear response across the surface of each pixel well). Active pixel sensors rarely have 100% fill ratios as some of the imaging surface is required for additional components embedded along with the photosensitive pixels. Some effects may be corrected by microscopic surface optics and careful (costly) fabrication, but inevitably the summed response of a tightly focused star falling on the boundary between two pixels will be different from that of the same star trained directly at the pixel centre. Location and magnitude calculations must account for non-linear multiple pixel responses to improve estimates.

Recent star-camera systems use empirical techniques to find a function that best gives centroid position (see [10]). Starlight is usually defocused over a 3×3 grid to allow centroid determination without adversely affecting star detection. In the simplest form, the star image on the CCD surface is assumed to have a square distribution, implying that the centroid location may be determined as the intensity-weighted average pixel location (the first moment). Other systems extend this analysis, approximating non-linear effects by including higher powers of this first moment, with the coefficients of the polynomial expansion determined by experimental research [3], [6]. Crowden [4] reports that centroid-location accuracy is improved by a factor of between 3 and 50 over the raw pixel resolution, depending on the type of CCD used, the quality and stability of the optical chain, and the amount of empirical calibration.
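The simplest scheme above, the intensity-weighted average over the defocused pixel grid, can be sketched in a few lines. This is a minimal illustration of the first-moment estimate, not the calibrated polynomial method used by the cited systems; the 3×3 window values are invented for the example.

```python
import numpy as np

def moment_centroid(window):
    """First-moment (centre-of-mass) centroid of a pixel window.

    `window` is a 2-D array of background-subtracted intensities,
    e.g. the 3x3 grid of pixels around the brightest pixel.
    Returns (row, col) in window coordinates.
    """
    w = np.asarray(window, dtype=float)
    total = w.sum()
    rows, cols = np.indices(w.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total

# A star whose light is centred slightly right of the middle pixel:
grid = np.array([[1.0, 2.0, 1.0],
                 [2.0, 5.0, 4.0],
                 [1.0, 2.0, 1.0]])
r, c = moment_centroid(grid)  # c > 1: the right column is brighter
```

Because it ignores the non-linear pixel response discussed below, this estimate is exactly what the higher-order polynomial corrections are designed to refine.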

Centroid techniques have also been applied in astronomical studies to determine star locations from telescope images in order to measure parallax. Auer and Van Altena [2] describe a method for the analysis of microdensitometer raster scans of image plates recorded at the Yerkes Observatory. A fitted offset Gaussian was shown to outperform a 2D moment analysis when applied to raster-scan data from 26 plates. Monet and Dahn [8] describe a method to determine effective image position for parallax studies by fitting an offset Gaussian to image data recorded using a 190×244 pixel CCD camera and the 4 m telescope at Kitt Peak National Observatory. An iterative optimization approach estimates the sub-pixel image location of stars to high accuracy by recursively calculating Gaussian area integrals by means of the error function, with the pixel information weighted to exclude faulty pixels. Applied to 2-minute exposures, parallax data with an r.m.s. scatter of 2.5×10−3 arcsec were obtained, despite the authors' concern that a Gaussian was only a poor approximation to the image shape. In further work, Monet et al. [9] applied a similar analysis to images recorded on a cooled TI 800×800 CCD mounted at the focal plane of a 1.55 m telescope at Flagstaff Station, obtaining parallax solutions with an accuracy of 1.0×10−3 arcsec from nightly observation sessions and after significant computation.

Here, a non-recursive analytical approach to centroid determination is developed. Defocused starlight is again assumed to have a Gaussian distribution, a common assumption in optical applications with point sources [5]. Pixel boundaries may be configured to account for CCD chips with fill ratios below 100% and for edge effects. The analysis develops expressions for theoretical pixel-intensity readings involving the error function. These expressions, which are difficult to solve even by numerical means, are inverted using an error-function approximation. Combining neighboring pixel information, a non-linear expression is derived that estimates centroid location as a function of the intensity ratio of neighboring pixels. This theoretical approximation is shown by simulation to be accurate to at least 1.2% of the pixel width. Centroid determination graphs are shown for the x- and y-axes of the Sony CCD used in the star-camera prototype. Practical investigation with the star camera indicates that improvements in centroid location of at least 11.6 times the raw pixel resolution are achieved without resort to extensive empirical analysis or the use of 100% fill-ratio CCD chips. A practical technique for the determination of image defocusing and camera performance is also presented and illustrated using images recorded by an active-pixel sensor, where a centroid accuracy of 12.8 times the raw pixel resolution is demonstrated.
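The idea of recovering a sub-pixel offset from the intensity ratio of neighboring pixels can be demonstrated in one dimension. The sketch below assumes a unit pixel pitch, a 100% fill ratio, and an illustrative defocus width of σ = 0.7 pixels; it inverts the (monotone) ratio numerically by bisection rather than by the analytical error-function approximation developed in the paper.

```python
from math import erf, sqrt

SIGMA = 0.7  # assumed defocus width in pixel units (illustrative)

def pixel_reading(k, x, sigma=SIGMA):
    """Fraction of a unit-amplitude 1-D Gaussian centred at x that
    falls in pixel k, where pixel k spans [k - 0.5, k + 0.5]."""
    s = sigma * sqrt(2.0)
    return 0.5 * (erf((k + 0.5 - x) / s) - erf((k - 0.5 - x) / s))

def neighbour_ratio(x, sigma=SIGMA):
    """Intensity ratio of the right to the left neighbour of pixel 0;
    strictly increasing in the offset x."""
    return pixel_reading(1, x, sigma) / pixel_reading(-1, x, sigma)

def centroid_from_ratio(ratio, sigma=SIGMA, tol=1e-10):
    """Invert the neighbour ratio by bisection to recover the
    sub-pixel offset x in [-0.5, 0.5]."""
    lo, hi = -0.5, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if neighbour_ratio(mid, sigma) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: simulate a star at x = 0.27 and recover it from the ratio.
x_true = 0.27
x_est = centroid_from_ratio(neighbour_ratio(x_true))
```

The round trip is exact here because the same Gaussian model generates and inverts the readings; with real sensor data the accuracy is limited by noise and by how well the defocus width is known.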

For high-performance centroid determination, our analysis may be used as the basis for a further experimental investigation of centroid determination. In typical applications, it offers a methodology to rapidly calibrate sensors to centroid star locations to better than 1/10 of a pixel, based on a realistic optical model. The analysis also allows the calculation of centroid locations and star magnitudes by cameras using anti-blooming CCD chips that are much less susceptible to blinding by bright stars and other celestial objects. The approach may be applied post-launch when camera focus parameters may require adjustment or validation due to variations induced by launch vibration, thermal drift and aging effects caused by the space environment.

Section snippets

Theory

Consider a point source with a Gaussian intensity distribution, incident on the surface of a CCD. Assuming that the CCD pixel wells have an even intensity sensitivity across their active surface area, with a linear response, the intensity reading of a particular pixel k in the CCD may be expressed as the integral over the active pixel area K:

Ik = I0/(2πσxσy) ∬K exp{−(ξ1 − x)²/(2σx²) − (ξ2 − y)²/(2σy²)} dξ1 dξ2,

where I0 is the total intensity of the light (adjusted for the particular CCD response); (x, y) is the
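Because the Gaussian is separable, the double integral over a rectangular active area reduces to a product of error-function differences, so a theoretical pixel reading can be evaluated in closed form. The sketch below assumes unit total intensity and illustrative σ values; the 0.7×0.7 active window stands in for a roughly 50% fill-ratio pixel.

```python
from math import erf, sqrt

def gaussian_rect_integral(x0, y0, sigma_x, sigma_y,
                           x_lo, x_hi, y_lo, y_hi, I0=1.0):
    """Integrate the normalized Gaussian intensity centred at (x0, y0)
    over the rectangle [x_lo, x_hi] x [y_lo, y_hi].  The separable
    double integral reduces to a product of erf differences."""
    def cdf(z, mu, sigma):
        # 1-D Gaussian cumulative distribution via the error function
        return 0.5 * (1.0 + erf((z - mu) / (sigma * sqrt(2.0))))
    gx = cdf(x_hi, x0, sigma_x) - cdf(x_lo, x0, sigma_x)
    gy = cdf(y_hi, y0, sigma_y) - cdf(y_lo, y0, sigma_y)
    return I0 * gx * gy

# Reading of a unit-pitch pixel at the origin versus its 0.7 x 0.7
# active window (a stand-in for a ~50% fill-ratio sensor):
full_pitch = gaussian_rect_integral(0.0, 0.0, 0.5, 0.5,
                                    -0.5, 0.5, -0.5, 0.5)
active_only = gaussian_rect_integral(0.0, 0.0, 0.5, 0.5,
                                     -0.35, 0.35, -0.35, 0.35)
```

Restricting the integration bounds to the active window is how the analysis accommodates sensors with fill ratios below 100%: the same expression applies, only the rectangle K shrinks.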

Results

We applied the algorithm to sensors in two applications. In the first set of experiments, a star-camera prototype utilizing a CCD sensor is employed. The centroid performance is evaluated using 50 night-sky images recorded over a period of one hour. Centroid performance statistics are estimated by comparing star-pattern separations in successive frames. In the second investigation, a camera employing an active pixel sensor is utilized in a laboratory calibration procedure to determine

Conclusions

The theory developed offers a theoretical basis for centroid offset calculation and practical procedures for calibrating star cameras in the laboratory and against the night sky. Experimental results indicate that centroid estimates may be generated to high accuracy with little empirical calibration. The technique is adaptable to a variety of CCD and active pixel sensors that may be used in star-camera systems and in other applications.

References (11)

  • B.M. Quine et al.

    A fast autonomous star-acquisition algorithm for spacecraft

    Control Engineering Practice

    (1996)
  • M. Abramowitz et al.

    Handbook of Mathematical Functions

    (1972)
  • L.H. Auer et al.

    Digital image centering II

    Astronomical Journal

    (1978)
  • P. Cope, Wide-angle autonomous star-sensor for satellite use, in: Proceedings ESTEC GNC Conference, Noordwijk, The...
  • J.B. Crowden, Moon Orbiting Research Observatory MORO Phase A2 Study: Proposed Sensing System, Matra Marconi Space,...
There are more references available in the full text version of this article.

Cited by (78)

  • Artificial neural network for star tracker centroid computation

    2023, Advances in Space Research
    Citation Excerpt:

    There are several classical methods for obtaining the centroid of an image, as mentioned in Section 1.1. Moreover, promising methods for achieving higher accuracy are constantly being developed and explored (Wei et al. (2013), Quine et al. (2007), He et al. (2019)). They are essentially based on the calculation of an image noise model, the details of which are unknown.

  • Nanoscale measurement with pattern recognition of an ultra-precision diamond machined polar microstructure

    2019, Precision Engineering
    Citation Excerpt:

    Considering the rapid development of computer vision [8–11], it is a good idea to use the improvement of algorithms in robustness and accuracy to partially replace the rigorous requirements for the physical properties of measurement equipment. Computer vision has been used in various fields such as face recognition [12], astronomy [13], target tracking [14], cryptography [15], etc. Much research has focused on improving the speed and robustness of computer vision and has achieved great success [16–18].

  • APL's Contributions to Stratospheric Ballooning for Space Science

    2023, Johns Hopkins APL Technical Digest (Applied Physics Laboratory)