
1 Introduction

Since the early 1990s, various scientific studies have investigated relevant optical sensor system parameters and technical concepts for mapping high temperature events (HTE) with small satellite systems [1, 2]. In order to detect and observe fires from space, the German Aerospace Center (DLR) developed and launched two small satellites (TET-1, BIROS) in the context of the FireBIRD (Fire Bispectral InfraRed Detector) mission. The mission aims to significantly improve the detection, mapping and analysis of HTE [3] compared to currently existing sensor systems.

The FireBIRD IR sensor systems are based on cooled photodetectors. Various methods for HTE detection and quantification have been developed. While single-band methods rely on a robust demarcation between background pixels and higher-temperature pixels, considered anomalous, the FireBIRD systems facilitate the application of the widely used bi-spectral algorithm introduced by Dozier [4], using the mid-wave infrared (MWIR) and long-wave infrared (LWIR) channels. While this now proven concept has demonstrated its capabilities in various case studies [5,6,7,8], multi-channel data processing approaches require an accurate co-registration of the data (Fig. 1).

Fig. 1. TET-BIROS constellation (animated)

Each of the highly agile satellites, capable of up to 30° off-nadir pointing, is equipped with three line cameras: one in the visible and near-infrared (VIS/NIR) spectral range with high spatial resolution, used to detect sun glint; one in the MWIR range to measure high temperatures; and one in the LWIR range to derive the background temperature with high precision (Fig. 2).

Fig. 2. Camera sub-assembly. Left: MWIR, center: VIS, right: LWIR; for parameters see Table 1

The main camera parameters are shown in Table 1.

Table 1. The main parameters of the three line cameras

This configuration and the corresponding radiometric calibration and processing allow fire to be detected reliably directly after downlink. Additionally, to be able to inform the relevant authorities about a detected fire, it is necessary to determine the location of the fire accurately and quickly. To that end, the FireBIRD satellites have the capability of directly georeferencing their images. This relies on the position and (exterior) orientation of the satellite, provided by the Attitude and Orbit Control System (AOCS), as well as the geometric camera calibration (interior orientation). The AOCS data are mainly based on GPS (position) and a system of star-trackers and Inertial-Measurement Units (IMU) for the orientation.

Although the cameras' interior geometries had already been measured in a laboratory set-up on the ground, verification, and in this case re-calibration, in orbit was necessary. This was necessitated by small changes in the cameras' viewing geometry, possibly caused by structural loads during launch.

A usual setup for the in-flight calibration of a (push-broom) line camera is a set of strips (line images) of a test field scanned in different directions with ground control points (GCPs) [9,10,11]. For a satellite-based camera the flight direction cannot be changed significantly, resulting in a less constrained geometry for calibration. Fortunately, the accuracy required for georeferencing is an order of magnitude coarser than the accuracy of the GPS, allowing us to take the position as given. Also, the lever arm between the satellite's origin and the cameras' centers of projection is negligible compared to the much larger ground sampling distance.

What remains to be determined is the boresight alignment of each camera as well as the refined interior orientations.

The MWIR and LWIR cameras are a special case in terms of line geometry: they enhance the spatial resolution using two staggered lines, as shown in Fig. 3 and discussed in Sect. 2.

Fig. 3. Parameters of the detector line coordinate system (xs, ys) (Color figure online)

For geometric calibration only scenes with highly accurate exterior orientation were selected. The georeferencing chain was verified using GCPs. The so-called boresight of each camera (angular misalignment relative to the satellite coordinate system used by the AOCS) was determined and improved via bundle adjustment. The new boresights were introduced into the processing configuration and the scenes were reprocessed.

The issue and the proposed solution described in this paper are of special importance for small, micro and nano satellites. Satellites of these classes have small mass budgets, implying higher sensitivity and instability with respect to thermal and structural loads. Approaches for in-orbit re-calibration are therefore a precondition for such satellite systems.

In the following section the in-orbit calibration procedure used for both satellites is described. The results are shown in Sect. 3 and the lessons learned in Sect. 4. Conclusions and an outlook are given in Sect. 5.

2 Calibration

For every detector array of every camera a set of geometric calibration parameters is defined. These are based on (i) the pointing direction of each individual pixel with respect to the camera coordinate system, known from laboratory measurements, and (ii) the mounting position and orientation of the camera system on the satellite. With these parameters and the calculation specification described below, it is possible to calculate the line of sight in satellite coordinates for each physical pixel. Using the absolute satellite position and orientation obtained by the AOCS, and the time to synchronize them with the imagery, the line of sight can be transformed into a global coordinate system, e.g. WGS84 or a local space rectangular coordinate system, thus allowing direct georeferencing of the imagery.

In the following, the special geometry of FireBIRD's line cameras is explained. The parameters of the interior orientation and distortion were not optimized during the in-orbit calibration because they did not contribute significantly to the achieved accuracy. However, the following paragraphs may be helpful for understanding the geometry of the rather unusual focal plane arrays with staggered lines.

The IR detector consists of a pair of adjacent lines with 512 imaging pixels each, where the second line is shifted by half a pixel in line direction (Fig. 3). In combination with a doubled temporal sampling rate, the ground sampling distance is half the pixel size.

Each of the two staggered line detectors is regarded as one single detector with a special geometry. It is described by the following calibration parameters (see Figs. 3 and 4).

Fig. 4. Rotation of the detector line coordinate system (xs, ys) with respect to the image plane coordinate system (xi, yi)

  • FocalLength c

  • FirstPixelX x0 (along track) and FirstPixelY y0 (across track): Position of the first active pixel of the swath in the detector line coordinate system

  • StaggeringOffsetX sx and StaggeringOffsetY sy: Offset of the shifted detector line with respect to the position in the ideal (non-staggered) case

  • PixelPitch si: Distance between pixel mid-points along the detector line

  • LineAngle α: Angle of rotation of the detector line with respect to the image plane

  • Distortion k1, k2, k3: Radial symmetric distortion parameters

  • AlignmentAngles qR: Rotation of the camera with respect to the satellite coordinate system (AOCS).

For the VIS/NIR camera the same set of parameters is used with neutral staggering offsets.

Geometric Calculation:

The input for the calculation is the pixel index i, starting with the value 0 for the first pixel of the swath and running over all subsequent pixels in line direction. For staggered lines, i alternates between the pixels of the two detector lines. The pixel index refers to the order of pixels within the detector array. This means that in the case of subsampled images, the appropriate scale factors between the image pixel and the physical pixel have to be applied in order to obtain the correct position.

Interior Orientation:

The position (x, y) of a pixel i in the detector line coordinate system is calculated as follows:

\( r = i \bmod 2 \)

\( x = x_{0} + r\,s_{x} \)

\( y = y_{0} + i\,s_{i} + r\,s_{y} \)

In Fig. 3, some pixel indices are denoted in the corresponding pixel symbols (blue boxes). For cameras with multiple detector lines on the focal plane (e.g. VIS), the rotation of the detector line around the optical axis is expressed by the parameter α. The position of the pixel on the image plane, (x′, y′), is then

\( x^{\prime} = x\cos (\alpha) - y\sin (\alpha) \)

\( y^{\prime} = x\sin (\alpha) + y\cos (\alpha) \)
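As an illustration, this interior-orientation step can be written compactly in code. The following Python sketch assumes the parameter names from the list above (x0, y0, sx, sy, si, α); the numeric values in the example are placeholders rather than calibration results, and for subsampled images the index i would first have to be rescaled to the physical pixel index as noted earlier.

import numpy as np

def detector_position(i, x0, y0, sx, sy, si):
    """Position of physical pixel i in the detector line system (xs, ys).

    For staggered lines, i alternates between the two detector lines;
    r = i mod 2 selects the shifted line.
    """
    r = i % 2
    x = x0 + r * sx           # along track: offset of the shifted line
    y = y0 + i * si + r * sy  # across track: pitch plus staggering deviation
    return x, y

def image_plane_position(x, y, alpha):
    """Rotate detector-line coordinates by the line angle alpha into the
    image plane system (xi, yi)."""
    xp = x * np.cos(alpha) - y * np.sin(alpha)
    yp = x * np.sin(alpha) + y * np.cos(alpha)
    return xp, yp

# Placeholder example (values are illustrative, not FireBIRD parameters):
x, y = detector_position(i=5, x0=0.0, y0=-3.84e-3, sx=15e-6, sy=1e-7, si=7.5e-6)
xp, yp = image_plane_position(x, y, alpha=1e-3)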

Image Distortion:

The radial symmetric distortion of the image is modeled according to Brown's distortion model [12] with the three radial symmetric parameters k1, k2 and k3. The distorted pixel coordinate (x′′, y′′) is calculated as follows from the undistorted image-plane coordinate (x′, y′).

$$ \begin{pmatrix} x^{\prime\prime} \\ y^{\prime\prime} \end{pmatrix} = \begin{pmatrix} x^{\prime} \\ y^{\prime} \end{pmatrix} + \begin{pmatrix} x^{\prime} \\ y^{\prime} \end{pmatrix} \left( k_{1} r^{2} + k_{2} r^{4} + k_{3} r^{6} \right) \quad \text{with} \quad r^{2} = x^{\prime 2} + y^{\prime 2} $$

Camera Coordinates:

The corresponding camera coordinate (X′, Y′, Z′) of the distorted pixel coordinate (x′′, y′′) is defined to be

$$ \begin{pmatrix} X^{\prime} \\ Y^{\prime} \\ Z^{\prime} \end{pmatrix} = \begin{pmatrix} x^{\prime\prime} \\ y^{\prime\prime} \\ -c \end{pmatrix}, $$

where c is the focal length of the camera. Since the focal plane lies behind the center of projection, it is located at the negative z-coordinate −c of the camera coordinate system.
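A minimal Python sketch of these two steps, distortion and camera coordinates, might look as follows (coefficient values are placeholders; this illustrates the equations above and is not the FireBIRD processing code):

import numpy as np

def apply_radial_distortion(xp, yp, k1, k2, k3):
    """Map the undistorted image-plane coordinate (x', y') to the
    distorted coordinate (x'', y'') using Brown's radial model."""
    r2 = xp**2 + yp**2
    factor = k1 * r2 + k2 * r2**2 + k3 * r2**3
    return xp + xp * factor, yp + yp * factor

def camera_vector(xpp, ypp, c):
    """Camera-frame direction (X', Y', Z') of the distorted pixel
    coordinate; the focal plane lies at z = -c."""
    return np.array([xpp, ypp, -c])

# Example with placeholder values:
xpp, ypp = apply_radial_distortion(1e-3, 2e-3, k1=-0.1, k2=0.01, k3=0.0)
v_cam = camera_vector(xpp, ypp, c=0.05)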

Sensor Alignment:

The next step is the transformation of the camera coordinate to the satellite coordinate system. This is performed in two steps. First, the sensor alignment is corrected and second, the resulting coordinates are flipped into the satellite coordinate system.

The camera coordinate system is intended to be almost identical to the flipped satellite coordinate system, but in practice the two are slightly translated and rotated with respect to each other. Whereas the translation is assumed to be negligible, the rotation is determined during the geometric calibration procedure. It is a three-dimensional rotation qR.

The satellite coordinates (X, Y, Z) of the camera coordinate vector (X′, Y′, Z′) can be determined by rotating it with the inverse rotation \( q_{R}^{-1} \).
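Since qR is naturally represented as a unit quaternion, the inverse rotation reduces to multiplication with the conjugate. The following self-contained Python sketch of this step uses quaternions in (w, x, y, z) order; it is an illustration, not the flight software, and the numeric quaternion is a placeholder.

import numpy as np

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def rotate_by_inverse(q, v):
    """Apply the inverse rotation q^-1 to vector v.

    For a unit quaternion the inverse equals the conjugate, so this
    computes q* (0, v) q and returns the vector part.
    """
    qv = np.array([0.0, *v])
    q_inv = quat_conjugate(q)
    return quat_multiply(quat_multiply(q_inv, qv), q)[1:]

# Example: rotate a camera-frame vector into the satellite frame
qR = np.array([0.9999, 1e-3, -2e-3, 5e-4])
qR = qR / np.linalg.norm(qR)   # normalize the placeholder quaternion
v_sat = rotate_by_inverse(qR, np.array([0.01, -0.02, -0.05]))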

Exterior Orientation:

Finally, the exterior orientation of the satellite is necessary to define the geometric relation between the camera and the world (Fig. 5). The exterior orientation is provided by the AOCS, which measures the satellite's position and orientation mainly with a GPS receiver, two star trackers and an IMU. While the position is already given in earth-centered, earth-fixed (ECEF) coordinates, the orientation has to be transformed from the stellar coordinate system to the rotating earth. This complex system is an elementary part of the georeferencing chain and was investigated in depth in the context of the geometric calibration; a detailed description is beyond the scope of this paper.

Fig. 5. Image and camera coordinates with respect to the satellite coordinate system

The vector (X, Y, Z) is now given in satellite coordinates and is defined as pointing in the direction of the pixel i, corresponding to the line of sight.

Using the exterior orientation, the vector (X, Y, Z), as well as the center of projection, can be transformed into the earth-centered, earth-fixed WGS84 coordinate system. The exterior orientation is obtained from the AOCS and is already given in WGS84 coordinates.
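To make the final step concrete, the following Python sketch intersects the line of sight, rotated into ECEF with the exterior orientation, with the WGS84 ellipsoid. The rotation-matrix representation of the attitude and all variable names are assumptions for illustration; terrain height is ignored here.

import numpy as np

# WGS84 semi-major and semi-minor axes in metres
A = 6378137.0
B = 6356752.314245

def los_to_ground(p_ecef, R_body_to_ecef, los_body):
    """Intersect a line of sight with the WGS84 ellipsoid.

    p_ecef:         satellite position in ECEF (from GPS)
    R_body_to_ecef: 3x3 attitude matrix (satellite body -> ECEF, from AOCS)
    los_body:       line-of-sight vector (X, Y, Z) in satellite coordinates
    Returns the first intersection point in ECEF, or None if the ray misses.
    """
    d = R_body_to_ecef @ np.asarray(los_body, dtype=float)
    d /= np.linalg.norm(d)
    p = np.asarray(p_ecef, dtype=float)

    # Scale the axes so the ellipsoid x^2/A^2 + y^2/A^2 + z^2/B^2 = 1
    # becomes a unit sphere, then solve the quadratic |ps + t*ds| = 1.
    S = np.array([1.0 / A, 1.0 / A, 1.0 / B])
    ps, ds = p * S, d * S
    a = ds @ ds
    b = 2.0 * ps @ ds
    c = ps @ ps - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # line of sight misses the Earth
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # nearer of the two intersections
    return p + t * d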

Test Field:

For almost every part of the world, well-georeferenced satellite imagery is available with a much higher ground resolution than that of the TET-1/BIROS cameras. The ground control points can therefore be arbitrary stationary points whose height is available via SRTM. The challenge is to find points that are clearly visible in all spectral bands.

For GCP selection, the images are mapped onto a reference plane at average terrain height using the nominal camera parameters to obtain unstaggered and roughly undistorted images (Fig. 6).

Fig. 6. Exemplary GCPs (crosses) clearly identifiable in the different spectral bands. Scene: TET-1 Demmin, 1.8.2014 (Table 2). The displayed part shows the Baltic coastline around Stralsund and Greifswald

Points on waterlines make good GCPs, as water and land differ in the visible spectral range and also have different temperatures, providing good contrast in the infrared range. The disadvantage is that waterlines can change over time, so care must be taken to (visually) ensure that the water level was similar at the time the reference images were taken.

Bundle Adjustment:

Given the camera model, the exterior orientation and the GCPs, a bundle adjustment was performed to determine the unknown boresight alignments and camera model parameters [13].
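The following Python sketch illustrates the core idea with a strongly simplified boresight-only adjustment: find the three alignment angles that minimize the residuals between the measured GCP lines of sight (camera frame) and the directions computed from the known ground positions (satellite frame). The Euler-angle parametrization, variable names and synthetic data are illustrative assumptions; the actual adjustment follows [13].

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(angles_deg, los_cam, dirs_sat):
    """Residuals for a candidate boresight rotation.

    los_cam:  (N, 3) unit line-of-sight vectors of the GCP pixels (camera frame)
    dirs_sat: (N, 3) unit vectors satellite -> GCP, expressed in the satellite
              frame via the AOCS exterior orientation
    """
    R = Rotation.from_euler("xyz", angles_deg, degrees=True)
    predicted = R.apply(los_cam)  # camera frame -> satellite frame
    return (predicted - dirs_sat).ravel()

# Usage with synthetic placeholder data:
rng = np.random.default_rng(0)
true_boresight = [0.05, -0.12, 0.30]  # degrees, invented for the demo
los_cam = rng.normal(size=(23, 3))
los_cam /= np.linalg.norm(los_cam, axis=1, keepdims=True)
dirs_sat = Rotation.from_euler("xyz", true_boresight, degrees=True).apply(los_cam)

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(los_cam, dirs_sat))
print(fit.x)  # recovers approximately [0.05, -0.12, 0.30]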

Relevant Calibration Parameters:

During the calibration of TET-1 it became clear that determining only the boresight of each individual camera already improves the georeferencing accuracy considerably (as expected). However, no significant additional gain could be achieved by simultaneously determining the other camera parameters defined at the beginning of this section. An explanation is that the nominal camera parameters are accurate enough for the rather low resolution and/or that they are highly correlated with the boresight (e.g. x0, y0, and α). This was confirmed during the calibration of BIROS.

Geometric Processing:

For the generation of the final data products, a geometric processing chain was established following the radiometric correction. It uses the exterior orientation from the AOCS and the interior orientation obtained by the in-orbit calibration. The interior orientation of the cameras should remain constant under normal circumstances; however, it can vary throughout the lifetime of the satellite. It is therefore essential to keep the optical parameters configurable for data processing, in order to allow re-adjusting them to the actual state of the sensor at a given time. For monitoring, and especially for maintenance, it is crucial to transfer some of the original and derived parameters as well as meta information to the final product, e.g. selection parameters, versioning, or simply the ground sampling distance. During processing, additional plausibility checks are performed and derived dynamic parameters are provided, e.g. the ground track. On demand, additional geometric data can be generated. Standard metadata are produced as XML files, allowing simple tools to derive information, either to generate statistics or to search for specific constellations. Such parameters are the AOCS state, sensor state, illumination conditions, and geolocation and orbit parameters, e.g. orbit direction and roll angle (Fig. 7).

Fig. 7. Overview of the geometric processing of TET-1 and BIROS imagery

3 Results

TET-1

The calibration of TET-1 was performed on a test field around Demmin, Germany, imaged on 1.8.2014 during daytime. 23 GCPs, mostly located on the Baltic Sea coast, were manually selected and used for the bundle adjustment (top row of Table 2).

Table 2. TET-1 scenes direct georeferencing accuracies (RMS). The first scene was used for calibration. The following were used for verification. The lower four scenes were night scenes where only the LWIR and MWIR cameras were imaging

The following rows show the accuracy reached with the above calibration on other scenes, where the GCPs were used only as check points for the accuracy of the direct georeferencing. In Table 3 the corresponding boresight alignment angles are listed.

Table 3. Determined boresight for day scenes with rotation expressed as Euler angles around the axes X (flight direction), Y (right) and Z (up)

BIROS

Consequently, the same approach was used for the calibration and verification of the geometric status and accuracy of BIROS images. A set of 140 BIROS scenes was used to determine the on-board calibration values of the MWIR and LWIR sensors and to evaluate the influence of possible error sources on the direct georeferencing, which had not previously been investigated in detail for the TET data sets. The scenes were divided into two sets depending on the availability of star tracker information, which serves as highly precise input for the exterior orientation of the system. This separation revealed that the set with at least one star tracker available produced a far more consistent set of boresight angles than the scenes without contemporaneous star tracker information. To further reduce the complexity of the data takes and to eliminate a possible error source, scenes with off-nadir pointing angles larger than 10° were removed, which further reduced the RMS. Clustering the remaining boresight angles revealed two distinct sets of angles differing by 0.8° in pitch direction (ry), which cannot be explained by changes in the boresight; the difference corresponds to a positional error of 7.8 km along track. By analyzing the timing events and the metadata, an irregularly occurring timing error of 1 s in the metadata could be identified and removed, so that the two clusters merged into one consistent set of boresight angles, which is given in Table 4.
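As a rough plausibility check (a sketch; the orbit height is only approximated here and is not stated above), a pitch offset \( \Delta r_{y} \) maps to an along-track displacement of

$$ \Delta d \approx H \tan \left( \Delta r_{y} \right) \approx 500\,\text{km} \cdot \tan \left( 0.8^{\circ} \right) \approx 7\,\text{km}, $$

which is of the same order as the observed 7.8 km, supporting the interpretation as a timing-induced position error rather than a true boresight change.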

Table 4. Determined boresight for BIROS, rotation expressed as Euler angles around the axes X (flight direction), Y (right) and Z (up)

Using scenes with the highest quality exterior orientation (at least one star tracker available) and exact timing, a mean direct georeferencing accuracy of 419.2 m was achieved for 32 selected scenes, with the best scenes showing an RMS at sub-pixel accuracy. This solution allows for a significant improvement of the geometric accuracy of the individual data sets during reprocessing and for an automatic mosaicking of the scenes, as observed by [14] (Table 5).

Table 5. Statistics for direct georeferencing using the determined boresight angles and timing correction for 32 BIROS scenes with star trackers, MWIR and LWIR

4 Lessons Learned

The experience with real space missions shows that even obvious issues and challenges are often underestimated, e.g. providing a common time base for different subsystems or defining coordinate systems and their transformation matrices. Additionally, the complexity of the entire system is often underestimated: companies and research institutes focus on single units while the overall system view falls short. For example, the positioning of a star tracker should be optimized with knowledge of the mission operation conditions in order to maximize the trackers' star fixes. System simulators can help to overcome this issue. The investigations show the important role of image quality (IQ) assessment as an integral part of a mission, as the image directly reveals the behavior and status of the sensor system (satellite). This begins with the simulation of representative images before launch, continues with the definition of requirements for the processing software, including updates during the mission, and ends with the validation of images taken during the mission.

For the calibration of optical sensors on small satellites, a few notes summarize the lessons learned:

  • Be aware of the limitations of the on ground calibration processes.

  • Include specialists for system design, calibration and data processing into a calibration team from the very beginning.

  • Design an optical system in a way that it can be calibrated in-flight.

  • A validation procedure must be an inherent part of the design.

5 Conclusions

Direct georeferencing involves a whole chain of satellite subsystems, such as star trackers, the IMU, time synchronization, read-out electronics, on-board data processing, and the camera with its subcomponents. A complete verification of the functionality and performance can only be made under flight conditions. Even though all subsystems were tested before launch, it turned out that additional work had to be done to reach the desired performance (Fig. 8).

Fig. 8. Mosaic of two BIROS scenes from Kuwait (MWIR, 2018.05.13 and 2018.08.29) processed using direct georeferencing with optimized boresight angles (Table 4)

Through detailed investigations, the existing problems concerning the interior geometry as well as the timing regime could be detected and analyzed. The described solution was developed to resolve these problems and to bring both satellites, TET-1 and BIROS, to an operational state, including direct georeferencing. Using high quality AOCS data, sub-pixel accuracy can be reached with direct georeferencing.

The presented investigations show the necessity of considering and implementing validation procedures as an integral part of the data processing chain. These tasks, in combination with data quality assessment procedures, are the prerequisite for standardized product development within future missions.