Elsevier

Signal Processing

Volume 150, September 2018, Pages 75-84

Centralized multiple-view sensor fusion using labeled multi-Bernoulli filters

https://doi.org/10.1016/j.sigpro.2018.04.010

Highlights

  • Multiple-view multi-sensor fusion with random finite set filters.

  • Information-based tuning of weights in Generalized Covariance Intersection method.

  • Using information divergence for automatic weight tuning.

Abstract

This paper presents a novel method for track-to-track fusion to integrate multiple-view sensor data in a centralized sensor network. The proposed method overcomes the drawbacks of the commonly used Generalized Covariance Intersection method, which assigns constant weights to the sensors. We introduce an intuitive approach to automatically tune the weights in the Generalized Covariance Intersection method based on the amount of information carried by the posteriors that are locally computed from measurements acquired at each sensor node. To quantify information content, the Cauchy–Schwarz divergence is used. Our solution is particularly formulated for sensor networks where the update step of a Labeled Multi-Bernoulli filter runs locally at each node. We show that with this type of filter, the weight associated with each sensor node can be separately adapted for each Bernoulli component of the filter. The results of numerical experiments show that our proposed method can successfully integrate information provided by multiple sensors with different fields of view. In such scenarios, our method significantly outperforms the common approach of using the Generalized Covariance Intersection method with constant weights, in terms of inclusion of all existing objects and tracking accuracy.

Introduction

Statistical sensor fusion methods combine the information received from multiple sensors to propagate statistical densities and estimate the state(s) of object(s) with enhanced accuracy compared to using a single sensor [1]. The emergence of new sensors, advanced processing techniques, and improved processing hardware has made real-time multi-sensor fusion increasingly viable, and the field is evolving rapidly. By means of multi-sensor information fusion, multi-object tracking systems can not only enlarge spatial/temporal surveillance coverage, but also improve system reliability and robustness [2], [3].

Popular statistical multi-object tracking techniques include Multiple-Hypotheses Tracking (MHT) [4], Joint Probabilistic Data Association (JPDA) [5], and Random Finite Set (RFS) filters [6]. Recent studies on RFS theory have led to various filters with different implementations (based on simplifying but reasonable assumptions on the multi-object distribution) such as the probability hypothesis density (PHD) filter and its cardinalized version (CPHD) [6], the multi-Bernoulli filter (MB) and its cardinality-balanced version (CB-MeMBer) [7], and the newly derived δ-GLMB (Generalized Labeled Multi-Bernoulli) [8], [9] and its special case, the Labeled Multi-Bernoulli (LMB) filter [10], [11]. A robust version of the multi-Bernoulli filter that can handle unknown clutter intensity and detection profile has also been introduced in [12]. The most recent family of RFS filters is based on labeled random finite set densities, which have been shown to admit conjugacy with the general multiple point measurement set likelihood and to be closed under the Chapman–Kolmogorov equation [10], [11]. Various implementations of RFS filters based on Sequential Monte Carlo (SMC) and Gaussian-mixture approximations have been presented. These methods have been utilized to solve different tracking problems in various applications such as track-before-detect visual tracking applications [13], [14], [15], and sensor management in target tracking applications [16], [17], [18], [19].

In multi-sensor applications, a common approach is track-to-track fusion, in which the above filters are used in each sensor node and the posterior densities that are computed locally in every node are fused with each other. The fusion operation can be executed in a distributed or centralized manner. In this context, multi-sensor fusion refers to combining several multi-object posteriors that, depending on the type of filter used in each sensor node, can have different mathematical forms.

This paper proposes a new solution for track-to-track fusion in centralized sensor networks where an LMB filter update is locally run at each sensor node. A schematic diagram of such a network is shown in Fig. 1. At each time, each sensor node receives the most recent multi-object prior from the central node, then acquires a measurement set, from which it creates a new local LMB posterior and sends it to the central node. In the central node, all the local LMB posteriors are fused and used as the next multi-object prior to be fed back to the sensor nodes, and the above process recursively continues.
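The recursion just described can be sketched at a high level as follows. All names here (`acquire`, `lmb_update`, `fuse_posteriors`) are hypothetical placeholders for illustration, not functions defined in the paper:

```python
def centralized_lmb_step(prior, sensors, fuse_posteriors):
    """One time step of the centralized fusion loop: the central node
    broadcasts the current multi-object prior, every sensor node updates
    it locally using its own measurement set, and the central node fuses
    the local posteriors into the next prior."""
    local_posteriors = []
    for sensor in sensors:
        z = sensor.acquire()                        # measurement set Z_{i,k}
        local_posteriors.append(sensor.lmb_update(prior, z))
    return fuse_posteriors(local_posteriors)        # next prior, fed back
```

The returned fused posterior becomes the prior broadcast at the next time step, closing the feedback loop.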

In order to combine the local posteriors, the most common method proposed in the RFS filtering literature is the Generalized Covariance Intersection (GCI) rule. Examples include the fusion of Poisson multi-object posteriors of multiple local PHD filters [20], i.i.d. cluster densities of several local CPHD filters [21], multi-Bernoulli densities of local multi-Bernoulli filters [22], and LMB or δ-GLMB filters [23]. In many of the above-mentioned works, the GCI rule is used for track-to-track fusion in a distributed sensor network.
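For intuition, the GCI rule forms a normalized weighted geometric mean of the local densities, p(x) ∝ ∏_i p_i(x)^{ω_i} with ∑_i ω_i = 1. For Gaussian posteriors this reduces to the familiar Covariance Intersection equations. The following minimal NumPy sketch of the two-sensor Gaussian case is our own illustration, not code from the cited works:

```python
import numpy as np

def gci_fuse_gaussians(m1, P1, m2, P2, w):
    """GCI (geometric-mean) fusion of two Gaussian densities.

    The rule p(x) ∝ p1(x)^w · p2(x)^(1-w) yields, for Gaussians,
    the classic Covariance Intersection equations in information form.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1 - w) * I2)       # fused covariance
    m = P @ (w * I1 @ m1 + (1 - w) * I2 @ m2)      # fused mean
    return m, P
```

With equal weights and identical covariances, the fused mean is simply the midpoint of the two local means, which is the "consistency" behavior discussed below.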

As we will show later in this paper, track-to-track fusion based on the GCI rule leads to consistency fusion of multiple densities, in the sense that the fused density is formed with more emphasis on objects that are included in all local posteriors. In many applications, where different sensors have different fields of view, some objects may not be detected by all sensors. In a consistency fusion scheme, such objects may not be represented well in the fused multi-object density, and may be lost from the fused multi-object state estimate (i.e. from the tracking results). In such scenarios, the solution should be capable of combining the multiple-source information in such a way that the sources complement each other in the fused posterior. Examples of such fusion solutions include the usage of linear-like complementary filters for attitude estimation [24], using an extended Kalman filter for fusion of multi-sensor data in mobile robotics [25], and using an unscented Kalman filter for fusion of multiple Poisson densities from local PHD filters in robotic applications [26].

This paper presents a novel strategy for fusing random set LMB posteriors that are received from sensor nodes with limited fields of view in a centralized sensor network with feedback. The proposed solution is different from GCI rule in its commonly used form, in the sense that it performs not only consistency fusion (thus, enhanced accuracy when local posteriors consistently represent high levels of confidence in the existence and state of the same object), but also multiple-view sensor fusion, providing extended coverage when different sensors cover different regions.

Our proposed solution intuitively applies the GCI rule to separately combine the densities of Bernoulli components (possible objects) with the same label across different local LMB posteriors. We devise an information-theoretic method to tune and assign adaptive weights to each Bernoulli component, so that the more informative local posterior for that label influences the fused density (the output of the GCI rule) more than less informative local posteriors.
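As a sketch of this per-label idea, the GCI rule can be applied to a single pair of Bernoulli components (r, p(·)) sharing a label, with the spatial densities represented on a common grid. This is an illustrative implementation under our own discretization assumptions, not the paper's exact algorithm:

```python
import numpy as np

def gci_fuse_bernoulli(r1, p1, r2, p2, w, dx=1.0):
    """GCI fusion of two Bernoulli components (r, p) sharing a label.

    p1, p2 are spatial densities sampled on a common grid (arrays that
    integrate to 1 with cell size dx); w is the weight of sensor 1.
    The fused density is the normalized geometric mean p1^w * p2^(1-w),
    and the existence probability is fused with the same exponents.
    """
    q = p1**w * p2**(1 - w)            # unnormalized fused spatial density
    z = q.sum() * dx                   # normalizer: integral of p1^w p2^(1-w)
    num = (r1**w) * (r2**(1 - w)) * z
    den = ((1 - r1)**w) * ((1 - r2)**(1 - w)) + num
    return num / den, q / z            # fused (r, p)
```

Because the fusion acts per label, the weight w can differ from one Bernoulli component to the next, which is exactly the degree of freedom the proposed method exploits.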

To our knowledge, the only work that addresses the problem of GCI fusion of multiple sensor data in the presence of limited sensing abilities is the work of Yi et al. [27]. Their solution is an intuitive method inspired by the Covariance Intersection rule, which performs fusion directly on single Gaussian components. They propose a heuristic approach to tuning the weights associated with each sensor node in alignment with the detection profile of that node. They also propose a technique for "information difference preservation" through the fusion process that is based on truncating information distances between Gaussian components of intensity functions of local PHD filters in various sensor nodes in the network.

In our approach, the information content of each local posterior is quantified using an information theoretic divergence. In a Bayes filtering context, divergence functionals are commonly used to quantify the expected information gain from prior to posterior densities. The commonly used divergence functions in the stochastic filtering literature include the Shannon entropy [28], the Kullback–Leibler (KL) divergence [20], [29], [30], and Rényi divergence [18], [31], [32]. Calculation of these divergences incurs significant computational cost and they can only be computed analytically for simple cases. Recently, Cauchy–Schwarz divergence was shown to admit a simple closed-form expression for Poisson multi-object densities [33]. This was followed by an increasing uptake of Cauchy–Schwarz divergence to solve multi-object tracking problems using labeled random set filters [34], [35], [36], [37]. In this work, we use Cauchy–Schwarz divergence to quantify the information content of each local posterior in relation to each possibly existing object (Bernoulli component of the fused LMB posterior).
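For the single Gaussian-vs-Gaussian case, the Cauchy–Schwarz divergence D_CS(p, q) = −ln(∫pq / √(∫p² ∫q²)) admits a closed form, since the product of two Gaussians integrates to a Gaussian evaluation: ∫N(x; m₁, P₁) N(x; m₂, P₂) dx = N(m₁ − m₂; 0, P₁ + P₂). The sketch below is our illustration of this scalar building block; the paper itself works with full multi-object densities:

```python
import numpy as np

def gauss_inner(m1, P1, m2, P2):
    """Inner product of two Gaussian densities:
    integral of N(x; m1, P1) * N(x; m2, P2) = N(m1 - m2; 0, P1 + P2)."""
    d = np.atleast_1d(m1 - m2)
    S = np.atleast_2d(P1 + P2)
    k = len(d)
    norm = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(S))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(S, d))

def cs_divergence_gauss(m1, P1, m2, P2):
    """Cauchy-Schwarz divergence between two Gaussian densities;
    zero iff the densities coincide."""
    cross = gauss_inner(m1, P1, m2, P2)
    n1 = gauss_inner(m1, P1, m1, P1)
    n2 = gauss_inner(m2, P2, m2, P2)
    return -np.log(cross / np.sqrt(n1 * n2))
```

The divergence is zero for identical densities and grows as the two densities separate, which is what makes it usable as a per-component information measure.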

The rest of this paper is organized as follows. Section 2 briefly reviews the necessary background material on the labeled multi-Bernoulli filter and the Cauchy–Schwarz divergence. The formulation of centralized LMB multi-sensor fusion is presented in Section 3, followed by our proposed solution and its particle implementation in Section 4. Numerical experiments are given in Section 5. They demonstrate that in scenarios involving sensors with unlimited fields of view (hence all objects being detectable, and consistency fusion being suitable), our method performs very similarly to the GCI rule. However, with sensors having limited (and overlapping) fields of view, our method is capable of tracking all the objects, while the state-of-the-art method leads to targets being lost. Section 6 concludes the paper.

Section snippets

Bayesian multi-object tracking

For notational purposes, we use lower-case letters (e.g. x, 𝐱) to denote single-object states and upper-case letters (e.g. X, 𝐗) to denote multi-object states. In order to distinguish labeled states and distributions from unlabeled ones, we adopt bold letters to represent labeled entities and variables (e.g. 𝐗, 𝐱, 𝛑). Furthermore, blackboard bold letters represent spaces for variables (e.g. 𝕏, ℤ, 𝕃).

Assume that at time k, the labeled multi-object state is denoted by 𝐗_k and the multi-object

GCI Fusion rule for multiple-view sensors: Problem statement

Fig. 2 shows a block diagram representing the local and central processing tasks involved in the problem. At each time k, the ith sensor node receives a current multi-object prior LMB denoted by 𝛑_{k|k−1} = {(r_{k|k−1}^{(ℓ)}, p_{k|k−1}^{(ℓ)}(·))}_{ℓ∈𝕃_{k−1}^{+}} from the central node and at the same time, acquires a set of measurements denoted by Z_{i,k}. Then, the update step of an LMB filter is executed locally at the sensor node, leading to a local multi-object posterior denoted by 𝛑_{i,k} = {(r_{i,k}^{(ℓ)}, p_{i,k}^{(ℓ)}(·))}_{ℓ∈𝕃_{k−1}^{+}}. Note that

Multiple-view multi-sensor fusion

Our proposed solution for multiple-view multi-sensor track-to-track fusion using the GCI rule is inspired by two observations. Firstly, the example shown in Fig. 3 demonstrates that track-to-track fusion can be made possible by properly tuning the weights in the GCI rule. However, one may need to tune the weights for each object label ℓ separately and dependence of weights on labels is not accommodated by the GCI rule. A resolution for this issue is given by the following observation.
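One simple way to realize label-dependent weights, sketched under our own assumptions (the paper derives its weights from the Cauchy–Schwarz divergence; the normalization below is only an illustrative choice), is to normalize per-sensor information gains into GCI weights for each label:

```python
import numpy as np

def adaptive_weights(divergences):
    """Turn per-sensor information gains for one Bernoulli label
    (e.g. divergences of each local posterior from the prior) into
    normalized GCI weights: the more informative a local posterior,
    the larger its weight. Falls back to equal weights when no
    sensor carries information about the label."""
    d = np.asarray(divergences, dtype=float)
    if d.sum() == 0:
        return np.full(len(d), 1.0 / len(d))
    return d / d.sum()
```

A sensor whose field of view does not cover an object contributes no information gain for that label and therefore receives a near-zero weight, so the fused density for that label is dominated by the sensors that actually observe it.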

From

Numerical experiments

We conducted extensive experiments involving various scenarios with different numbers of targets, sensors, target motion models, sensor detection profile models and sensor fields of view. In each experiment, we compared the performance of the proposed sensor fusion method with the commonly used GCI fusion method with equal and constant weights, which is the state of the art as presented in several recent papers [22], [27], [37] for distributed fusion. Performance was quantified in terms of Optimal

Conclusions

A novel multiple-view multi-sensor fusion method was introduced for LMB filters. Through the evaluation of the Cauchy–Schwarz divergence for each LMB component on all sensors, the proposed method overcomes the drawbacks of the commonly used Generalized Covariance Intersection (GCI) method, which assigns only a constant weight to each sensor during the whole tracking and fusion process. Numerical experiments involving a challenging multi-target tracking scenario showed that our method can properly fuse

Acknowledgment

This project was supported by the Australian Research Council through ARC Discovery grant DP160104662, as well as National Natural Science Foundation of China grant 61673075.

References (39)

  • R.P.S. Mahler

    Advances in Statistical Multisource-Multitarget Information Fusion

    (2014)
  • B.T. Vo et al.

    The cardinality balanced multi-target multi-Bernoulli filter and its implementations

    IEEE Trans. Signal Process.

    (2009)
  • B.N. Vo et al.

    Labeled random finite sets and the Bayes multi-target tracking filter

    IEEE Trans. Signal Process.

    (2014)
  • F. Papi et al.

    Generalized labeled multi-Bernoulli approximation of multi-object densities

    IEEE Trans. Signal Process.

    (2015)
  • B.T. Vo et al.

    Labeled random finite sets and multi-object conjugate priors

    IEEE Trans. Signal Process.

    (2013)
  • S. Reuter et al.

    The labeled multi-Bernoulli filter

    IEEE Trans. Signal Process.

    (2014)
  • B.T. Vo et al.

    Robust multi-Bernoulli filtering

    IEEE J. Sel. Top. Signal Process.

    (2013)
  • B.N. Vo et al.

    Joint detection and estimation of multiple objects from image observations

    IEEE Trans. Signal Process.

    (2010)
  • R. Hoseinnezhad et al.

    Visual tracking in background subtracted image sequences via multi-Bernoulli filtering

    IEEE Trans. Signal Process.

    (2013)