
Probabilistic active filtering with Gaussian processes for occluded object search in clutter

Published in: Applied Intelligence

Abstract

This paper proposes a Gaussian process model-based probabilistic active learning approach for occluded object search in clutter. Due to heavy occlusions, an agent must gradually reduce its uncertainty about the objects in its workspace by systematically rearranging them during observation. In this work, we apply Gaussian processes to capture the uncertainties of both the system dynamics and the observation function. Robot manipulation is optimized by a mutual-information criterion that naturally indicates the potential of moving one object to search for new objects, based on the predicted uncertainties of the two models. An active learning framework updates the state belief based on sensor observations. We validated the proposed method in a simulated robot task. The results demonstrate that, with samples generated by random actions, the proposed method learns intelligent object search behaviors while iteratively converging its predicted state to the ground truth.


References

  1. Li JK, Hsu D, Lee WS (2016) Act to see and see to act: POMDP planning for objects search in clutter. In: IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 5701–5707

  2. Nieuwenhuisen D, van der Stappen AF, Overmars MH (2008) An effective framework for path planning amidst movable obstacles. In: Algorithmic Foundation of Robotics VII, pp 87–102

  3. Stilman M, Schamburek J-U, Kuffner J, Asfour T (2007) Manipulation planning among movable obstacles. In: IEEE International conference on robotics and automation (ICRA), pp 3327–3332

  4. Van Den Berg J, Stilman M, Kuffner J, Lin M, Manocha D (2009) Path planning among movable obstacles: a probabilistically complete approach. In: Algorithmic foundation of robotics VIII. Springer, pp 599–614

  5. Isler S, Sabzevari R, Delmerico J, Scaramuzza D (2016) An information gain formulation for active volumetric 3D reconstruction. In: IEEE International conference on robotics and automation (ICRA), pp 3477–3484

  6. Wu K, Ranasinghe R, Dissanayake G (2015) Active recognition and pose estimation of household objects in clutter. In: IEEE International conference on robotics and automation (ICRA), pp 4230–4237

  7. Ghaffari Jadidi M, Valls Miro J, Dissanayake G (2018) Gaussian processes autonomous mapping and exploration for range-sensing mobile robots. Auton Robot 42(2):273–290

  8. Brandao M, Figueiredo R, Takagi K, Bernardino A, Hashimoto K, Takanishi A (2020) Placing and scheduling many depth sensors for wide coverage and efficient mapping in versatile legged robots. Int J Robot Res 39(4):431–460

  9. Dogar MR, Srinivasa SS (2012) A planning framework for non-prehensile manipulation under clutter and uncertainty. Auton Robot 33(3):217–236

  10. Dogar MR, Koval MC, Tallavajhula A, Srinivasa SS (2013) Object search by manipulation. In: IEEE International conference on robotics and automation (ICRA), pp 4973–4980

  11. Lin Y, Wei S, Yang S, Fu L (2015) Planning on searching occluded target object with a mobile robot manipulator. In: IEEE International conference on robotics and automation (ICRA), pp 3110–3115

  12. Gupta M, Rühr T., Beetz M, Sukhatme GS (2013) Interactive environment exploration in clutter. In: IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 5265–5272

  13. Pajarinen J, Kyrki V (2014) Robotic manipulation in object composition space. In: 2014 IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 1–6

  14. Xiao Y, Katt S, ten Pas A, Chen S, Amato C (2019) Online planning for target object search in clutter under partial observability. In: International conference on robotics and automation (ICRA), pp 8241–8247

  15. Levine S, Pastor P, Krizhevsky A, Ibarz J, Quillen D (2018) Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int J Robot Res 37(4-5):421–436

  16. Pinto L, Gupta A (2016) Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In: IEEE International conference on robotics and automation (ICRA), pp 3406–3413

  17. Paxton C, Barnoy Y, Katyal K, Arora R, Hager GD (2019) Visual robot task planning. In: International conference on robotics and automation (ICRA), pp 8832–8838

  18. Eitel A, Hauff N, Burgard W (2020) Learning to singulate objects using a push proposal network. In: Robotics research, pp 405–419

  19. Yang Y, Liang H, Choi C (2020) A deep learning approach to grasping the invisible. IEEE Robot Autom Lett 5(2):2232–2239

  20. Rasmussen CE, Williams CK (2006) Gaussian processes for machine learning. MIT Press, Cambridge

  21. Tanaka D, Matsubara T, Ichien K, Sugimoto K (2014) Object manifold learning with action features for active tactile object recognition. In: IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 608–614

  22. Saal H, Ting J-A, Vijayakumar S (2010) Active sequential learning with tactile feedback. In: The thirteenth international conference on artificial intelligence and statistics, pp 677–684

  23. Kaboli M, Yao K, Feng D, Cheng G (2019) Tactile-based active object discrimination and target object search in an unknown workspace. Auton Robot 43(1):123–152

  24. Poon J, Cui Y, Ooga J, Ogawa A, Matsubara T (2019) Probabilistic active filtering for object search in clutter. In: International conference on robotics and automation (ICRA), pp 7256–7261

  25. Girard A, Rasmussen CE, Candela JQ, Murray-Smith R (2003) Gaussian process priors with uncertain inputs application to multiple-step ahead time series forecasting. In: Advances in neural information processing systems (NIPS), pp 545–552

  26. Deisenroth MP, Huber MF, Hanebeck UD (2009) Analytic moment-based Gaussian process filtering. In: The 26th annual international conference on machine learning, pp 225–232

  27. Nocedal J, Wright SJ (2006) Sequential quadratic programming. In: Numerical optimization. Springer, pp 529–562

  28. Deisenroth MP (2010) Efficient reinforcement learning using Gaussian processes. KIT Scientific Publishing, vol 9

  29. Cui Y, Osaki S, Matsubara T (2019) Reinforcement learning boat autopilot: a sample-efficient and model predictive control based approach. In: IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 2868–2875

  30. Rohmer E, Singh SPN, Freese M (2013) V-REP: a versatile and scalable robot simulation framework. In: IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 1321–1326

  31. Snelson E, Ghahramani Z (2006) Sparse Gaussian processes using pseudo-inputs. In: Advances in neural information processing systems (NIPS), pp 1257–1264


Author information

Correspondence to Yunduan Cui.


Appendix A

1.1 A.1 Analytical moment matching of observation function

Consider the exact analytical expression of the observation function in (11):

$$ h_{g}(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \approx \int p\left( GP_{g}(\boldsymbol{x}_{*})|\boldsymbol{x}_{*}\right)p(\boldsymbol{x}_{*}|\boldsymbol{\mu}, \boldsymbol{\Sigma})\mathrm{d} \boldsymbol{x}_{*}. $$
(16)

Following [25, 26], the mean and variance of \(h_{g}(\boldsymbol{\mu}, \boldsymbol{\Sigma})\) in each target dimension \(a = 1,\dots,D\) are calculated as:

$$ \mu_{g_{a}} = \boldsymbol{\upbeta}_{g_{a}}^{\top}\int \boldsymbol{k}_{g_{a}}(\boldsymbol{x}_{*}) p(\boldsymbol{x}_{*}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) \mathrm{d} \boldsymbol{x}_{*} = \boldsymbol{\upbeta}_{g_{a}}^{\top}\boldsymbol{l}_{g_{a}}, $$
(17)

where \(\boldsymbol{\upbeta}_{g_{a}}=(\boldsymbol{K}^{g_{a}}+\alpha^{2}_{g_{a}}\boldsymbol{I})^{-1}\boldsymbol{Z}^{a}\). For target dimensions \(a, b = 1,\dots,D\) with \(a \neq b\), the predicted variance \({\Sigma}_{g_{aa}}\) and covariance \({\Sigma}_{g_{ab}}\) of \(h_{g}(\boldsymbol{\mu}, \boldsymbol{\Sigma})\) follow:

$$ \begin{array}{@{}rcl@{}} {\Sigma}_{g_{aa}}&=& \mathbb{E}\left[\sigma_{g_{a}}^{2}(\boldsymbol{x})\right] + \mathbb{E}\left[m_{g_{a}}^{2}(\boldsymbol{x})\right] - \mu_{g_{a}}^{2} \\&=& \boldsymbol{\upbeta}_{g_{a}}^{\top}\boldsymbol{L}^{g}\boldsymbol{\upbeta}_{g_{a}} + \alpha^{2}_{g_{a}} - tr\left( (\boldsymbol{K}^{g_{a}} + \sigma_{\epsilon_{a}}^{2}\boldsymbol{I})^{-1}\boldsymbol{L}^{g}\right) - \mu_{g_{a}}^{2},\\ {\Sigma}_{g_{ab}} &=& \mathbb{E}\left[m_{g_{a}}(\boldsymbol{x})m_{g_{b}}(\boldsymbol{x})\right]- \mu_{g_{a}}\mu_{g_{b}} = \boldsymbol{\upbeta}_{g_{a}}^{\top}\boldsymbol{Q}^{g}\boldsymbol{\upbeta}_{g_{b}} - \mu_{g_{a}}\mu_{g_{b}}. \end{array} $$
(18)

Vectors \(\boldsymbol {l}_{g_{a}}\) and matrices Lg, Qg have the following elements:

$$ \begin{array}{@{}rcl@{}} l_{g_{ai}} &=& \int k_{g_{a}}(\boldsymbol{x}_{i}, \boldsymbol{x}_{*}) p(\boldsymbol{x}_{*}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) \mathrm{d} \boldsymbol{x}_{*}\\ &=&\alpha^{2}_{g_{a}}|\boldsymbol{\Sigma}\boldsymbol{\varLambda}_{g_{a}}^{-1} + \boldsymbol{I}|^{-\frac{1}{2}}\\ &&\times\exp\left( -\frac{1}{2}(\boldsymbol{x}_{i}-\boldsymbol{\mu})^{\top}(\boldsymbol{\Sigma}+\boldsymbol{\varLambda}_{g_{a}})^{-1}(\boldsymbol{x}_{i}-\boldsymbol{\mu})\right). \end{array} $$
(19)
$$ \begin{array}{@{}rcl@{}} L_{ij}^{g}& =& \frac{k_{g_{a}}(\boldsymbol{x}_{i},\boldsymbol{\mu})k_{g_{a}}(\boldsymbol{x}_{j},\boldsymbol{\mu})}{|2\boldsymbol{\Sigma}\boldsymbol{\varLambda}_{g_{a}}^{-1} + \boldsymbol{I}|^{\frac{1}{2}}}\\ &&\times \exp\left( (\boldsymbol{z}_{ij}-\boldsymbol{\mu})^{\top}(\boldsymbol{\Sigma}+\frac{1}{2}\boldsymbol{\varLambda}_{g_{a}})^{-1}\boldsymbol{\Sigma}\boldsymbol{\varLambda}_{g_{a}}^{-1}(\boldsymbol{z}_{ij}-\boldsymbol{\mu})\right), \end{array} $$
(20)
$$ \begin{array}{@{}rcl@{}} Q_{ij}^{g} &=& \alpha_{g_{a}}^{2}\alpha_{g_{b}}^{2}|(\boldsymbol{\varLambda}_{g_{a}}^{-1}+\boldsymbol{\varLambda}_{g_{b}}^{-1})\boldsymbol{\Sigma}+\boldsymbol{I}|^{-\frac{1}{2}}\\ &\times&\exp\left( -\frac{1}{2}(\boldsymbol{x}_{i} - \boldsymbol{x}_{j})^{\top}(\boldsymbol{\varLambda}_{g_{a}} + \boldsymbol{\varLambda}_{g_{b}})^{-1}(\boldsymbol{x}_{i} - \boldsymbol{x}_{j})\right)\\ &\times&\exp\left( -\frac{1}{2}(\boldsymbol{z}_{ij}^{\prime}-\boldsymbol{\mu})^{\top}\boldsymbol{R}^{-1}(\boldsymbol{z}_{ij}^{\prime}-\boldsymbol{\mu})\right), \end{array} $$
(21)

where \(\boldsymbol{z}_{ij} = \frac{1}{2}(\boldsymbol{x}_{i}+\boldsymbol{x}_{j})\), and \(\boldsymbol{z}^{\prime}_{ij}\) and \(\boldsymbol{R}\) are defined as:

$$ \begin{array}{@{}rcl@{}} \boldsymbol{z}^{\prime}_{ij} ={\boldsymbol{\varLambda}}_{g_{b}}(\boldsymbol{\varLambda}_{g_{a}}+\boldsymbol{\varLambda}_{g_{b}})^{-1}\boldsymbol{x}_{i} + \boldsymbol{\varLambda}_{g_{a}}(\boldsymbol{\varLambda}_{g_{a}}+\boldsymbol{\varLambda}_{g_{b}})^{-1}\boldsymbol{x}_{j}, \end{array} $$
(22)
$$ \begin{array}{@{}rcl@{}} \boldsymbol{R}=(\boldsymbol{\varLambda}_{g_{a}}^{-1}+\boldsymbol{\varLambda}_{g_{b}}^{-1})^{-1}+\boldsymbol{\Sigma}. \end{array} $$
(23)
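The mean prediction in (17) and (19) amounts to reweighting the usual GP mean: each kernel evaluation \(k_{g_{a}}(\boldsymbol{x}_{i}, \boldsymbol{\mu})\) is replaced by a "blurred" integral \(l_{g_{ai}}\) whose squared-exponential is widened by the input covariance \(\boldsymbol{\Sigma}\). A minimal numerical sketch of this computation (illustrative function names; a single output dimension and a full, non-sparse GP are assumed):

```python
import numpy as np

def se_kernel(X1, X2, alpha2, Lam):
    """Squared-exponential kernel: alpha2 * exp(-0.5 * d^T Lam^{-1} d)."""
    Li = np.linalg.inv(Lam)
    d = X1[:, None, :] - X2[None, :, :]
    return alpha2 * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', d, Li, d))

def moment_matched_mean(X, beta, alpha2, Lam, mu, Sigma):
    """Predictive mean of GP_g(x*) for x* ~ N(mu, Sigma), Eqs. (17) and (19).

    X    : (N, D) training inputs
    beta : (N,)   precomputed (K + noise * I)^{-1} targets
    """
    D = X.shape[1]
    # |Sigma Lam^{-1} + I|^{-1/2} prefactor of Eq. (19)
    c = alpha2 / np.sqrt(np.linalg.det(Sigma @ np.linalg.inv(Lam) + np.eye(D)))
    SL_inv = np.linalg.inv(Sigma + Lam)
    d = X - mu
    # l_{g_ai}: SE kernel "blurred" by the input covariance Sigma
    l = c * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, SL_inv, d))
    return beta @ l   # Eq. (17): mu_{g_a} = beta^T l
```

Setting \(\boldsymbol{\Sigma} = \boldsymbol{0}\) collapses \(l_{g_{ai}}\) back to \(k_{g_{a}}(\boldsymbol{x}_{i}, \boldsymbol{\mu})\), recovering the deterministic GP mean, which is a quick consistency check on (19).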

1.2 A.2 Input-output covariance

According to previous work [26], to calculate \(\boldsymbol{C}\), the covariance between the input state \(\boldsymbol{x}_{*}\sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})\) and the predicted observation \(h_{g}(\boldsymbol{x}_{*})\sim \mathcal{N}(\boldsymbol{\mu}_{*}, \boldsymbol{\Sigma}_{*})\), we consider the joint distribution:

$$ p\left( \mathbf{x}_{*}, h_{g}\left( \mathbf{x}_{*}\right) | \boldsymbol{\mu}, \mathbf{\Sigma}\right)=\mathcal{N}\left( \left[\begin{array}{c}{\boldsymbol{\mu}} \\ {\boldsymbol{\mu}_{*}} \end{array}\right],\left[\begin{array}{cc}{\boldsymbol{\Sigma}} & \boldsymbol{C} \\ \boldsymbol{C}^{\top} & {\boldsymbol{\Sigma}_{*}} \end{array}\right]\right). $$
(24)

The covariance is represented as:

$$ \boldsymbol{C}=\mathbb{E}_{\boldsymbol{x}_{*}, h_{g}}[\boldsymbol{x}_{*}h_{g}(\boldsymbol{x}_{*})^{\top}]-\boldsymbol{\mu}\boldsymbol{\mu}_{*}^{\top}. $$
(25)

Over all \(N\) samples, for each dimension \(a = 1,\dots,D\) we have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{\boldsymbol{x}_{*}, {h_{g}^{a}}}[\boldsymbol{x}_{*}{h_{g}^{a}}(\boldsymbol{x}_{*})^{\top}]&=&\int \boldsymbol{x}_{*}m_{g_{a}}(\boldsymbol{x}_{*})p(\boldsymbol{x}_{*})\mathrm{d} \boldsymbol{x}_{*}\\ &=&{\sum}_{i=1}^{N}{\upbeta}_{g_{ai}}\int \boldsymbol{x}_{*} c_{1} \boldsymbol{k}_{g_{a}}({\boldsymbol{x}}_{*})^{\top}p(\boldsymbol{x}_{*}) \mathrm{d}\boldsymbol{x}_{*}, \end{array} $$
(26)

where we define \(c_{1}:=\alpha _{g_{a}}^{-2}(2\pi )^{-\frac {D}{2}}|\boldsymbol {\varLambda }_{g_{a}}|^{-\frac {1}{2}}\) so that \(c_{1} \boldsymbol {k}_{g_{a}}(\boldsymbol {x}_{*})^{\top }\) becomes a normalized Gaussian distribution. The product of two Gaussian distributions, \(\boldsymbol {x}_{*} c_{1} \boldsymbol {k}_{g_{a}}(\boldsymbol {x}_{*})^{\top }\times p(\boldsymbol {x}_{*})\), creates a new Gaussian \(c_{2}^{-1}\mathcal {N}(\boldsymbol {x}|\boldsymbol {\psi }_{i}, \boldsymbol {\Psi })\):

$$ \begin{array}{@{}rcl@{}} c_{2}^{-1} &=& (2 \pi)^{-\frac{D}{2}}\left|\boldsymbol{\Lambda}_{g_{a}}+\mathbf{\Sigma}\right|^{-\frac{1}{2}}\\ &&\times\exp \left( -\frac{1}{2}\left( \mathbf{x}_{i}-\boldsymbol{\mu}\right)^{\top}\left( \boldsymbol{\Lambda}_{g_{a}}+\mathbf{\Sigma}\right)^{-1}\left( \mathbf{x}_{i}-\boldsymbol{\mu}\right)\right)\\ \mathbf{\Psi} &=&\left( \boldsymbol{\Lambda}_{g_{a}}^{-1}+\boldsymbol{\Sigma}^{-1}\right)^{-1} \\ \boldsymbol{\psi}_{i} &=&\mathbf{\Psi}\left( \boldsymbol{\Lambda}_{g_{a}}^{-1} \mathbf{x}_{i}+\boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}\right). \end{array} $$
(27)

Considering that (17) gives the mean of \({h_{g}^{a}}(\boldsymbol{x}_{*})\), we have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{\boldsymbol{x}_{*}, {h_{g}^{a}}}[\boldsymbol{x}_{*}{h_{g}^{a}}(\boldsymbol{x}_{*})]&=&{\sum}_{i=1}^{N}{\upbeta}_{g_{ai}} c_{1} c_{2}^{-1}\boldsymbol{\psi}_{i} = {\sum}_{i=1}^{N}{\upbeta}_{g_{ai}} l_{g_{ai}}\boldsymbol{\psi}_{i}\\ &=&{\sum}_{i=1}^{N}{\upbeta}_{g_{ai}} l_{g_{ai}}\mathbf{\Psi}\left( \boldsymbol{\Lambda}_{g_{a}}^{-1} \mathbf{x}_{i}+\boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}\right). \end{array} $$
(28)

Since \(\mu_{g_{a}}\) can be calculated following (17), the \(a\)-th dimension of the covariance is represented as a combination of (28) with (26):

$$ \begin{array}{@{}rcl@{}} \boldsymbol{C}^{a}&=&{\sum}_{i=1}^{N} {\upbeta}_{g_{ai}} l_{g_{ai}} \left( \boldsymbol{\Sigma}\left( \boldsymbol{\Sigma}+\boldsymbol{\Lambda}_{g_{a}}\right)^{-1} \mathbf{x}_{i}+\left( \boldsymbol{\Lambda}_{g_{a}}\left( \boldsymbol{\Sigma} +\boldsymbol{\Lambda}_{g_{a}}\right)^{-1}-\mathbf{I}\right) \boldsymbol{\mu}\right)\\ &=&{\sum}_{i=1}^{N} {\upbeta}_{g_{ai}} l_{g_{ai}} \boldsymbol{\Sigma}\left( \boldsymbol{\Sigma}+\boldsymbol{\Lambda}_{g_{a}}\right)^{-1}\left( \mathbf{x}_{i}-\boldsymbol{\mu}\right). \end{array} $$
(29)
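Equation (29) gives the input-output covariance in closed form. The sketch below (hypothetical helper names; one state dimension) implements \(\boldsymbol{C}^{a}\); since \(\boldsymbol{C}^{a} = \mathbb{E}[(\boldsymbol{x}-\boldsymbol{\mu})\,m_{g_{a}}(\boldsymbol{x})]\), the closed form can be validated against direct numerical integration of the GP mean weighted by \(\mathcal{N}(\boldsymbol{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma})\):

```python
import numpy as np

def se_vec(X, x, alpha2, Lam):
    """SE kernel between each row of X and a single point x."""
    Li = np.linalg.inv(Lam)
    d = X - x
    return alpha2 * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, Li, d))

def io_covariance(X, beta, alpha2, Lam, mu, Sigma):
    """Input-output covariance C^a of Eq. (29) for x* ~ N(mu, Sigma)."""
    D = X.shape[1]
    c = alpha2 / np.sqrt(np.linalg.det(Sigma @ np.linalg.inv(Lam) + np.eye(D)))
    SL_inv = np.linalg.inv(Sigma + Lam)
    d = X - mu
    l = c * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, SL_inv, d))  # Eq. (19)
    # C^a = sum_i beta_i l_i  Sigma (Sigma + Lam)^{-1} (x_i - mu)
    return ((beta * l)[:, None] * (d @ SL_inv.T @ Sigma.T)).sum(axis=0)
```

The formula holds for any weight vector \(\boldsymbol{\upbeta}_{g_{a}}\), because it only relies on the predictive mean being a weighted sum of SE kernels.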

1.3 A.3 Analytical moment matching of system dynamics

Consider the exact analytical expression of \(h_{f}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{u})\) with deterministic action \(\boldsymbol{u}\) in (11):

$$ h_{f}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{u}) \approx \int p\left( GP_{f}(\boldsymbol{x}_{*}, \boldsymbol{u})|\boldsymbol{x}_{*}, \boldsymbol{u}\right)p(\boldsymbol{x}_{*}|\boldsymbol{\mu}, \boldsymbol{\Sigma})\mathrm{d} \boldsymbol{x}_{*}. $$
(30)

Previous works [21, 22, 29] assumed that the state and action are independent by separating them in the SE kernel:

$$ \begin{array}{@{}rcl@{}} k_{f_{a}}(\boldsymbol{x}_{i}, \boldsymbol{u}_{i}, \boldsymbol{x}_{j}, \boldsymbol{u}_{j}) = k_{f_{a}}(\boldsymbol{u}_{i}, \boldsymbol{u}_{j})\times k_{f_{a}}(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}). \end{array} $$
(31)

Defining \(\boldsymbol {k}_{f_{a}}(\boldsymbol {u}) = k_{f_{a}}(\boldsymbol {U}, \boldsymbol {u})\), \(\boldsymbol {k}_{f_{a}}(\boldsymbol {x}) = k_{f_{a}}(\boldsymbol {X}, \boldsymbol {x})\), the mean and covariance related to (4) and (5) follow:

$$ \begin{array}{@{}rcl@{}} m_{f_{a}}(\boldsymbol{x}, \boldsymbol{u}) & =& \left( \boldsymbol{k}_{f_{a}}(\boldsymbol{u})\times \boldsymbol{k}_{f_{a}}(\boldsymbol{x})\right)^{\top}\boldsymbol{\upbeta}_{f_{a}},\\ {\sigma}^{2}_{f_{a}}(\boldsymbol{x}, \boldsymbol{u}) & =& \left( k_{f_{a}}(\boldsymbol{u}, \boldsymbol{u})\times k_{f_{a}}(\boldsymbol{x}, \boldsymbol{x})\right) \\ &-& (\boldsymbol{k}_{f_{a}}(\boldsymbol{u})\times\boldsymbol{k}_{f_{a}}(\boldsymbol{x}))^{\top}(\boldsymbol{K}^{f_{a}}+\alpha^{2}_{f_{a}}\boldsymbol{I})^{-1}\\ &&\times(\boldsymbol{k}_{f_{a}}(\boldsymbol{x})\times\boldsymbol{k}_{f_{a}}(\boldsymbol{u})). \end{array} $$
(32)

Define \(\boldsymbol{\upbeta}_{f_{a}}=(\boldsymbol{K}^{f_{a}}+\alpha^{2}_{f_{a}}\boldsymbol{I})^{-1}\boldsymbol{Y}^{a}\); the mean of \(h_{f}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{u})\) in target dimension \(a\) is calculated following previous derivations [25, 26]:

$$ \begin{array}{@{}rcl@{}} \mu_{f_{a}} &=& \int m_{f_{a}}(\boldsymbol{x}_{*}, \boldsymbol{u}) p(\boldsymbol{x}_{*}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) \mathrm{d} \boldsymbol{x}_{*}\\ &=&\boldsymbol{\upbeta}_{f_{a}}^{\top}\boldsymbol{k}_{f_{a}}(\boldsymbol{u})\int \boldsymbol{k}_{f_{a}}(\boldsymbol{x}_{*}) p(\boldsymbol{x}_{*}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) \mathrm{d}\boldsymbol{x}_{*} \\ &=& \boldsymbol{\upbeta}_{f_{a}}^{\top}\boldsymbol{l}_{f_{a}}. \end{array} $$
(33)

For target dimensions \(a, b = 1,\dots,D\) with \(a \neq b\), the predicted variance \({\Sigma}_{f_{aa}}\) and covariance \({\Sigma}_{f_{ab}}\) of \(h_{f}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{u})\) follow:

$$ \begin{array}{@{}rcl@{}} {\Sigma}_{f_{aa}}&=& \mathbb{E}\left[\sigma_{f_{a}}^{2}(\boldsymbol{x}, \boldsymbol{u})\right] + \mathbb{E}\left[m_{f_{a}}^{2}(\boldsymbol{x}, \boldsymbol{u})\right] - \mu_{f_{a}}^{2} \\ &=& \boldsymbol{\upbeta}_{f_{a}}^{\top}\boldsymbol{L}^{f}\boldsymbol{\upbeta}_{f_{a}} + \alpha^{2}_{f_{a}} - tr\left( (\boldsymbol{K}^{f_{a}} + \sigma_{w_{a}}^{2}\boldsymbol{I})^{-1}\boldsymbol{L}^{f}\right) - \mu_{f_{a}}^{2},\\ {\Sigma}_{f_{ab}} &=& \mathbb{E}{}\left[m_{f_{a}}(\boldsymbol{x}, \boldsymbol{u})m_{f_{b}}(\boldsymbol{x}, \boldsymbol{u})\right]{}-{} \mu_{f_{a}}\mu_{f_{b}} {}={} \boldsymbol{\upbeta}_{f_{a}}^{\top}\boldsymbol{Q}^{f}\boldsymbol{\upbeta}_{f_{b}} - \mu_{f_{a}}\mu_{f_{b}}. \end{array} $$
(34)

Define \(\boldsymbol{u}_{i}, \boldsymbol{x}_{i}\) as the \(i\)-th samples in \(\boldsymbol{U}\) and \(\boldsymbol{X}\); the vectors \(\boldsymbol{l}_{f_{a}}\) and matrices \(\boldsymbol{L}^{f}\), \(\boldsymbol{Q}^{f}\) have the following elements:

$$ \begin{array}{@{}rcl@{}} l_{f_{ai}} &=& k_{f_{a}}(\boldsymbol{u}_{i}, \boldsymbol{u})\int k_{f_{a}}(\boldsymbol{x}_{i}, \boldsymbol{x}_{*}) p(\boldsymbol{x}_{*}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) \mathrm{d} \boldsymbol{x}_{*}\\ &=&k_{f_{a}}(\boldsymbol{u}_{i}, \boldsymbol{u})\alpha^{2}_{f_{a}}|\boldsymbol{\Sigma}\boldsymbol{\varLambda}_{f_{a}}^{-1} + \boldsymbol{I}|^{-\frac{1}{2}} \\ &&\times\exp\left( -\frac{1}{2}(\boldsymbol{x}_{i}-\boldsymbol{\mu})^{\top}(\boldsymbol{\Sigma}+\boldsymbol{\varLambda}_{f_{a}})^{-1}(\boldsymbol{x}_{i}-\boldsymbol{\mu})\right). \end{array} $$
(35)
$$ \begin{array}{@{}rcl@{}} L_{ij}^{f} &= & k_{f_{a}}(\boldsymbol{u}_{i},\boldsymbol{u})k_{f_{a}}(\boldsymbol{u}_{j},\boldsymbol{u})\frac{k_{f_{a}}(\boldsymbol{x}_{i},\boldsymbol{\mu})k_{f_{a}}(\boldsymbol{x}_{j},\boldsymbol{\mu})}{|2\boldsymbol{\Sigma}\boldsymbol{\varLambda}_{f_{a}}^{-1} + \boldsymbol{I}|^{\frac{1}{2}}}\\ &&\times \exp\left( (\boldsymbol{z}_{ij}-\boldsymbol{\mu})^{\top}(\boldsymbol{\Sigma}+\frac{1}{2}\boldsymbol{\varLambda}_{f_{a}})^{-1}\boldsymbol{\Sigma}\boldsymbol{\varLambda}_{f_{a}}^{-1}(\boldsymbol{z}_{ij}-\boldsymbol{\mu})\right), \end{array} $$
(36)
$$ \begin{array}{@{}rcl@{}} Q_{ij}^{f} &= & \alpha_{f_{a}}^{2}\alpha_{f_{b}}^{2}k_{f_{a}}(\boldsymbol{u}_{i},\boldsymbol{u}_{j})k_{f_{b}}(\boldsymbol{u}_{i},\boldsymbol{u}_{j})|(\boldsymbol{\varLambda}_{f_{a}}^{-1}+\boldsymbol{\varLambda}_{f_{b}}^{-1})\boldsymbol{\Sigma}+\boldsymbol{I}|^{-\frac{1}{2}}\\ &&\times\exp\left( -\frac{1}{2}(\boldsymbol{x}_{i} - \boldsymbol{x}_{j})^{\top}(\boldsymbol{\varLambda}_{f_{a}} + \boldsymbol{\varLambda}_{f_{b}})^{-1}(\boldsymbol{x}_{i} - \boldsymbol{x}_{j})\right)\\ &&\times\exp\left( -\frac{1}{2}(\boldsymbol{z}_{ij}^{\prime}-\boldsymbol{\mu})^{\top}\boldsymbol{R}^{-1}(\boldsymbol{z}_{ij}^{\prime}-\boldsymbol{\mu})\right), \end{array} $$
(37)

where \(\boldsymbol{z}_{ij} = \frac{1}{2}(\boldsymbol{x}_{i}+\boldsymbol{x}_{j})\), and \(\boldsymbol{z}^{\prime}\) and \(\boldsymbol{R}\) follow (22) and (23) with subscript \(f\) instead of \(g\).
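Because the action is deterministic, (35) shows that the dynamics case reuses the state-only integral of (19), rescaled per training sample by the action kernel \(k_{f_{a}}(\boldsymbol{u}_{i}, \boldsymbol{u})\). A sketch under the factorization (31), with illustrative names and the signal variance assigned to the state factor so the product kernel carries a single \(\alpha^{2}\) (an assumption, since the paper does not fix this split):

```python
import numpy as np

def se(X1, X2, alpha2, Lam):
    """SE kernel with signal variance alpha2 and length-scale matrix Lam."""
    Li = np.linalg.inv(Lam)
    d = X1[:, None, :] - X2[None, :, :]
    return alpha2 * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', d, Li, d))

def dyn_mean(X, U, beta, alpha2, LamX, LamU, mu, Sigma, u):
    """Moment-matched dynamics mean, Eqs. (33) and (35): the deterministic
    action u only rescales each state integral by k_{f_a}(u_i, u)."""
    D = X.shape[1]
    ku = se(U, u[None, :], 1.0, LamU)[:, 0]      # action factor of Eq. (31)
    c = alpha2 / np.sqrt(np.linalg.det(Sigma @ np.linalg.inv(LamX) + np.eye(D)))
    SL_inv = np.linalg.inv(Sigma + LamX)
    d = X - mu
    l = ku * c * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, SL_inv, d))  # Eq. (35)
    return beta @ l                              # Eq. (33)
```

With \(\boldsymbol{\Sigma} = \boldsymbol{0}\) this reduces to the plain GP mean under the product kernel, mirroring the consistency check for the observation model.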


About this article


Cite this article

Cui, Y., Ooga, J., Ogawa, A. et al. Probabilistic active filtering with gaussian processes for occluded object search in clutter. Appl Intell 50, 4310–4324 (2020). https://doi.org/10.1007/s10489-020-01789-y
