
Variational-Bayes Optical Flow

Journal of Mathematical Imaging and Vision

Abstract

The Horn-Schunck (HS) optical flow method is widely employed to initialize many motion estimation algorithms. In this work, a variational Bayesian formulation of the HS method is presented, in which the motion vectors are considered to be spatially varying Student’s t-distributed unobserved random variables, i.e., the prior is a multivariate Student’s t-distribution, while the only observations available are the temporal and spatial image differences. The proposed model takes into account the residual resulting from the linearization of the brightness constancy constraint by Taylor series approximation, which is also assumed to be spatially varying, Student’s t-distributed observation noise. To infer the model variables and parameters we resort to the variational inference methodology, which leads to an expectation-maximization (EM) framework with update equations analogous to those of the Horn-Schunck approach. This is accomplished in a principled probabilistic framework where all of the model parameters are estimated automatically from the data. Experimental results show the improvement obtained by the proposed model, which may replace the standard algorithm in the initialization of more sophisticated optical flow schemes.



References

  1. Alvarez, L., Weickert, J., Sanchez, J.: Reliable estimation of dense optical flow fields with large displacements. Int. J. Comput. Vis. 39, 41–56 (2000)

  2. Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M., Szeliski, R.: A database and evaluation methodology for optical flow. In: Proceedings of the International Conference on Computer Vision (ICCV), pp. 1–8 (2007)

  3. Barron, J., Fleet, D., Beauchemin, S.: Performance of optical flow techniques. Int. J. Comput. Vis. 12(1), 43–77 (1994)

  4. Beal, M.J.: Variational algorithms for approximate Bayesian inference. Technical report, The Gatsby Computational Neuroscience Unit, University College London (2003)

  5. Beal, M.J.: The variational Bayesian EM algorithm for incomplete data with application to scoring graphical model structures. Bayesian Stat. 7, 453–464 (2003)

  6. Ben-Ari, R., Sochen, N.: A general framework and new alignment criterion for dense optical flow. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 529–536 (2006)

  7. Birchfield, S., Pundlik, S.: Joint tracking of features and edges. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (2008)

  8. Bishop, C.: Pattern Recognition and Machine Learning. Springer, Berlin (2006)

  9. Black, M., Anandan, P.: A framework for the robust estimation of optical flow. In: Fourth International Conference on Computer Vision, pp. 231–236 (1993)

  10. Black, M., Anandan, P.: The robust estimation of multiple motions: parametric and piecewise-smooth flow fields. Comput. Vis. Image Underst. 63(1), 75–104 (1996)

  11. Black, M., Fleet, D.: Probabilistic detection and tracking of motion boundaries. Int. J. Comput. Vis. 38, 231–245 (2000)

  12. Brox, T., Bruhn, A., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping. In: Proceedings of the 8th European Conference on Computer Vision (ECCV), vol. 4, pp. 25–36 (2004)

  13. Bruhn, A., Weickert, J., Schnörr, C.: Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods. Int. J. Comput. Vis. 61(3), 211–231 (2005)

  14. Chantas, G., Galatsanos, N., Likas, A., Saunders, M.: Variational Bayesian image restoration based on a product of t-distributions image prior. IEEE Trans. Image Process. 17(10), 1795–1805 (2008)

  15. Gkamas, T., Chantas, G., Nikou, C.: A probabilistic formulation of the optical flow problem. In: 21st International Conference on Pattern Recognition (ICPR), 11–15 November, Tsukuba, Japan (2012)

  16. Glocker, B., Heibel, T.H., Navab, N., Kohli, P., Rother, C.: TriangleFlow: optical flow with triangulation-based higher-order likelihoods. In: Proceedings of the 11th European Conference on Computer Vision: Part III, ECCV’10, pp. 272–285 (2010)

  17. Heas, P., Herzet, C., Memin, E.: Robust optic-flow estimation with Bayesian inference of model and hyper-parameters. In: Scale Space and Variational Methods in Computer Vision. LNCS, vol. 6667, pp. 773–785 (2012)

  18. Horn, B., Schunck, B.: Determining optical flow. Artif. Intell. 17, 185–203 (1981)

  19. Krajsek, K., Mester, R.: Bayesian inference and maximum entropy methods in science and engineering. AIP Conference Proceedings, vol. 872, pp. 311–318 (2006)

  20. Krajsek, K., Mester, R.: A maximum likelihood estimator for choosing the regularization parameters in global optical flow methods. In: IEEE International Conference on Image Processing, pp. 1081–1084 (2006)

  21. Krajsek, K., Mester, R.: Bayesian model selection for optical flow estimation. In: Proceedings of the 29th DAGM Conference on Pattern Recognition, Heidelberg (2007)

  22. Lee, K.J., Kwon, D., Yun, I.D., Lee, S.U.: Optical flow estimation with adaptive convolution kernel prior on discrete framework. In: CVPR, pp. 2504–2511 (2010)

  23. Liu, C., Rubin, D.B.: ML estimation of the t distribution using EM and its extensions, ECM and ECME. Stat. Sin. 5, 19–39 (1995)

  24. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), pp. 674–679 (1981)

  25. McCane, B., Novins, K., Crannitch, D., Galvin, B.: On benchmarking optical flow. Comput. Vis. Image Underst. 84(1), 126–143 (2001)

  26. Memin, E., Perez, P.: Dense estimation and object-based segmentation of the optical flow with robust techniques. IEEE Trans. Image Process. 7(5), 703–719 (1998)

  27. Molina, R.: On the hierarchical Bayesian approach to image restoration. Applications to astronomical images. IEEE Trans. Pattern Anal. Mach. Intell. 16(11), 1122–1128 (1994)

  28. Nagel, H., Enkelmann, W.: An investigation of smoothness constraints for estimation of displacement vector fields from image sequences. IEEE Trans. Pattern Anal. Mach. Intell. 8(5), 565–593 (1986)

  29. Paige, C., Saunders, M.: Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 12, 617–629 (1975)

  30. Ren, X.: Local grouping for optical flow. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (2008)

  31. Roth, S., Black, M.: On the spatial statistics of optical flow. Int. J. Comput. Vis. 74(1), 33–50 (2007)

  32. Sfikas, G., Nikou, C., Galatsanos, N.: Edge preserving spatially varying mixtures for image segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, Alaska, USA (2008)

  33. Sfikas, G., Nikou, C., Galatsanos, N., Heinrich, C.: Spatially varying mixtures incorporating line processes for image segmentation. J. Math. Imaging Vis. 36, 91–110 (2010)

  34. Sfikas, G., Heinrich, C., Zallat, J., Nikou, C., Galatsanos, N.: Recovery of polarimetric Stokes images by spatial mixture models. J. Opt. Soc. Am. A 28(3), 465–474 (2011)

  35. Sfikas, G., Nikou, C., Galatsanos, N., Heinrich, C.: Majorization-minimization mixture model determination in image segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2169–2176, Colorado Springs, Colorado (2011)

  36. Sun, D., Roth, S., Lewis, J.P., Black, M.J.: Learning optical flow. In: Proceedings of the 10th European Conference on Computer Vision (ECCV): Part III, ECCV ’08, pp. 83–97 (2008)

  37. Tzikas, D., Likas, A., Galatsanos, N.: The variational approximation for Bayesian inference. IEEE Signal Process. Mag. 25(6), 131–146 (2008)

  38. Wedel, A., Cremers, D., Pock, T., Bischof, H.: Structure- and motion-adaptive regularization for high accuracy optic flow. In: International Conference on Computer Vision (ICCV), pp. 1663–1668 (2009)

  39. Weickert, J., Schnörr, C.: A theoretical framework for convex regularizers in PDE-based computation of image motion. Int. J. Comput. Vis. 45(3), 245–264 (2001)

  40. Weickert, J., Schnörr, C.: Variational optical flow computation with a spatio-temporal smoothness constraint. J. Math. Imaging Vis. 14(3), 245–255 (2001)

  41. Werlberger, M., Pock, T., Bischof, H.: Motion estimation with non-local total variation regularization. In: CVPR, pp. 2464–2471 (2010)

  42. Xu, L., Jia, J., Matsushita, Y.: Motion detail preserving optical flow estimation. IEEE Trans. Pattern Anal. Mach. Intell. 34, 1744–1757 (2012)

  43. Zhou, Z., Leahy, R.M., Qi, J.: Approximate maximum likelihood hyperparameter estimation for Gibbs priors. IEEE Trans. Image Process. 6(6), 844–861 (1997)

  44. Zimmer, H., Bruhn, A., Weickert, J.: Optic flow in harmony. Int. J. Comput. Vis. 93(3), 368–388 (2011)

  45. Zitnick, C., Jojic, N., Kang, S.: Consistent segmentation for optical flow estimation. In: Proceedings of the International Conference on Computer Vision (ICCV), vol. 2, pp. 1308–1315 (2005)


Author information

Correspondence to Christophoros Nikou.

Appendix

In what follows we present in detail the derivation of the update equations for the model variables and parameters.

In the fully Bayesian framework, the complete data likelihood, including the hidden variables and the parameters of the model, is given by:

$$\begin{aligned} p (\mathbf{d}, \mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta ) =& p ( \mathbf{d}|\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta ) p (\mathbf{u}| \tilde{\mathbf{A}}, \mathbf{b}; \theta ) \\ &{}\times p (\tilde{\mathbf{A}}; \theta )p ( \mathbf{b}; \theta ), \end{aligned}$$
(55)

where \(\theta=[\lambda_\mathit{noise}, \lambda_x, \lambda_y, \mu, \nu_x, \nu_y]\) gathers the parameters of the model. Estimation of the model parameters could be obtained through maximization of the marginal distribution of the observations \(p(\mathbf{d}; \theta)\):

$$ \hat{\theta} = \mathop{\mathrm{arg\,max}}_{\theta}{\iiint p (\mathbf {d}, \mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta )d\mathbf{u} d \tilde{\mathbf{A}} d\mathbf{b}}. $$
(56)

However, in the present case, this marginalization is not tractable, since the posterior of the latent variables given the observations \(p(\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}|\mathbf{d})\) is not known explicitly, and thus inference via the Expectation-Maximization (EM) algorithm cannot be performed directly. Therefore, we resort to the variational methodology [5, 8], where we maximize a lower bound on the log-evidence \(\log p (\mathbf{d}; \theta )\):

$$\begin{aligned} &{ L (\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta )} \\ &{\quad = \iiint q (\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b} ) \log \frac{p (\mathbf{d}, \mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta )}{q (\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b} )}d \mathbf{u} d\tilde{\mathbf{A}} d\mathbf{b}.} \end{aligned}$$
(57)

This involves finding approximations of the posterior distributions of the hidden variables, denoted by \(q(\mathbf{u})\), \(q (\tilde{\mathbf{A}} )\) and \(q(\mathbf{b})\), because there is no analytical form of the auxiliary function q for which the bound in (57) becomes an equality. In the variational methodology, we employ the mean field approximation [8]:

$$ q (\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b} ) = q (\mathbf{u} ) q (\tilde{\mathbf{A}} )q (\mathbf{b} ), $$
(58)

and (57) becomes:

$$ L (\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta ) = \iiint q (\mathbf{u} ) q (\tilde{\mathbf{A}} )q (\mathbf{b} ) \log \frac{p (\mathbf{d}, \mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta )}{q (\mathbf{u} ) q (\tilde{\mathbf{A}} )q (\mathbf{b} )} d\mathbf{u} d\tilde{\mathbf{A}} d\mathbf{b}. $$
(59)
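Before deriving the updates, the defining property of this bound is worth checking numerically: for any valid factorized q, (59) never exceeds the log-evidence, and equality holds exactly at the true posterior. The sketch below verifies this on a hypothetical two-state discrete model (not the flow model of the paper), chosen only so that the evidence is computable in closed form:

```python
# Toy check that the variational bound (57)/(59) never exceeds the log-evidence.
# Hypothetical two-component discrete model: hidden z in {0, 1}, one observation d.
import math

prior = [0.3, 0.7]          # p(z)
lik   = [0.9, 0.2]          # p(d | z) for the observed d
evidence = sum(p * l for p, l in zip(prior, lik))   # p(d), by marginalization

def elbo(q):
    # sum_z q(z) log [ p(d, z) / q(z) ], the discrete analogue of (59)
    return sum(qz * math.log(prior[z] * lik[z] / qz)
               for z, qz in enumerate(q) if qz > 0)

# Any valid q gives ELBO <= log p(d); the true posterior attains equality.
post = [prior[z] * lik[z] / evidence for z in range(2)]
for q in ([0.5, 0.5], [0.1, 0.9], post):
    assert elbo(q) <= math.log(evidence) + 1e-12
assert abs(elbo(post) - math.log(evidence)) < 1e-12
```

The gap between the two sides is exactly the KL divergence from q to the true posterior, which is why maximizing the bound over q in the VE-step drives q toward the posterior.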

In our case, in the E-step of the variational algorithm (VE-step), optimization of the functional \(L (\mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta )\) is performed with respect to the auxiliary functions. Following the variational inference framework, the distributions \(q(\mathbf{u}_k)\), \(k\in\{x, y\}\), are Normal:

$$ q (\mathbf{u} ) = \mathcal{N}\left (\left [ \begin{array}{c} \mathbf{m}_x \\ \mathbf{m}_y \\ \end{array} \right ], \left [ \begin{array}{c@{\quad}c} \mathbf{R}_x & \mathbf{0} \\ \mathbf{0} & \mathbf{R}_y \\ \end{array} \right ] \right ), $$
(60)

yielding

$$ q (\mathbf{u}_{x} ) = \mathcal {N} ( \mathbf{m}_x, \mathbf{R}_x ), $$
(61)

and

$$ q (\mathbf{u}_{y} ) = \mathcal {N} ( \mathbf{m}_y, \mathbf{R}_y ). $$
(62)

Therefore, this bound is actually a function of the parameters \(\mathbf{R}_k\) and \(\mathbf{m}_k\), \(k\in\{x,y\}\), and a functional with respect to the auxiliary functions \(q(\mathbf{a}_k)\) and \(q(\mathbf{b})\). Using (58), the variational bound in our problem becomes:

$$\begin{aligned} &{ L \bigl(q(\mathbf{u}_{x}), q(\mathbf{u}_{y}), q( \mathbf{a}_x), q(\mathbf{a}_y), q(\mathbf{b}), \theta_1, \theta_2 \bigr) } \\ &{\quad =\iiint \biggl(\prod_{k\in\{x, y\}} q( \mathbf{u}_{k}; \theta_1) q(\mathbf{a}_k) \biggr) q(\mathbf{b}) } \\ &{\qquad{}\times\log p(\mathbf{d}, \mathbf{u}, \tilde{\mathbf{A}}, \mathbf{b}; \theta _2) d\mathbf{u}d\tilde{\mathbf{A}}d\mathbf{b} } \\ &{\qquad{}-\iiint \biggl(\prod_{k\in\{x, y\}}q( \mathbf{u}_{k}; \theta_1) q(\mathbf{a}_k) \biggr) q(\mathbf{b}) } \\ &{\qquad{}\times\log \biggl( \biggl( \prod_{k\in\{x, y\}} p(\mathbf{u}_{k}; \theta_1) q(\mathbf{a}_k) \biggr) q(\mathbf{b}) \biggr) d\mathbf{u}d\tilde{\mathbf{A}}d\mathbf{b}} \end{aligned}$$
(63)

where we have separated the parameters into two sets:

$$ \theta_1 = \{\mathbf{R}_x, \mathbf{R}_y, \mathbf{m}_x, \mathbf{m}_y \}, $$
(64)

and

$$ \theta_2 = \{\lambda_\mathit{noise}, \lambda_x, \lambda_y, \mu, \nu_x, \nu_y \}. $$
(65)

Thus, in the VE-step of the variational EM algorithm the bound must be optimized with respect to \(\mathbf{R}_k\), \(\mathbf{m}_k\), \(q(\mathbf{a}_k)\) and \(q(\mathbf{b})\).

Taking the derivative of (63) with respect to \(\mathbf{m}_k\), \(\mathbf{R}_k\), \(q(\mathbf{a}_k)\) and \(q(\mathbf{b})\) and setting the result equal to zero, we obtain the following update equations:

$$ \mathbf{m}_x^{(t+1)} = \lambda_\mathit{noise}^{(t)} \mathbf{R}_x^{(t)}\hat{\mathbf {B}}^{(t)} \mathbf{G}_x \bigl(\mathbf{d}-\mathbf{G}_y \mathbf{u}_{y}^{(t)} \bigr), $$
(66)

and

$$ \mathbf{m}_y^{(t+1)} = \lambda_\mathit{noise}^{(t)} \mathbf{R}_y^{(t)}\hat{\mathbf {B}}^{(t)} \mathbf{G}_y \bigl(\mathbf{d}-\mathbf{G}_x \mathbf{u}_{x}^{(t)} \bigr), $$
(67)

where

$$ \mathbf{R}_x^{(t+1)} = \bigl( \lambda_\mathit{noise}^{(t)}\mathbf{G}_x^T\hat{ \mathbf{B}}^{(t)}\mathbf{G}_x + \lambda_x^{(t)} \mathbf{Q}^T \hat{\mathbf{A}}_x^{(t)} \mathbf{Q} \bigr)^{-1}, $$
(68)

and

$$ \mathbf{R}_y^{(t+1)} = \bigl( \lambda_\mathit{noise}^{(t)}\mathbf{G}_y^T\hat{ \mathbf{B}}^{(t)}\mathbf{G}_y + \lambda_y^{(t)} \mathbf{Q}^T \hat{\mathbf{A}}_y^{(t)} \mathbf{Q} \bigr)^{-1}. $$
(69)
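To make the structure of (66)-(69) concrete, here is a minimal sketch that assumes, purely for illustration, that all matrices involved (G_x, G_y, Q, Â_x, B̂) are diagonal, so every product and the inverse in (68) reduce to elementwise operations on the diagonals; all numerical values are hypothetical toy data, not from the experiments:

```python
# Sketch of the VE-step updates (66) and (68) in the diagonal case:
# R_x = (lam_noise Gx' B Gx + lam_x Q' A_x Q)^(-1) becomes elementwise.
N = 4
gx  = [0.5, 1.0, -0.3, 0.8]      # diag(G_x): spatial x-derivatives (toy)
gy  = [0.2, -0.4, 0.9, 0.1]      # diag(G_y)
qd  = [1.0, 1.0, 1.0, 1.0]       # diag(Q): smoothness operator (toy)
bh  = [1.0, 0.7, 1.2, 0.9]       # diag(B-hat): noise weights <b(i)>
axh = [1.1, 0.8, 1.0, 1.3]       # diag(A_x-hat): prior weights <a_x(i)>
d   = [0.3, -0.1, 0.2, 0.05]     # observed image differences
uy  = [0.0, 0.0, 0.0, 0.0]       # current estimate of u_y
lam_noise, lam_x = 2.0, 1.5

# (68): posterior covariance of u_x
Rx = [1.0 / (lam_noise * gx[i]**2 * bh[i] + lam_x * qd[i]**2 * axh[i])
      for i in range(N)]
# (66): posterior mean of u_x, given the current u_y
mx = [lam_noise * Rx[i] * bh[i] * gx[i] * (d[i] - gy[i] * uy[i])
      for i in range(N)]
```

The symmetric updates (67) and (69) for m_y and R_y follow by exchanging the roles of x and y. In the full model the matrices are large and non-diagonal, so the inverse in (68) is never formed explicitly.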

Notice that the final estimates for \(\mathbf{u}_x\) and \(\mathbf{u}_y\) are \(\mathbf{m}_x\) and \(\mathbf{m}_y\) in (37) and (38), respectively.

After some manipulation, we obtain the update equations for the model parameters which maximize (63) with respect to \(q(\mathbf{a}_k)\) and \(q(\mathbf{b})\). Due to the conjugate priors we employ, each approximating posterior q retains the functional form of the corresponding prior; namely, \(q(\mathbf{a}_k)\) and \(q(\mathbf{b})\), which approximate \(p(\mathbf{a}_k|\mathbf{u}_k,\lambda_k,\mathbf{C}_k;\nu_k)\) and \(p(\mathbf{b}|\mathbf{u},\lambda_\mathit{noise},\mathbf{F};\mu)\), follow Gamma distributions, ∀i=1,…,N, ∀k∈{x,y}:

$$\begin{aligned} &{ q^{(t+1)}\bigl(\boldsymbol{\alpha}_k(i)\bigr)} \\ &{\quad= \mathrm{Gamma} \biggl(\frac{\nu_k^{(t)}}{2} + \frac{1}{2},} \\ &{\phantom{\quad= \mathrm{Gamma} \biggl(}\frac{\nu_k^{(t)}}{2} + \frac{1}{2}\lambda_k^{(t)} \bigl( \bigl[\mathbf{Q}\mathbf{u}_{k}^{(t)} \bigr]_i^{2} + \mathbf{C}_k^{(t)} (i, i ) \bigr) \biggr),} \end{aligned}$$
(70)

and

$$\begin{aligned} &{q^{(t+1)}\bigl(\mathbf{b}(i)\bigr) } \\ &{\quad= \mathrm{Gamma} \biggl( \frac{\mu^{(t)}}{2} + \frac{1}{2},} \\ &{\phantom{\quad= \mathrm{Gamma} \biggl(} \frac{\mu^{(t)}}{2} + \frac{1}{2} \lambda_\mathit{noise}^{(t)} \bigl( \bigl[\mathbf{G}\mathbf{u}^{(t)} - \mathbf{d} \bigr]_i^2 + \mathbf{F}^{(t)} (i, i ) \bigr) \biggr),} \end{aligned}$$
(71)

where the N×N matrix

$$ \mathbf{C}_k^{(t)} = \mathbf{Q} \mathbf{R}_k^{(t)} \mathbf{Q}^T, $$
(72)

the N×N matrix

$$ \mathbf{F}^{(t)} = \mathbf{G}_x \mathbf{R}_x^{(t)}\mathbf{G}_x^T+ \mathbf{G}_y \mathbf{R}_y^{(t)} \mathbf{G}_y^T, $$
(73)

and [⋅] i denotes the i-th element of the vector inside the brackets.

The size of the matrices \(\mathbf{R}_x\), \(\mathbf{R}_y\), and consequently of \(\mathbf{C}_x\), \(\mathbf{C}_y\) and \(\mathbf{F}\), makes their direct computation prohibitive. To overcome this difficulty, we employ the iterative Lanczos method [29]. Moreover, only the diagonal elements of \(\mathbf{C}_x\), \(\mathbf{C}_y\) and \(\mathbf{F}\) are needed in (70) and (71), and these are obtained as a byproduct of the Lanczos method.
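The key point is that these diagonals can be recovered from matrix-vector products alone, without forming the matrix. The paper obtains them as a Lanczos byproduct; as a simpler illustration of the same principle, the sketch below uses a Hutchinson-type stochastic diagonal estimator (a different, well-known technique shown only for intuition, not the authors' method) on a hypothetical small matrix:

```python
# Stochastic diagonal estimation from matrix-vector products only:
# E[v_i (Mv)_i] = M_ii when v has independent Rademacher (+/-1) entries.
import random

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def diag_estimate(M, n_probes=4000, seed=0):
    rng = random.Random(seed)
    n = len(M)
    acc = [0.0] * n
    for _ in range(n_probes):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]   # Rademacher probe
        Mv = matvec(M, v)
        for i in range(n):
            acc[i] += v[i] * Mv[i]       # accumulates an estimate of M_ii
    return [a / n_probes for a in acc]

M = [[4.0, 1.0, 0.0],      # toy symmetric matrix standing in for C or F
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 2.0]]
est = diag_estimate(M)
```

For the actual C_x, C_y and F, the matrix-vector product would be implemented via the sparse operators Q, G_x, G_y and the Lanczos solves, never via an explicit N×N matrix.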

Note that, as we can see from (66) and (67), there is a dependency between \(\mathbf{u}_x\) and \(\mathbf{u}_y\), as is the case in the standard Horn-Schunck method.

Notice also that since each \(q^{(t+1)}(\boldsymbol{\alpha}_k(i))\) is a Gamma pdf, it is easy to derive its expected value:

$$ \bigl\langle \boldsymbol{\alpha}_k(i) \bigr\rangle _{q^{(t+1)}(\boldsymbol {\alpha}_k(i))} = \frac{\nu_k^{(t)} + 1}{\nu_k^{(t)} + \lambda_k^{(t)} ( [\mathbf{Q}\mathbf{u}_{k}^{(t)} ]_i^{2} + \mathbf{C}_k^{(t)}(i, i) )}, $$
(74)

and the same holds for the expected value of \(\mathbf{b}(i)\):

$$ \bigl\langle \mathbf{b}(i) \bigr\rangle _{q^{(t+1)}(\mathbf{b}(i))} = \frac{\mu^{(t)} + 1}{\mu^{(t)} + \lambda_\mathit{noise}^{(t)} ( [\mathbf{G}\mathbf{u}^{(t)} - \mathbf{d} ]_i^{2} + \mathbf{F}^{(t)}(i, i) )}, $$
(75)

where \(\langle\cdot\rangle_{q(\cdot)}\) denotes the expectation with respect to the distribution \(q(\cdot)\). These estimates are used in (66)–(69), where \(\hat{\mathbf{A}}_{k}^{(t)}\) and \(\hat{\mathbf{B}}^{(t)}\) are diagonal matrices with elements:

$$\hat{\mathbf{A}}_k^{(t)}(i, i) = \bigl\langle \boldsymbol{ \alpha}_k(i) \bigr\rangle _{q^{(t)}(\boldsymbol{\alpha}_k(i))}, $$

and

$$\hat{\mathbf{B}}^{(t)}(i, i) = \bigl\langle \mathbf{b}(i) \bigr\rangle _{q^{(t)}(\mathbf{b}(i))}, $$

for i=1,…,N.
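Since (74) and (75) are simply the means of Gamma distributions (shape over rate), the robustness mechanism of the model can be sketched directly; the numbers below are illustrative toy values, not from the paper:

```python
# Expected weights per (74) and (75): Gamma(shape, rate) has mean shape/rate.
def expected_alpha(nu, lam, residual_sq, C_ii):
    # <alpha_k(i)> per (74): large flow-gradient residuals get small weights
    return (nu + 1.0) / (nu + lam * (residual_sq + C_ii))

def expected_b(mu, lam_noise, residual_sq, F_ii):
    # <b(i)> per (75): large brightness-constancy residuals get small weights
    return (mu + 1.0) / (mu + lam_noise * (residual_sq + F_ii))

# A gross outlier (large residual) is automatically downweighted:
w_inlier  = expected_alpha(nu=2.0, lam=1.0, residual_sq=0.01, C_ii=0.1)
w_outlier = expected_alpha(nu=2.0, lam=1.0, residual_sq=100.0, C_ii=0.1)
```

This is exactly how the Student's t model departs from plain Horn-Schunck: pixels violating the smoothness or brightness-constancy assumptions receive small weights in (66)-(69) instead of being penalized quadratically.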

At the variational M-step, the bound is maximized with respect to the model parameters:

$$\begin{aligned} &{\theta_2^{(t+1)} = \mathop{\mathrm{arg\,max}}_{\theta_2} L \bigl(q^{(t+1)} (\mathbf{u}_{k} ), q^{(t+1)} (\hat{ \mathbf{A}}_k ),} \\ &{\phantom{\theta_2^{(t+1)} = \mathop{\mathrm{arg\,max}}_{\theta_2} L \bigl(} q^{(t+1)} (\hat{\mathbf{B}} ), \theta_1^{(t+1)}, \theta_2 \bigr),} \end{aligned}$$
(76)

where

$$\begin{aligned} &{L \bigl(q^{(t+1)} (\mathbf{u}_{k} ), q^{(t+1)} (\hat { \mathbf{A}}_k ), q^{(t+1)} (\hat{\mathbf{B}} ), \theta _1^{(t+1)}, \theta_2 \bigr) }\\ &{\quad{}\propto\bigl\langle \log p (\mathbf{d}, \mathbf{u}, \hat{\mathbf{A}}_k, \hat{\mathbf{B}}; \theta_2 ) \bigr\rangle _{q (\mathbf{u}_{k}; \theta_1^{(t+1)} ), q^{(t+1)} (\hat{\mathbf{A}}_k ), q^{(t+1)} (\hat{\mathbf{B}} )}} \end{aligned}$$

is calculated using the results from (66)–(69).

The update for \(\lambda_\mathit{noise}\) is obtained by taking the derivative of \(L (q^{(t+1)} (\mathbf{u}_{k} ), q^{(t+1)} (\hat{\mathbf{A}}_{k} ), q^{(t+1)} (\hat{\mathbf{B}} ), \theta_{1}^{(t+1)}, \theta_{2} )\) in (63) with respect to \(\lambda_\mathit{noise}\) and setting the result to zero:

$$\begin{aligned} \lambda_\mathit{noise}^{(t+1)} = \frac{N}{\sum_{i=1}^N \mathbf{b}^{(t+1)}(i) ( [\mathbf{G}\mathbf{u}^{(t+1)} - \mathbf{d} ]^2_i + \mathbf{F}^{(t+1)}(i, i) )}. \end{aligned}$$
(77)

By the same means we obtain the estimates for \(\lambda_x\) and \(\lambda_y\):

$$ \lambda_k^{(t+1)} = \frac{N}{\sum_{i=1}^N \boldsymbol{\alpha}_k^{(t+1)}(i) ( [\mathbf{Q}\mathbf{u}_{k}^{(t+1)} ]^2_i + \mathbf{C}_k^{(t+1)}(i, i) )}, $$
(78)

with k∈{x,y}.
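Both (77) and (78) share the same shape, a count over a weighted sum of squared residuals plus posterior variances, so one elementwise routine covers them. A minimal sketch with hypothetical toy inputs:

```python
# M-step precision updates (77)-(78): lambda = N / sum_i w(i) (r_i^2 + V_ii).
def update_lambda(weights, residuals_sq, diag_cov):
    # weights: <b(i)> for (77) or <alpha_k(i)> for (78);
    # residuals_sq: [G u - d]_i^2 or [Q u_k]_i^2; diag_cov: diag(F) or diag(C_k)
    N = len(weights)
    return N / sum(w * (r + v)
                   for w, r, v in zip(weights, residuals_sq, diag_cov))

b      = [1.0, 0.8, 1.2, 0.5]            # <b(i)> (toy)
res_sq = [0.04, 0.01, 0.09, 0.25]        # [G u - d]_i^2 (toy)
F_ii   = [0.1, 0.1, 0.1, 0.1]            # diag(F) (toy)
lam_noise_new = update_lambda(b, res_sq, F_ii)
```

Note the role of the variance terms F_ii and C_ii: they prevent the precisions from being overestimated when the posterior over u is still uncertain.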

The degrees of freedom parameters \(\nu_k\) of the Student’s t-distributions are computed accordingly as the roots of the following equation:

$$\begin{aligned} &{ \frac{1}{N} \Biggl(\sum_{i=1}^N \log\bigl\langle \boldsymbol{\alpha}_k(i) \bigr\rangle _{q^{(t+1)} (\mathbf{A}_k )} - \sum_{i=1}^N \bigl\langle \boldsymbol{ \alpha}_k(i) \bigr\rangle _{q^{(t+1)} (\mathbf{A}_k )} \Biggr)} \\ &{\quad{} + \digamma \biggl( \frac{\nu_k^{(t)}}{2} + \frac{1}{2} \biggr) - \log \biggl(\frac{\nu_k^{(t)}}{2} + \frac{1}{2} \biggr)} \\ &{\quad{} - \digamma \biggl(\frac{\nu_k}{2} \biggr) + \log \biggl(\frac{\nu_k}{2} \biggr) + 1= 0,} \end{aligned}$$
(79)

for \(\nu_k\), \(k\in\{x,y\}\), where \(\digamma(x)\) is the digamma function (the derivative of the logarithm of the Gamma function) and \(\nu_{k}^{(t)}\) is the value of \(\nu_k\) at the previous iteration.

Finally, by the same procedure we obtain estimates for the parameter μ of the noise distribution:

$$\begin{aligned} &{ \frac{1}{N} \Biggl(\sum_{i=1}^N \log\bigl\langle \mathbf{b}(i) \bigr\rangle _{q^{(t+1)} (\mathbf{b}(i) )} - \sum _{i=1}^N \bigl\langle \mathbf{b}(i) \bigr\rangle _{q^{(t+1)} (\mathbf{b}(i) )} \Biggr) } \\ &{\quad{}+ \digamma \biggl( \frac{\mu^{(t)}}{2} + \frac{1}{2} \biggr) - \log \biggl(\frac{\mu^{(t)}}{2} + \frac{1}{2} \biggr) - \digamma \biggl(\frac{\mu}{2} \biggr) } \\ &{\quad{}+ \log \biggl(\frac{\mu}{2} \biggr) + 1 = 0. } \end{aligned}$$
(80)

In our implementation Eqs. (79) and (80) are solved by the bisection method, as also proposed in [23].
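A sketch of such a bisection solver for (79) follows; the digamma function is approximated here by a central difference of math.lgamma (accurate to roughly 1e-11 for these arguments), and the statistic s and starting values are hypothetical, chosen only so that the root is bracketed:

```python
# Bisection solve of (79) for nu, in the spirit of [23].
import math

def digamma(x, h=1e-5):
    # central-difference approximation of d/dx log Gamma(x)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def nu_equation(nu, nu_old, s):
    # s = (1/N) sum_i (log<a_i> - <a_i>), the data-dependent constant in (79)
    return (s + digamma(nu_old / 2 + 0.5) - math.log(nu_old / 2 + 0.5)
              - digamma(nu / 2) + math.log(nu / 2) + 1.0)

def solve_nu(nu_old, s, lo=1e-3, hi=1e3, iters=200):
    # f -> +inf as nu -> 0+ and f -> s + const as nu -> inf, so for suitable
    # s the root is bracketed in [lo, hi]
    f_lo = nu_equation(lo, nu_old, s)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if nu_equation(mid, nu_old, s) * f_lo > 0:
            lo, f_lo = mid, nu_equation(mid, nu_old, s)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy run with hypothetical statistics (not from the paper):
nu_new = solve_nu(nu_old=3.0, s=-1.2)
```

Equation (80) for μ has the same form, so the identical solver applies with the statistics of b(i) in place of those of α_k(i).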


About this article

Cite this article

Chantas, G., Gkamas, T. & Nikou, C. Variational-Bayes Optical Flow. J Math Imaging Vis 50, 199–213 (2014). https://doi.org/10.1007/s10851-014-0494-3
