
Image Segmentation Using a Local GMM in a Variational Framework

Published in: Journal of Mathematical Imaging and Vision

Abstract

In this paper, we propose a new variational framework to solve Gaussian mixture model (GMM) based methods for image segmentation by employing the convex relaxation approach. After relaxing the indicator function in the GMM, flexible spatial regularization can be adopted and efficient segmentation can be achieved. To demonstrate the superiority of the proposed framework, global and local intensity information and spatial smoothness are integrated into a new model, which works well on images with inhomogeneous intensity and noise. Numerical experiments demonstrate that, compared with the classical GMM, our algorithm achieves promising segmentation performance on images degraded by intensity inhomogeneity and noise.


References

  1. Bishop, C.: Neural Networks for Pattern Recognition. Oxford Univ. Press, London (1995)
  2. McLachlan, G., Peel, D.: Finite Mixture Models. Wiley, New York (2000)
  3. Roberts, S., Husmeier, D., Rezek, I., Penny, W.: Bayesian approaches to Gaussian mixture modeling. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1133–1142 (1998)
  4. Figueiredo, M., Jain, A.: Unsupervised learning of finite mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 24(3), 381–396 (2002)
  5. Ashburner, J., Friston, K.: Unified segmentation. NeuroImage 26(3), 839–851 (2005)
  6. McLachlan, G., Krishnan, T.: The EM Algorithm and Extensions. Wiley, New York (2007)
  7. Held, K., Kops, E., Krause, B., Wells, W., Kikinis, R., Muller-Gartner, H.: Markov random field segmentation of brain MR images. IEEE Trans. Med. Imaging 16(6), 878–886 (1997)
  8. Zhang, Y., Brady, M., Smith, S.: Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging 20(1), 45–57 (2001)
  9. Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. R. Stat. Soc. B 39, 1–38 (1977)
  10. Bresson, X., Esedoglu, S., Vandergheynst, P., Thiran, J., Osher, S.: Fast global minimization of the active contour/snake model. J. Math. Imaging Vis. 28, 151–167 (2007)
  11. Bae, E., Yuan, J., Tai, X.: Global minimization for continuous multiphase partitioning problems using a dual approach. Int. J. Comput. Vis. 92(1), 112–129 (2010)
  12. Brinkmann, B., Manduca, A., Robb, R.: Optimized homomorphic unsharp masking for MR grayscale inhomogeneity correction. IEEE Trans. Med. Imaging 17, 161–171 (1998)
  13. Wells, W., Grimson, W., Kikinis, R., Jolesz, F.: Adaptive segmentation of MRI data. IEEE Trans. Med. Imaging 15, 429–442 (1996)
  14. Ahmed, M., Yamany, S., Mohamed, N., Farag, A., Moriarty, T.: A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging 21, 193–198 (2002)
  15. Xiao, G., Brady, M., Noble, J., Zhang, Y.: Segmentation of ultrasound B-mode images with intensity inhomogeneity correction. IEEE Trans. Med. Imaging 21(1), 48–57 (2002)
  16. Mumford, D., Shah, J.: Optimal approximation by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42, 577–685 (1989)
  17. Chan, T., Vese, L.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)
  18. Lie, J., Lysaker, M., Tai, X.: A binary level set model and some applications to Mumford-Shah image segmentation. IEEE Trans. Image Process. 15(5), 1171–1181 (2006)
  19. Joshi, N., Brady, M.: Non-parametric mixture model based evolution of level sets and application to medical images. Int. J. Comput. Vis. 88(1), 52–68 (2010)
  20. Bertelli, L., Chandrasekaran, S., Gibou, F., Manjunath, B.: On the length and area regularization for multiphase level set segmentation. Int. J. Comput. Vis. 90(3), 267–282 (2010)
  21. Goldstein, T., Bresson, X., Osher, S.: Geometric applications of the split Bregman method: segmentation and surface reconstruction. SIAM J. Sci. Comput. 45, 272–293 (2010)
  22. Li, C., Xu, C., Gui, C., Fox, M.: Level set evolution without re-initialization: a new variational formulation. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 430–436 (2005)
  23. Estellers, V., Zosso, D., Lai, R., Thiran, J., Osher, S., Bresson, X.: Efficient algorithm for level set method preserving distance function. UCLA CAM Report 11-58 (2011)
  24. Vese, L., Chan, T.: A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 50, 271–293 (2002)
  25. Brox, T., Weickert, J.: Level set segmentation with multiple regions. IEEE Trans. Image Process. 15(10), 3213–3218 (2006)
  26. Cremers, D., Pock, T., Kolev, K., Chambolle, A.: Convex relaxation techniques for segmentation, stereo and multiview reconstruction. In: Advances in Markov Random Fields for Vision and Image Processing. MIT Press, Cambridge (2011)
  27. Li, C., Kao, C., Gore, J., Ding, Z.: Minimization of region-scalable fitting energy for image segmentation. IEEE Trans. Image Process. 17(10), 1940–1949 (2008)
  28. Li, C., Gatenby, C., Wang, L., Gore, J.: A robust parametric method for bias field estimation and segmentation of MR images. In: CVPR2009, pp. 218–223 (2009)
  29. Redner, R., Walker, H.: Mixture densities, maximum likelihood and the EM algorithm. SIAM Rev. 26(2), 195–239 (1984)
  30. Rockafellar, R.: Convex Analysis. Princeton University Press, Princeton (1970)
  31. Wang, Y., Yang, J., Yin, W., Zhang, Y.: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 1(3), 248–272 (2008)
  32. Tai, X., Wu, C.: Augmented Lagrangian method, dual methods and split Bregman iteration for ROF model. UCLA CAM Report 09-05 (2009)
  33. Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)
  34. Setzer, S.: Operator splittings, Bregman methods and frame shrinkage in image processing. Int. J. Comput. Vis. 92, 265–280 (2011)
  35. Chan, T., Golub, G., Mulet, P.: A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 20, 1964–1977 (1999)
  36. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004)
  37. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)
  38. Potts, R.: Some generalized order-disorder transformations. Proc. Camb. Philos. Soc. 48, 106–109 (1952)
  39. Pham, D.: Spatial models for fuzzy clustering. Comput. Vis. Image Underst. 84, 285–297 (2001)
  40. Wang, J., Ju, L., Wang, X.: An edge-weighted centroidal Voronoi tessellation model for image segmentation. IEEE Trans. Image Process. 18(8), 1844–1858 (2009)
  41. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 23, 1–18 (2001)
  42. Chambolle, A.: Total variation minimization and a class of binary MRF models. In: EMMCVPR 2005. LNCS, vol. 3757, pp. 136–152. Springer, Berlin (2005)
  43. Chan, T., Esedoglu, S., Nikolova, M.: Algorithms for finding global minimizers of image segmentation and denoising models. SIAM J. Appl. Math. 66(5), 1632–1648 (2006)
  44. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 147–159 (2004)


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 11071023, No. 11201032).

Author information

Correspondence to Jun Liu.

Appendices

Appendix A: Proof of Lemma 1

Let \(\mathcal{E}(\mathbf{u})=\sum_{k=1}^{K}(\mathcal{B}_{k}-\log\mathcal{A}_{k})u_{k}+\sum_{k=1}^{K}u_{k}\log u_{k}\). We use the Lagrange multiplier method to compute the minimum of \(\mathcal{E}\). Denote the Lagrangian \(L(\mathbf{u})=\mathcal{E}(\mathbf{u})+\lambda(\sum_{k=1}^{K}u_{k}-1)\); then

$$ \frac{\delta L}{\delta u_k}=\mathcal{B}_k+\log\frac{u_k}{\mathcal {A}_k}+1+\lambda. $$

Setting \(\frac{\delta L}{\delta u_{k}}=0\) yields a stationary point u* with components

$$ u_k^{*}=\mathcal{A}_k \exp(-\mathcal{B}_k-\lambda-1). $$
(17)

Note that \(\mathbf{u}^{*}\in\Delta\). Summing both sides of (17) over k from 1 to K, we obtain

$$ \lambda+1=\log\sum_{k=1}^{K} \mathcal{A}_k\exp(-\mathcal{B}_k). $$
(18)

Equations (17) and (18) produce

$$ u_k^{*}=\frac{\mathcal{A}_k\exp(-\mathcal{B}_k)}{\sum_{l=1}^{K}\mathcal {A}_l\exp(-\mathcal{B}_l)}. $$
(19)

It is easy to check that u* is a minimizer of \(\mathcal{E}\) (the entropy term makes \(\mathcal{E}\) strictly convex on Δ). Substituting Eq. (19) into \(\mathcal{E}\), one finally gets

$$ \mathcal{E}\bigl(\mathbf{u}^{*}\bigr)=-\log\sum _{k=1}^{K}\mathcal{A}_k\exp (- \mathcal{B}_k), $$

which completes the proof.
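The minimizer (19) is a softmax of \(\log\mathcal{A}_{k}-\mathcal{B}_{k}\), and the optimal value is the negative log-sum-exp. A minimal numerical sketch (the values of \(\mathcal{A}_{k}\) and \(\mathcal{B}_{k}\) below are arbitrary illustrations, not from the paper):

```python
import math

def minimizer(A, B):
    """Closed-form minimizer of E(u) = sum_k (B_k - log A_k) u_k + sum_k u_k log u_k
    over the simplex, per Eq. (19): a softmax of log A_k - B_k."""
    w = [a * math.exp(-b) for a, b in zip(A, B)]
    s = sum(w)
    return [x / s for x in w]

def energy(u, A, B):
    return sum((b - math.log(a)) * uk + uk * math.log(uk)
               for uk, a, b in zip(u, A, B))

A = [0.5, 0.3, 0.2]
B = [1.0, 0.2, 0.7]
u_star = minimizer(A, B)

# Optimal value equals -log sum_k A_k exp(-B_k), as derived in the proof.
opt = -math.log(sum(a * math.exp(-b) for a, b in zip(A, B)))
assert abs(sum(u_star) - 1.0) < 1e-12
assert abs(energy(u_star, A, B) - opt) < 1e-12
```

Any other point of the simplex (e.g. the uniform distribution) gives a strictly larger energy, consistent with strict convexity.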

Appendix B: Proof of Proposition 1

According to the first equation of the iteration scheme (4) and Lemma 1, we have

$$ \left \{ \begin{array}{l} -\mathcal{L}(\varTheta^{\nu})=\tilde{\mathcal{E}}(\varTheta^{\nu},\mathbf {u}^{\nu+1}),\\[5pt] -\mathcal{L}(\varTheta^{\nu+1})=\tilde{\mathcal{E}}(\varTheta^{\nu +1},\mathbf{u}^{\nu+2}). \end{array} \right . $$

On the other hand, the two equations in (4) can provide

$$ \tilde{\mathcal{E}}\bigl(\varTheta^{\nu},\mathbf{u}^{\nu+1}\bigr) \geq \tilde {\mathcal{E}}\bigl(\varTheta^{\nu+1},\mathbf{u}^{\nu+1} \bigr) \geq \tilde {\mathcal{E}}\bigl(\varTheta^{\nu+1}, \mathbf{u}^{\nu+2}\bigr), $$

and thus \(-\mathcal{L}(\varTheta^{\nu+1})\leq -\mathcal{L}(\varTheta^{\nu})\).
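The monotonicity argument of Proposition 1 applies to any exact two-block alternating minimization: each block update cannot increase the joint energy. A toy illustration on a simple quadratic (not the paper's model; the objective and updates below are invented for demonstration):

```python
# Toy objective F(x, y) = (x - y)^2 + 0.1 x^2 + 0.1 (y - 3)^2,
# minimized by exact coordinate updates as in scheme (4).
def F(x, y):
    return (x - y)**2 + 0.1 * x**2 + 0.1 * (y - 3)**2

def update_x(y):  # argmin_x F(x, y): 2(x - y) + 0.2 x = 0
    return y / 1.1

def update_y(x):  # argmin_y F(x, y): 2(y - x) + 0.2 (y - 3) = 0
    return (2 * x + 0.6) / 2.2

x, y = 10.0, -5.0
vals = [F(x, y)]
for _ in range(20):
    x = update_x(y)
    y = update_y(x)
    vals.append(F(x, y))

# Energy is monotonically non-increasing, mirroring Proposition 1.
assert all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))
```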

Appendix C: Proof of Proposition 2

We use the distribution function to show this.

Thus \(p_{\mathfrak{Z}}(z)=\sum_{k=1}^{K}\frac{\gamma_{k}}{\sqrt{2\pi}\sigma_{k}\beta(x)}\exp \bigl\{-\frac{ [z-c_{k}\beta(x) ]^{2}}{2\sigma_{k}^{2}\beta^{2}(x)} \bigr\}\).
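With \(\beta(x)\) fixed at a point, \(p_{\mathfrak{Z}}\) is an ordinary Gaussian mixture with means \(c_{k}\beta\) and standard deviations \(\sigma_{k}\beta\), so it integrates to one. A numerical sanity check (all parameter values below are illustrative assumptions):

```python
import math

# Illustrative mixture parameters and a fixed bias value beta.
gamma = [0.4, 0.6]
c     = [1.0, 3.0]
sigma = [0.5, 0.8]
beta  = 1.2

def p_Z(z):
    """Density from Appendix C with beta(x) held fixed: a Gaussian
    mixture with means c_k * beta and std devs sigma_k * beta."""
    return sum(g / (math.sqrt(2 * math.pi) * s * beta)
               * math.exp(-(z - ck * beta)**2 / (2 * s**2 * beta**2))
               for g, ck, s in zip(gamma, c, sigma))

# Riemann-sum check that the density integrates to 1.
dz = 0.001
total = sum(p_Z(-10 + i * dz) for i in range(int(30 / dz))) * dz
assert abs(total - 1.0) < 1e-3
```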

Appendix D: Proof of Proposition 3

The proof is motivated by [40]. We first give a schematic diagram in Fig. 9. Since we suppose ω is relatively small and Γ is smooth enough, we can regard the curve AB as a line segment. As a result, one gets \(h=\omega\cos\theta\) and \(S_{\mathrm{shadow}}=\theta\omega^{2}-\omega^{2}\sin\theta\cos\theta\). For a sufficiently small ω, the intersections of more than two boundary curves can be ignored compared to the total length of Γ. Under all these suppositions, we then have

Fig. 9: A schematic diagram for the regularization term. The shaded area is the penalty area defined in (7)

Appendix E: The derivation of Eq. (12)

This derivation is very similar to the proof of Lemma 1. The Lagrangian function L for problem (10) is given by

$$ L(\mathbf{u},d)=\mathcal{J}(\mathbf{u})+\int_{\varOmega} d(x) \Biggl[\sum_{k=1}^K u_k(x)-1 \Biggr] \mathrm {d}x, $$

where d is the Lagrangian multiplier. Then

$$ \begin{aligned}[c] \frac{\delta L}{\delta u_k}&=\frac{1}{2(\sigma_k^2)^{\nu}}\int_{\varOmega }G_{\sigma}(y-x) \biggl[\frac{f(x)}{\beta^{\nu}(y)}-c_k^{\nu} \biggr]^2 \mathrm {d}y\\ &\quad {}-\log\frac{\gamma_k^{\nu}}{\sqrt{(\sigma_k^2)^{\nu}}}+\log u_k(x)+1+\lambda r_k^{\nu}(x)+d(x). \end{aligned} $$

By setting \(\frac{\delta L}{\delta u_{k}}=0\) and simplifying, it becomes

$$ u_k(x)=q_k^{\nu}(x)\exp \bigl\{-1-d(x) \bigr\}. $$
(20)

The factor exp{−1−d(x)} in (20) can be obtained by summing both sides of this equation over k from 1 to K and using the fact \(\sum_{k=1}^{K}u_{k}(x)=1\). With the notations used in Sect. 3.3, we have

$$ \exp \bigl\{-1-d(x) \bigr\}=\frac{1}{\sum_{k=1}^K q_k^{\nu}(x)}. $$

Substituting this back into (20) immediately leads to the updating Eq. (12).

Appendix F: The derivation of Eq. (13)

We only give the details of calculating the last equation in (13); the others can be obtained in the same manner. Since

we get

By setting \(\frac{\delta\mathcal{J}}{\delta\beta}=0\), it becomes

$$ \beta^2(y)+s^{\nu+1}(y)\beta(y)-t^{\nu+1}(y)=0. $$

It is easy to see that \(\beta(y)=\frac{-s^{\nu+1}(y)+\sqrt{ [s^{\nu+1}(y) ]^{2}+4t^{\nu+1}(y)}}{2}\) is the positive root, which we take since we assume β(y)>0.
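A short check of the positive-root formula for \(\beta^{2}+s\beta-t=0\) (the values for \(s^{\nu+1}(y)\) and \(t^{\nu+1}(y)\) are illustrative; positivity of the root requires t > 0):

```python
import math

def positive_root(s, t):
    """Positive root of beta^2 + s*beta - t = 0, as used in Appendix F.
    Assumes t > 0, which guarantees sqrt(s^2 + 4t) > |s|."""
    return (-s + math.sqrt(s * s + 4 * t)) / 2.0

# Illustrative values standing in for s^{nu+1}(y) and t^{nu+1}(y).
s, t = 2.5, 3.0
beta = positive_root(s, t)
assert beta > 0
assert abs(beta**2 + s * beta - t) < 1e-12  # it solves the quadratic
```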

Cite this article

Liu, J., Zhang, H. Image Segmentation Using a Local GMM in a Variational Framework. J Math Imaging Vis 46, 161–176 (2013). https://doi.org/10.1007/s10851-012-0376-5
