Abstract
Statistical shape analysis has relied on various models, each with its strengths and limitations. For multigroup analyses, while typical methods pool data to fit a single statistical model, partial pooling through hierarchical modeling can be superior. For pointset shape representations, we propose a novel hierarchical model in Riemannian shape space. The inference treats individual shapes and group-mean shapes as latent variables, and uses expectation maximization that relies on sampling shapes. Our generative model, including shape-smoothness priors, can be robust to segmentation errors, producing more compact per-group models and realistic shape samples. We propose a method for efficient sampling in Riemannian shape space. The results show the benefits of our hierarchical Riemannian generative model for hypothesis testing, over the state of the art.
The authors thank funding via IIT Bombay Seed Grant 14IRCCSG010.
1 Introduction and Related Work
Statistical shape analysis typically relies on boundary point distribution models [4, 12], implicit models [5], medial models [17], or nonlinear dense diffeomorphic warps [2, 6, 10, 19]. Unlike some models based on pointsets [3, 4, 8, 21] or distance transforms [5], medial [7] and warp-based models [6] represent shape as an equivalence class of object boundaries and lead to statistical analyses in the associated Riemannian shape space. While medial representations are limited to non-branching objects, methods based on diffeomorphisms involve very large dimensional Riemannian spaces where the analysis can be expensive and challenged by noise and limited sample sizes of training data [16]. All these approaches are active areas of research in their own right.
Typical cross-sectional studies, e.g., hypothesis testing [3, 18], pool the data from multiple groups to fit a single model. However, when data is naturally organized in groups, partial pooling through hierarchical modeling [9] offers several benefits, including more compact models per group and a reduced risk of overfitting (e.g., for low sample sizes or in the presence of outliers) through shrinkage. We propose a hierarchical model for pointset shape representations in Kendall shape space. While [21] also uses a hierarchical generative model, it estimates all shape covariances in Euclidean space; our method models shape variability in Riemannian shape space, at both the group and population levels. While [1, 2] model nonlinear warps as latent variables and treat the segmentations as error free, we model individual shapes as latent variables (treating data-shape similarity transforms as parameters) and allow for errors in segmentations. Moreover, [1, 2] do not use a hierarchical model for multigroup data.
We fit our model to the data using Monte-Carlo (MC) expectation maximization (EM). For EM, we propose a Markov chain Monte Carlo (MCMC) method for sampling in Riemannian space, extending Skilling's leapfrog [13] and then adapting it to shape space. This sampler is virtually parameter-free, unlike the one in [21], which is sensitive to internal parameter values.
In this paper, we propose a hierarchical generative model using pointset-based shape representations for multigroup shape analysis, with the statistical analysis carried out in the associated Riemannian (Kendall) shape space. We treat individual and group-mean shapes as latent variables within an EM inference framework. The generative model incorporates shape-smoothness priors to regularize model learning, which helps counter errors in imaging and segmentation, producing more compact per-group models and realistic shape samples. We also propose an MCMC sampling algorithm in Riemannian space and adapt it to shape space. We use the model for hypothesis testing on simulated and medical images, showing its benefits over the state of the art.
2 Methods
We describe algorithms for modeling, inference, sampling, and hypothesis testing.
2.1 Hierarchical Generative Statistical Model for Multiple Groups of Shapes
We model each shape as an equivalence class of pointsets, where equivalence is defined through the similarity transform comprising translation, rotation, and isotropic scaling [12]. We model all shape-representing pointsets to be of equal cardinality to use the Procrustes distance [11] between shapes. Model fitting fixes point correspondences across all shapes and optimizes point locations based on the data. The fixed correspondences lead to a pointset representation as an ordered set of points or as a vector.
Notation. Consider a population comprising M groups, where group m has \(N_m\) individuals. Let data \(x_{mi}\) represent the set of pixel locations on the boundary of the segmented anatomical object in individual i in group m. Let \(y_{mi}\) be the (unknown) pointset representing object shape for individual i in group m. Let \(z_m\) be the (unknown) pointset representing the mean object shape for group m. Let \(C_m\) model the covariance of shapes in group m. Let \(\mu \) be the (unknown) pointset representing the population-level mean object shape and let (unknown) C model the associated covariance; \(\mu \) and C capture the variability of the group-mean shapes \(z_m\).
Each shape-representing pointset (or shape pointset) has J points in 3D; so \(y_{mi} \in \mathbb {R}^{3J}\), \(z_m \in \mathbb {R}^{3J}\). Each data-representing pointset (or data pointset) \(x_{mi}\) has arbitrary cardinality. We model the population mean \(\mu \) and covariance C and the group covariances \(\{ C_m \}_{m=1}^M\) as parameters. We model the shape pointsets \(Y_{mi}\) and \(Z_m\) as latent variables. Each shape pointset also lies in preshape space [11, 12], with centroid at the origin and unit \(L^2\) norm. For shape pointsets a and b, the Procrustes distance is \(d_{\text {Pro}} (a, b) := \min _{\mathcal {R}} d_g (a, \mathcal {R} b)\), where operator \(\mathcal {R}\) applies a rotation to each point in the pointset and \(d_g(\cdot ,\cdot )\) is the geodesic distance on the unit hypersphere in preshape space.
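For concreteness, the following minimal sketch (Python/NumPy; our illustration, not the authors' implementation) computes the preshape projection and the Procrustes distance defined above, using the standard SVD-based (Kabsch) solver for the optimal rotation:

```python
import numpy as np

def to_preshape(points):
    """Map a (J, 3) pointset to preshape space: centroid at origin, unit L2 norm."""
    p = points - points.mean(axis=0)
    return p / np.linalg.norm(p)

def optimal_rotation(a, b):
    """Rotation R (3x3, det +1) maximizing <a, R b>, via SVD (Kabsch)."""
    u, _, vt = np.linalg.svd(a.T @ b)          # SVD of the 3x3 cross-covariance
    d = np.sign(np.linalg.det(u @ vt))         # guard against reflections
    return u @ np.diag([1.0, 1.0, d]) @ vt

def procrustes_distance(a, b):
    """d_Pro(a, b) = min_R d_g(a, R b): geodesic distance on the preshape
    hypersphere after rotationally aligning b to a."""
    a, b = to_preshape(a), to_preshape(b)
    b_aligned = b @ optimal_rotation(a, b).T   # apply R to every point (row) of b
    return np.arccos(np.clip(np.sum(a * b_aligned), -1.0, 1.0))
```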
Prior Models. We model a probability density function (PDF) to capture the covariance structures of (i) group-mean variation and (ii) individual shape variation, by extending the approximate Normal law on Riemannian manifolds [15] to shape space, as motivated in [8]. For a and b on the unit hypersphere, let \(\text {Log}_a (b)\) be the logarithmic map of b with respect to a. Considering the tangent space of shape space at \(\mu \) to relate to preshapes that are rotationally aligned to \(\mu \), the logarithmic map of shape a to the tangent space of the shape space at \(\mu \) is \(\text {Log}^S_\mu (a) := \text {Log}_\mu (\mathcal {R}^* a)\), where \(\mathcal {R}^* := \arg \min _{\mathcal {R}} d_g (\mathcal {R} a, \mu )\). Extending the Procrustes distance, the squared Mahalanobis distance of shape a with respect to \(\mu \) and C is \(d_{\text {Mah}}^2 (a; \mu , C) := \text {Log}^S_\mu (a)^{\top } C^{-1} \text {Log}^S_\mu (a)\). To model a PDF that gives larger probabilities to smoother shapes a, we use a prior that penalizes distances between each point \(a_j\) and its neighbors. Let the neighborhood system be \(\mathcal {N} := \{ \mathcal {N}_j \}_{j=1}^J\), where set \(\mathcal {N}_j\) has the neighbor indices of the j-th point. In practice, we get \(\mathcal {N}\) by fitting a triangular mesh to the segmented object boundary. Thus, the probability for shape a is

$$P (a \,|\, \mu , C, \beta ) := \frac{1}{\eta (\mu , C, \beta )} \exp \left( - \frac{1}{2} d_{\text {Mah}}^2 (a; \mu , C) - \frac{\beta }{2} \sum _{j=1}^J \sum _{k \in \mathcal {N}_j} || a_j - a_k ||_2^2 \right) ,$$

where \(\beta \ge 0\) controls the prior strength and \(\eta (\mu , C, \beta )\) is the normalization constant. We use this design to model (i) the conditional PDF \(P (z_m | \mu , C, \beta )\) of group-mean shapes \(z_m\) and (ii) the conditional PDF \(P (y_{mi} | z_m, C_m, \beta _m)\) of individual shapes \(y_{mi}\). The second term in the exponent equals \(0.5\, a^{\top } \varOmega a\), where \(\varOmega \) is a sparse precision matrix with diagonal elements \(2 \beta \) and the only non-zero off-diagonal elements equal to \((-\beta )\) when the corresponding points are neighbors in \(\mathcal {N}\).
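As an illustration, a sketch (SciPy; an illustrative helper of our own, assuming 3D points) that assembles the sparse precision matrix \(\varOmega \) just described from the mesh neighborhood system:

```python
from scipy import sparse

def smoothness_precision(neighbors, beta, dim=3):
    """Omega: diagonal entries 2*beta; off-diagonal entries -beta for mesh
    neighbors. `neighbors[j]` holds the neighbor indices of the j-th point."""
    J = len(neighbors)
    omega = sparse.lil_matrix((J, J))
    for j, nbrs in enumerate(neighbors):
        omega[j, j] = 2.0 * beta
        for k in nbrs:
            omega[j, k] = -beta
    # Each point carries `dim` coordinates; expand to the full (dim*J)^2 matrix.
    return sparse.kron(omega.tocsr(), sparse.eye(dim)).tocsr()
```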
Likelihood Model. To measure the dissimilarity between a data pointset \(x_{mi}\) and an individual shape pointset \(y_{mi}\) of differing cardinality, we use the measure \(\varDelta (x_{mi}, y_{mi}) :=\) \(\min _{\mathcal {S}_{mi}} \left( \sum _{j=1}^J \min _l ||\mathcal {S}_{mi} x_{mil} - y_{mij} ||_2^2 + \sum _{l=1}^L \min _j ||\mathcal {S}_{mi} x_{mil} - y_{mij} ||_2^2 \right) \), where the operator \(\mathcal {S}_{mi}\) applies a similarity transform to each point \(x_{mil}\) in the pointset \(x_{mi}\) to factor out similarity transforms on the data. Unlike methods [6, 21] that use the current distance [19], which has quadratic complexity in either pointset's cardinality, \(\varDelta (x, y)\) can be well approximated efficiently using algorithms of complexity close to \(O ((J+L) (\log J + \log L))\), as described later. We model \(P (x_{mi} | y_{mi}) \propto \exp (- \varDelta (x_{mi}, y_{mi}))\).
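A minimal sketch of evaluating \(\varDelta \) with k-d trees (SciPy's cKDTree), matching the stated complexity; this assumes the similarity transform has already been applied to the data pointset:

```python
import numpy as np
from scipy.spatial import cKDTree

def delta(x, y):
    """x: (L, 3) data pointset; y: (J, 3) shape pointset. Sum of squared
    nearest-neighbor distances in both directions."""
    tree_x, tree_y = cKDTree(x), cKDTree(y)   # O(L log L) + O(J log J)
    d_y_to_x, _ = tree_x.query(y)             # each y-point to nearest x-point
    d_x_to_y, _ = tree_y.query(x)             # each x-point to nearest y-point
    return np.sum(d_y_to_x**2) + np.sum(d_x_to_y**2)
```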
2.2 Model Fitting Using Monte-Carlo Expectation Maximization
We use EM to fit the hierarchical model to the data pointsets \(x := \{ \{ x_{mi} \}_{i=1}^{N_m} \}_{m=1}^M\). In this paper, \(\beta _m := \beta , \forall m\), and \(\beta \) is user defined. Let the parameter set be \(\theta := \{ \mu , C, \{ C_m \}_{m=1}^M \}\). At iteration t, given parameter estimates \(\theta ^t := \{ \mu ^t, C^t, \{ C_m^t \}_{m=1}^M \}\), the E step defines \(Q (\theta ; \theta ^t) := E_{P (Y,Z | x, \theta ^t)} [ \log P (x,Y,Z | \theta ) ]\), where the complete-data likelihood \( P (x,y,z | \theta ) = \prod _m \prod _i P (x_{mi} | y_{mi}) P (y_{mi} | z_m, C_m, \beta _m) P (z_m | \mu , C, \beta ). \) Because the expectation is analytically intractable, we use the Monte-Carlo approximation

$$\widehat{Q} (\theta ; \theta ^t) := \frac{1}{S} \sum _{s=1}^S \log P (x, y^s, z^s \,|\, \theta ), \quad \text {where } (y^s, z^s) \sim P (Y, Z \,|\, x, \theta ^t).$$

The M step obtains parameter updates \(\theta ^{t+1} := \arg \max _{\theta } \widehat{Q} (\theta ; \theta ^t)\). Given data x and the sampled shape pairs \(\{ (y^s, z^s) \}_{s=1}^S\), we alternately optimize the shape-distribution parameters \(\theta \) and the internal parameters \(\mathcal {S}_{mi}\), until convergence to a local optimum.
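Schematically, the overall MC-EM loop is sketched below (Python pseudocode; the helper names are hypothetical placeholders for the sampler of Sect. 2.3 and the M-step updates detailed next, not the authors' API):

```python
def monte_carlo_em(x, theta, n_em_iters=50, n_samples=20, n_alt=10):
    for t in range(n_em_iters):
        # E step: MCMC samples (y^s, z^s) from P(Y, Z | x, theta^t) (Sect. 2.3).
        samples = sample_posterior_pairs(x, theta, n_samples)
        # M step: alternate the updates below until (approximate) convergence.
        for _ in range(n_alt):
            transforms = update_similarity_transforms(x, samples)
            theta = update_mean_and_covariances(theta, samples, transforms)
    return theta
```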
Update Similarity Transforms \(\mathcal {S}_{mi}\). The optimal similarity transform \(\mathcal {S}_{mi}\) is

$$\widehat{\mathcal {S}}_{mi} := \arg \min _{\mathcal {S}_{mi}} \sum _{s=1}^S \left( \sum _{j=1}^J \min _l ||\mathcal {S}_{mi} x_{mil} - y_{mij}^s ||_2^2 + \sum _{l=1}^L \min _j ||\mathcal {S}_{mi} x_{mil} - y_{mij}^s ||_2^2 \right) ,$$

which aligns the data pointset \(x_{mi}\) to the set of sampled shape pointsets \(\{ y_{mi}^s \}_{s=1}^S\). We optimize the translation, scaling, and rotation parameters of \(\mathcal {S}_{mi}\) using gradient descent, approximating the objective function efficiently. The objective-function value depends on the nearest point in one pointset to each point in the other pointset. For pointsets \(x_{mi}\) (cardinality L) and \(y_{mi}\) (cardinality J), we can find the required pairs of nearest neighbors in \(O ((J+L) (\log J + \log L))\) time by building k-d trees (\(O (J \log J) + O (L \log L)\)) followed by nearest-neighbor searches (\(O (J \log L) + O (L \log J)\)). Assuming the parameter updates to be sufficiently small, (i) for most data points \(x_{mil}\), the nearest shape point \(y_{mi\phi (l)}^s\) is the same before and after the update and (ii) for most shape points \(y_{mij}^s\), the displacement vector to the nearest data point \(x_{mi\psi (j)}\) is the same before and after the update. Thus, constraining parameter updates to be small, we approximate the gradients of the desired objective function by first finding the pairs of nearest points \((l, \phi (l))\) and \((j, \psi (j))\) and then taking the gradients of

$$\sum _{s=1}^S \left( \sum _{l=1}^L ||\mathcal {S}_{mi} x_{mil} - y_{mi\phi (l)}^s ||_2^2 + \sum _{j=1}^J ||\mathcal {S}_{mi} x_{mi\psi (j)} - y_{mij}^s ||_2^2 \right) .$$

Updates for translation and scale are in closed form. We optimize rotation using gradient descent on the manifold of orthogonal matrices with determinant 1.
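For illustration, with the correspondences held fixed, the translation and scale updates reduce to least squares; a minimal sketch (our own, assuming matched point pairs pooled from both nearest-neighbor maps and the current rotation matrix):

```python
import numpy as np

def update_translation_scale(x_pts, y_pts, rotation, scale):
    """x_pts, y_pts: (P, 3) matched pairs pooled from both nearest-neighbor
    maps; minimize sum_p || s * R x_p + t - y_p ||^2 over t and s."""
    rx = x_pts @ rotation.T                            # rotated data points
    translation = (y_pts - scale * rx).mean(axis=0)    # closed-form t given s, R
    resid = y_pts - translation
    scale = np.sum(resid * rx) / np.sum(rx * rx)       # closed-form s given t, R
    return scale, translation
```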
Update Mean \(\mu \). The optimal \(\widehat{\mu }\) comes from the constrained nonlinear optimization

$$\widehat{\mu } := \arg \max _{\mu \,:\, ||\mu ||_2 = 1} \sum _{s=1}^S \sum _{m=1}^M \log P (z_m^s \,|\, \mu , C, \beta ).$$

Differentiating the \(\text {Log}\) map, we optimize \(\mu \) using projected gradient descent.
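A minimal sketch of one projected gradient step for \(\mu \); the callable `grad_neg_log_lik` is a hypothetical stand-in for the derivative of the objective above:

```python
import numpy as np

def projected_gradient_step(mu, grad_neg_log_lik, step=1e-2):
    """One step: unconstrained descent, then projection back to preshape space."""
    mu = mu - step * grad_neg_log_lik(mu)  # Euclidean gradient step
    mu = mu - mu.mean(axis=0)              # re-center: centroid at the origin
    return mu / np.linalg.norm(mu)         # re-normalize: unit L2 norm
```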
Update Covariances \(C_m\). The optimal covariance \(C_m\) minimizes

$$\sum _{s=1}^S \sum _{i=1}^{N_m} \left( \frac{1}{2} d_{\text {Mah}}^2 (y_{mi}^s; z_m^s, C_m) + \log \eta (z_m^s, C_m, \beta _m) \right) .$$

Although the normalization term \(\eta (z_m^s, C_m, \beta _m)\) is difficult to evaluate analytically, it can be approximated well enough in practice. Assuming that the shape distribution \(P (y_{mi}^s | z_m^s, C_m)\) has sufficiently low variance, the tangent vector \(\text {Log}^S_{z_m^s} (y_{mi}^s)\) is close to the difference vector \(y_{mi}^s - z_m^s\), in which case \(P (y_{mi}^s | z_m^s, C_m)\) appears as the product of a multivariate Gaussian \(G (y_{mi}^s; z_m^s, C_m)\) with another multivariate Gaussian \(G (y_{mi}^s; \mathbf{0}, \varOmega ^{-1})\). The product distribution equals \(G (y_{mi}^s; z_m^s, C_m^{\text {reg}})\), where the regularized covariance \(C_m^{\text {reg}} := (C_m^{-1} + \varOmega )^{-1}\) restricts all variability to the tangent space at the mean \(z_m\), and the normalization term \(\eta (z_m^s, C_m, \beta _m) \approx (2 \pi )^{D/2} |C_m^{\text {reg}}|^{0.5}\). Then, the optimal \(\widehat{C}_m^{\text {reg}}\) is the sample covariance of the tangent vectors \(\text {Log}^S_{z_m^s} (y_{mi}^s)\) in the tangent spaces at \(z_m^s\), and the optimal covariance \(\widehat{C}_m = ( (\widehat{C}_m^{\text {reg}})^{-1} - \varOmega )^{-1}\) follows in closed form.
Update Covariance C. The strategy for optimizing C is analogous to the one just described for estimating \(C_m\). We first compute \(\widehat{C}^{\text {reg}}\) as the sample covariance of the tangent vectors \(\text {Log}^S_{\mu } (z_m^s)\) in the tangent space at \(\mu \). Then, the optimal \(\widehat{C} = ( (\widehat{C}^{\text {reg}})^{-1} - \varOmega )^{-1}\).
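Both covariance updates then take the same closed form; a sketch with dense matrices (the small ridge term guarding against rank deficiency when samples are few is our addition):

```python
import numpy as np

def update_covariance(tangent_vectors, omega, ridge=1e-8):
    """tangent_vectors: (n, D) stacked log-mapped residuals; omega: (D, D)
    dense smoothness precision. Returns C from C_reg = (C^-1 + Omega)^-1."""
    c_reg = np.cov(tangent_vectors, rowvar=False)   # sample covariance C_reg
    c_reg += ridge * np.eye(c_reg.shape[0])         # guard: few samples, high D
    return np.linalg.inv(np.linalg.inv(c_reg) - omega)
```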
2.3 Robust Efficient MCMC Sampling on Riemannian Manifolds
EM entails sampling shape-pointset pairs \((y^s, z^s)\) from their posterior PDF \(P (Y,Z | x, \theta ^t)\) in shape space. We propose a generic scheme for efficient sampling in high-dimensional spaces on a Riemannian manifold and adapt it for sampling in shape space. Standard Metropolis-Hastings or Gibbs MCMC samplers are inefficient in high-dimensional spaces [13], where the data typically shows strong correlations between dimensions. We propose to adapt Skilling's multistate leapfrog method [13], an efficient MCMC sampler, to Riemannian spaces. Alternative efficient MCMC methods, e.g., the Hamiltonian Monte Carlo sampler used in [21], are sensitive to the tuning of their internal parameters [20]. We propose a sampler that is robust in practice, requiring little parameter tuning.
Consider a multivariate random variable F taking values f on a Riemannian manifold \(\mathbb {F}\) with associated PDF P(F). We initialize the MCMC sampler with a set of states \(\{ f^q \in \mathbb {F} \}_{q=1}^Q\). We propose to leapfrog a randomly-chosen current state \(f^{q_1}\) over a randomly-chosen state \(f^{q_2}\) to give a proposal state \(f^{q_3} := \text {Exp}^{\mathbb {F}}_{f^{q_2}} (- \text {Log}^{\mathbb {F}}_{f^{q_2}} (f^{q_1}) )\), where the logarithmic and exponential maps are with respect to the manifold \(\mathbb {F}\). The proposal state \(f^{q_3}\) is accepted, according to the Metropolis method, with probability equal to the ratio \(P (f^{q_3}) / P (f^{q_1})\). The sampler only needs to evaluate probabilities, without needing the gradients of P(F). Such leapfrog jumps are repeated and, after sufficient burn-in, the set of Q states is considered a sample from P(F).
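A sketch of this sampler for the special case of the unit hypersphere (the preshape manifold), with its closed-form exponential and logarithmic maps; `log_prob` is a hypothetical callable returning \(\log P(f)\):

```python
import numpy as np

def sphere_log(p, q):
    """Log map of q at p on the unit hypersphere (both unit vectors)."""
    d = q - np.dot(p, q) * p                       # tangential component of q
    nd = np.linalg.norm(d)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nd < 1e-12 else (theta / nd) * d

def sphere_exp(p, v):
    """Exp map of tangent vector v at p."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * (v / nv)

def leapfrog_mcmc(states, log_prob, n_steps, rng=None):
    """states: list of unit vectors (the Q chain states), updated in place."""
    rng = rng or np.random.default_rng()
    for _ in range(n_steps):
        q1, q2 = rng.choice(len(states), size=2, replace=False)
        # Leapfrog f^{q1} over f^{q2}: f^{q3} = Exp_{f^{q2}}(-Log_{f^{q2}}(f^{q1})).
        proposal = sphere_exp(states[q2], -sphere_log(states[q2], states[q1]))
        # Metropolis acceptance with ratio P(f^{q3}) / P(f^{q1}).
        if np.log(rng.uniform()) < log_prob(proposal) - log_prob(states[q1]):
            states[q1] = proposal
    return states
```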
We adapt the proposed leapfrog sampling scheme to shape space for sampling from the Normal law \(P (z | \mu , C, \beta )\) that defines a Gaussian distribution in the tangent space of shape space at \(\mu \), where the tangent space comprises all shapes aligned to \(\mu \). We initialize the set of states to pointsets \(\{ z^q \}_{q=1}^Q\) that lie in preshape space and are rotationally aligned to the mean \(\mu \). In shape space, we propose the leapfrog step

$$z^{q_3} := \arg \min _{c := \mathcal {R} b} d_g (c, \mu ) , \quad \text {where } b := \text {Exp}_{z^{q_2}} (- \text {Log}_{z^{q_2}} (z^{q_1})),$$

which follows the geodesic from \(z^{q_1}\) through \(z^{q_2}\) to "leap" to b and then rotationally re-aligns b to \(\mu \) to give \(z^{q_3}\) in shape space.
2.4 Hypothesis Testing on Riemannian Manifolds
Unlike parametric tests, permutation tests are nonparametric, rely only on the generic assumption of exchangeability, lead to stronger control over Type-1 error, and are more robust to random errors in the measurement and post-processing of the image data. We use permutation testing to test the null hypothesis of the equality of the two group distributions in shape space. After estimating the group means and covariances \(\{ \mu ^m, C^m \}_{m=1}^M\), we propose a test statistic that measures the difference between the shape distributions of two cohorts, say A and B, by adding the squared Mahalanobis geodesic distances between the group means with respect to each group covariance, i.e., \(T := d_{\text {Mah}}^2 (\mu ^A; \mu ^B, C^B) + d_{\text {Mah}}^2 (\mu ^B; \mu ^A, C^A)\). The test-statistic distribution is unknown analytically, and we assess the variability of the resulting p values using bootstrap sampling of the cohorts (150 repeats).
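Schematically, the permutation test looks as follows (a sketch; `fit_group`, which refits the per-group mean and covariance on relabeled cohorts, and `mahalanobis_sq` are hypothetical placeholders, not the authors' API):

```python
import numpy as np

def permutation_test(data_a, data_b, fit_group, mahalanobis_sq,
                     n_perms=1000, rng=None):
    """data_a, data_b: lists of data pointsets; returns a one-sided p value."""
    rng = rng or np.random.default_rng()
    pooled, n_a = data_a + data_b, len(data_a)

    def statistic(group_a, group_b):
        (mu_a, c_a), (mu_b, c_b) = fit_group(group_a), fit_group(group_b)
        # T = d_Mah^2(mu_A; mu_B, C_B) + d_Mah^2(mu_B; mu_A, C_A)
        return mahalanobis_sq(mu_a, mu_b, c_b) + mahalanobis_sq(mu_b, mu_a, c_a)

    t_obs = statistic(data_a, data_b)
    t_null = []
    for _ in range(n_perms):
        perm = rng.permutation(len(pooled))        # relabel under the null
        t_null.append(statistic([pooled[i] for i in perm[:n_a]],
                                [pooled[i] for i in perm[n_a:]]))
    return float(np.mean(np.asarray(t_null) >= t_obs))
```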
3 Results and Conclusion
We compare our method with ShapeWorks [3], which does not employ a hierarchical model, restricts point locations within shapes to the object boundary, and does not enforce shape smoothness. We also compare our method with an improved version of the hierarchical non-Riemannian method in [21], obtained by adding the shape-smoothness prior and replacing the current distance with the cost-effective one in [8]. After EM gives the optimal parameters \(\theta ^*\), we get the optimal individual shapes \(y_{mi}^*\) and group-mean shapes \(z_m^*\) as the maximum-a-posteriori (MAP) estimates \(\arg \max _{z_m, y_{mi}} P (z_m, y_{mi} | x, \theta ^*, \beta )\).
Fig. 1. Simulated Data, Proposed Method. (a) Example noisy ellipsoid segmentation, whose boundary gives data \(x_{mi}\). (b), (c) Example sampled shapes \(y_{mi}^s\). (d), (e) Group-mean MAP estimates \(z_1^*, z_2^*\), obtained after EM optimization for parameters \(\theta ^*\). (f) Population mean estimate \(\mu ^*\) with Cohen's effect sizes at each vertex j, computed via means \({z_{1j}^*}, {z_{2j}^*}\) and variances \(C_{1jj}, C_{2jj}\).
Fig. 2. Simulated Data, Comparison with Other Methods. Permutation-test histograms of test statistics, along with their variability estimated through bootstrap sampling of cohorts, for (a) ShapeWorks, (b) a hierarchical model with Euclidean analysis, and (c) the proposed hierarchical model with Riemannian analysis. Eigenvalue spectra of the estimated (d) population covariance C, (e) group covariance \(C_1\), and (f) group covariance \(C_2\), for each of the 3 aforementioned methods.
Validation on Simulated Data. We generate 2 groups of 40 ellipsoids each, with subtle inter-group differences. Each group has 1 major mode of variation: we fix 2 of the ellipsoid axis lengths to 10 mm and vary the third. The third-axis lengths for group I are Gaussian distributed with mean 7.25 mm and standard deviation (SD) 0.4 mm; for group II, they are Gaussian distributed with mean 8.5 mm and SD 0.5 mm. To mimic realistic segmentations in typical applications, we corrupt the segmentations with random coarse-scale and fine-scale (noise) perturbations to each object boundary (Fig. 1(a)).
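For reference, the group-wise axis lengths can be generated as follows (a sketch; meshing of the ellipsoids and the boundary perturbations are omitted):

```python
import numpy as np

def sample_axis_lengths(group, n=40, rng=None):
    """Two axes fixed at 10 mm; the third axis is Gaussian per group (in mm)."""
    rng = rng or np.random.default_rng()
    mean, sd = (7.25, 0.4) if group == 1 else (8.5, 0.5)
    third = rng.normal(mean, sd, size=n)
    return np.column_stack([np.full(n, 10.0), np.full(n, 10.0), third])
```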
Figures 1(b)–(c) show example sampled shapes, which are well regularized because of the smoothness prior controlled by \(\beta \). The population mean estimate (Fig. 1(f)) expectedly lies "between" the MAP estimates for the group means (Figs. 1(d)–(e)). The proposed hierarchical Riemannian approach leads to a more compact model (Figs. 2(d)–(f)), i.e., a larger fraction of variability captured in fewer modes, compared to the hierarchical non-Riemannian approach and the non-hierarchical non-Riemannian ShapeWorks. ShapeWorks's eigenspectra decay slowly because (i) it does not use partial pooling and (ii) it assumes the segmentations to be devoid of errors, i.e., the fine-scale or coarse-scale perturbations resulting from noise, artifacts (e.g., inhomogeneity), or human error. Repeated permutation testing through bootstrap sampling of cohorts (Figs. 2(a)–(c)) gives the smallest p values, by a significant margin, for the proposed hierarchical Riemannian approach, correctly favoring rejection of the null hypothesis.
Fig. 3. Bone Imaging Data, Proposed Method. (a) Example segmentation of the capitate wrist bone, whose boundary gives data \(x_{mi}\). (b), (c) Example sampled shapes \(y_{mi}^s\). (d), (e) Group-mean MAP estimates \(z_1^*, z_2^*\), obtained after EM optimization for parameters \(\theta ^*\). (f) Population mean estimate \(\mu ^*\) with Cohen's effect sizes at each vertex, computed as in Fig. 1(f).
Fig. 4. Bone Imaging Data, Comparison with Other Methods. Permutation-test histograms of test statistics, along with their variability estimated through bootstrap sampling of cohorts, for (a) ShapeWorks, (b) a hierarchical model with Euclidean analysis, and (c) the proposed hierarchical model with Riemannian analysis. Eigenvalue spectra of the estimated (d) population covariance C, (e) group covariance \(C_1\), and (f) group covariance \(C_2\), for each of the 3 aforementioned methods.
Evaluation on Medical Data. We test for gender differences in the shapes of 4 wrist bones, with 15 subjects per bone per group [14]. For the capitate bone (Fig. 3), the proposed method gives smaller permutation-test p values (Fig. 4(a)–(c)) than ShapeWorks and the hierarchical non-Riemannian approach, stemming from a more compact model (Fig. 4(d)–(f)). Bootstrap sampling of the cohorts yielded the variability of the p values, which we summarize through the mean (with SD in parentheses): (i) proposed approach: 0.04 (0.04), (ii) ShapeWorks: 0.60 (0.13), (iii) hierarchical non-Riemannian approach: 0.36 (0.11). For both the trapezoid and trapezium bones, the p values were: (i) proposed approach: 0.04 (0.02) and (ii) ShapeWorks: 0.13 (0.04). For the pisiform bone, the p values were: (i) proposed approach: 0.08 (0.03) and (ii) ShapeWorks: 0.10 (0.04).
We evaluate the quality of the shape fit \(y_{mi}^*\) by measuring the distance from each point in shape \(y_{mi}^*\) to the nearest point in data \(x_{mi}\). These distances, as a percentage of the distance between the two farthest points of the population mean \(\mu ^*\) (which has unit norm), were small for the bone populations: mean 0.98%, median 0.88%, SD 0.5%.
Conclusion. We propose a hierarchical generative model in Riemannian shape space. The generative model counters errors in the data, and the shape-smoothness prior acts as a regularizer. We propose novel methods for robust, efficient sampling and hypothesis testing in Riemannian shape space. Our method detects subtle differences between small cohorts, simulated and medical, more accurately than the state of the art.
References
Allassonnière, S., Amit, Y., Trouvé, A.: Toward a coherent statistical framework for dense deformable template estimation. J. R. Stat. Soc. Ser. B 69(1), 3–29 (2007)
Allassonnière, S., Kuhn, E., Trouvé, A.: Construction of Bayesian deformable models via a stochastic approximation algorithm: a convergence study. Bernoulli 16(3), 641–678 (2010)
Cates, J., Fletcher, T., Styner, M., Shenton, M., Whitaker, R.: Shape modeling and analysis with entropy-based particle systems. Proc. Inf. Process. Med. Imaging 20, 333–345 (2007)
Cootes, T., Taylor, C., Cooper, D., Graham, J.: Active shape models - their training and application. Comput. Vis. Image Underst. 61(1), 38–59 (1995)
Dambreville, S., Rathi, Y., Tannenbaum, A.: A framework for image segmentation using shape models and kernel space shape priors. IEEE Trans. Pattern Anal. Mach. Intell. 30(8), 1385–1399 (2008)
Durrleman, S., Pennec, X., Trouvé, A., Ayache, N.: Statistical models of sets of curves and surfaces based on currents. Med. Image Anal. 13(5), 793–808 (2009)
Fletcher, T., Lu, C., Pizer, S., Joshi, S.: Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Trans. Med. Imaging 23(8), 995–1005 (2004)
Gaikwad, A.V., Shigwan, S.J., Awate, S.P.: A statistical model for smooth shapes in Kendall shape space. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 628–635. Springer, Heidelberg (2015). doi:10.1007/978-3-319-24574-4_75
Gelman, A.: Multilevel (hierarchical) modeling: what it can and cannot do. Technometrics 48(3), 432–435 (2006)
Glasbey, C.A., Mardia, K.V.: A penalized likelihood approach to image warping. J. R. Stat. Soc. Ser. B 63(3), 465–514 (2001)
Goodall, C.: Procrustes methods in the statistical analysis of shape. J. R. Stat. Soc. Ser. B 53(2), 285–339 (1991)
Kendall, D.: A survey of the statistical theory of shape. Stat. Sci. 4(2), 87–99 (1989)
MacKay, D.: Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge (2012)
Moore, D., Crisco, J., Trafton, T., Leventhal, E.: A digital database of wrist bone anatomy and carpal kinematics. J. Biomech. 40(11), 2537–2542 (2007)
Pennec, X.: Intrinsic statistics on Riemannian manifolds: basic tools for geometric measurements. J. Math. Imaging Vis. 25(1), 127–154 (2006)
Pizer, S., Jung, S., Goswami, D., Vicory, J., Zhao, X., Chaudhuri, R., Damon, J., Huckemann, S., Marron, J.: Nested sphere statistics of skeletal models. Innov. Shape Anal. 93 (2012)
Siddiqi, K., Pizer, S.: Medial Representations: Mathematics, Algorithms and Applications. Springer, Netherlands (2008)
Terriberry, T.B., Joshi, S.C., Gerig, G.: Hypothesis testing with nonlinear shape models. In: Christensen, G.E., Sonka, M. (eds.) IPMI 2005. LNCS, vol. 3565, pp. 15–26. Springer, Heidelberg (2005). doi:10.1007/11505730_2
Vaillant, M., Glaunès, J.: Surface matching via currents. In: Christensen, G.E., Sonka, M. (eds.) IPMI 2005. LNCS, vol. 3565, pp. 381–392. Springer, Heidelberg (2005)
Wang, Z., Mohamed, S., de Freitas, N.: Adaptive Hamiltonian and Riemann manifold Monte Carlo samplers. Proc. Int. Conf. Mach. Learn. 3, 1462–1470 (2013)
Yu, Y.-Y., Fletcher, P.T., Awate, S.P.: Hierarchical Bayesian modeling, estimation, and sampling for multigroup shape analysis. In: Golland, P., Hata, N., Barillot, C., Hornegger, J., Howe, R. (eds.) MICCAI 2014. LNCS, vol. 8675, pp. 9–16. Springer, Heidelberg (2014). doi:10.1007/978-3-319-10443-0_2