On Model-Based Clustering of Directional Data with Heavy Tails

Journal of Classification

Abstract

Directional statistics deals with data that can be naturally expressed in the form of vector directions. The von Mises-Fisher distribution is one of the most fundamental parametric models to describe directional data. Mixtures of von Mises-Fisher distributions represent a popular approach to handling heterogeneous populations. However, components of such models can be affected by the presence of mild outliers or cluster tails heavier than what can be accommodated by means of a von Mises-Fisher distribution. To relax these model limitations, a mixture of contaminated von Mises-Fisher distributions is proposed. The performance of the proposed methodology is tested on synthetic data and applied to text and genetics data. The obtained results demonstrate the importance of the proposed procedure and its superiority over the traditional mixture of von Mises-Fisher distributions in the presence of heavy tails.


Data Availability

The data set analyzed in Section 4.1 is publicly available as a supplement for Maitra and Ramler (2010). The data set considered in Section 4.2 is publicly available from the R package spls (Chung et al., 2019). Other data are available from the corresponding author upon request.

References

  • Banerjee, A., Dhillon, I. S., Ghosh, J., and Sra, S. (2003), Generative model-based clustering of directional data, in Proceedings of the ninth ACM SIGKDD international conference on Knowledge Discovery and Data Mining, ACM, pp. 19–28.

  • Banerjee, A., Dhillon, I. S., Ghosh, J., & Sra, S. (2005). Clustering on the unit hypersphere using von Mises-Fisher distributions. Journal of Machine Learning Research, 6, 1345–1382.


  • Banerjee, A., Dhillon, I. S., Ghosh, J., and Sra, S. (2009), Text clustering with mixture of von Mises-Fisher distributions, Chapman and Hall/CRC.

  • Begashaw, G. B., & Yohannes, Y. B. (2020). Review of outlier detection and identifying using robust regression model. International Journal of Systems Science and Applied Mathematics, 5, 4–11.


  • Bijral, A. S., Breitenbach, M., and Grudic, G. (2007), Mixture of Watson distributions: A generative model for hyperspherical embeddings, in Artificial Intelligence and Statistics, PMLR, pp. 35–42.

  • Bingham, C. (1974), An antipodally symmetric distribution on the sphere, The Annals of Statistics, 2, 1201–1225.

  • Boomsma, W., Kent, J. T., Mardia, K. V., Taylor, C. C., & Hamelryck, T. (2006). Graphical models and directional statistics capture protein structure. Interdisciplinary Statistics and Bioinformatics, 25, 91–94.


  • Cabella, P., & Marinucci, D. (2009). Statistical challenges in the analysis of cosmic microwave background radiation. The Annals of Applied Statistics, 3, 61–95.


  • Cabral, C., Lachos, V., & Prates, M. (2012). Multivariate mixture modelling using skew-normal independent distributions. Computational Statistics & Data Analysis, 56, 126–142.


  • Chun, H., & Keleş, S. (2010). Sparse partial least squares regression for simultaneous dimension reduction and variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72, 3–25.


  • Chung, D., Chun, H., and Keles, S. (2019), spls, R package version 2.2-3.

  • Dang, U. J., Browne, R. P., & McNicholas, P. D. (2015). Mixtures of multivariate power exponential distributions. Biometrics, 71, 1081–1089.


  • Defays, D. (1977). An efficient algorithm for a complete link method. The Computer Journal, 20, 364–366.


  • Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 39, 1–22.


  • Dhillon, I. S., & Modha, D. S. (2001). Concept decompositions for large sparse text data using clustering. Machine Learning, 42, 143–175.


  • Dhillon, I. S. and Sra, S. (2003), Modeling data using directional distributions, Tech. rep., TR-03-06, Department of Computer Sciences, The University of Texas at Austin.

  • Ester, M., Kriegel, H.-P., Sander, J., and Xu, X. (1996), A density-based algorithm for discovering clusters in large spatial databases with noise, in KDD’96, vol. 96, pp. 226–231.

  • Farcomeni, A., & Punzo, A. (2020). Robust model-based clustering with mild and gross outliers. TEST, 29, 989–1007.


  • Fiedler, M. (1973). Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23, 298–305.


  • Fraley, C., & Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97, 611–631.


  • Frühwirth-Schnatter, S. (2006), Finite mixture and Markov switching models, Springer Science & Business Media.

  • García-Portugués, E., Barros, A. M. G., Crujeiras, R. M., González-Manteiga, W., & Pereira, J. (2014). A test for directional-linear independence, with applications to wildfire orientation and size. Stochastic environmental research and risk assessment, 28, 1261–1275.


  • Gather, U., & Becker, C. (1997). Outlier identification and robust methods. Handbook of Statistics, 15, 123–143.


  • Hassanzadeh, F., & Kalaylioglu, Z. (2018). A new multimodal and asymmetric bivariate circular distribution. Environmental and Ecological Statistics, 25, 363–385.


  • Hornik, K., Feinerer, I., Kober, M., & Buchta, C. (2012). Spherical \(k\)-means clustering. Journal of Statistical Software, 50, 1–22.


  • Hornik, K., & Grün, B. (2014). On maximum likelihood estimation of the concentration parameter of von Mises-Fisher distributions. Computational Statistics, 29, 945–957.


  • Hubert, L., & Arabie, P. (1985). Comparing partitions. Journal of Classification, 2, 193–218.


  • Jung, S., Foskey, M., & Marron, J. S. (2011). Principal arc analysis on direct product manifolds. The Annals of Applied Statistics, 5, 578–603.


  • Karypis, G. (2002). CLUTO - a clustering toolkit. Technical report, University of Minnesota, Department of Computer Science.


  • Kato, S., & Jones, M. C. (2013). An extended family of circular distributions related to wrapped Cauchy distributions via Brownian motion. Bernoulli, 19, 154–171.


  • Kaufman, L., & Rousseeuw, P. J. (2009). Finding groups in data: An introduction to cluster analysis. John Wiley & Sons.


  • Kent, J. T. (1982). The Fisher-Bingham distribution on the sphere. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 44, 71–80.


  • Kim, S., & SenGupta, A. (2021). Multimodal exponential families of circular distributions with application to daily peak hours of PM2.5 level in a large city. Journal of Applied Statistics, 48, 3193–3207.


  • Krishna, K., & Murty, M. N. (1999). Genetic k-means algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 29, 433–439.


  • Lee, S. X., & McLachlan, G. (2013). On mixtures of skew normal and skew \(t\)-distributions. Advances in Data Analysis and Classification, 7, 241–266.


  • Lee, T. I., Rinaldi, N. J., Robert, F., Odom, D. T., Bar-Joseph, Z., Gerber, G. K., Hannett, N. M., Harbison, C. T., Thompson, C. M., Simon, I., Zeitlinger, J., Jennings, E. G., Murray, H. L., Gordon, D. B., Ren, B., Wyrick, J. J., Tagne, J.-B., Volkert, T. L., Fraenkel, E., … Young, R. A. (2002). Transcriptional regulatory networks in Saccharomyces cerevisiae. Science, 298, 799–804.


  • Ley, C. and Verdebout, T. (2017), Modern directional statistics, Chapman and Hall/CRC.

  • Ley, C., & Verdebout, T. (2018). Applied directional statistics: Modern methods and case studies. CRC Press.


  • Lin, T. I., Lee, J. C., & Hsieh, W. J. (2007). Robust mixture modeling using the skew \(t\) distribution. Statistics and Computing, 17, 81–92.


  • Lygre, A., & Krogstad, H. E. (1986). Maximum entropy estimation of the directional distribution in ocean wave spectra. Journal of Physical Oceanography, 16, 2052–2060.


  • MacQueen, J. (1967), Some methods for classification and analysis of multivariate observations, in 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297.

  • Maitra, R., & Ramler, I. (2010). A k-mean-directions algorithm for fast clustering of data on the sphere. Journal of Computational and Graphical Statistics, 19, 377–396.


  • Mardia, K. V., Foldager, J. I., and Frellsen, J. (2018), Directional statistics in protein bioinformatics, in Applied Directional Statistics, Chapman and Hall/CRC, pp. 17–40.

  • Mardia, K. V., & Jupp, P. E. (2000). Directional statistics. John Wiley & Sons.


  • Marinucci, D., & Peccati, G. (2011). Random fields on the sphere: Representation, limit theorems and cosmological applications. Cambridge University Press.


  • McLachlan, G., & Peel, D. (2000). Finite mixture models. John Wiley & Sons.


  • McNicholas, P. D. (2016). Mixture model-based classification. CRC Press.


  • Melnykov, Y., Zhu, X., & Melnykov, V. (2021). Transformation mixture modeling for skewed data groups with heavy tails and scatter. Computational Statistics, 36, 61–78.


  • Morris, K., Punzo, A., McNicholas, P., & Browne, R. (2019). Asymmetric clusters and outliers: Mixtures of multivariate contaminated shifted asymmetric Laplace distributions. Computational Statistics & Data Analysis, 132, 145–166.


  • Peel, D., & McLachlan, G. (2000). Robust mixture modelling using the \(t\) distribution. Statistics and Computing, 10, 339–348.


  • Pewsey, A. (2006). Modelling asymmetrically distributed circular data using the wrapped skew-normal distribution. Environmental and Ecological Statistics, 13, 257–269.


  • Punzo, A., & Maruotti, A. (2016). Clustering multivariate longitudinal observations: The contaminated Gaussian hidden Markov model. Journal of Computational and Graphical Statistics, 25, 1097–1116.


  • Punzo, A., & McNicholas, P. D. (2016). Parsimonious mixtures of multivariate contaminated normal distributions. Biometrical Journal, 58, 1506–1537.


  • Punzo, A., & Tortora, C. (2019). Multiple scaled contaminated normal distribution and its application in clustering. Statistical Modelling, 21, 332–358.


  • Rad, N. N., Bekker, A., and Arashi, M. (2020), A unified model for skewed circular data, in 2020 IEEE 23rd International Conference on Information Fusion (FUSION), IEEE, pp. 1–6.

  • Ritter, G. (2015). Robust cluster analysis and variable selection (Vol. 137). Boca Raton, FL: CRC Press.


  • Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.


  • Shoji, T. (2006). Statistical and geostatistical analysis of wind: A case study of direction statistics. Computers & Geosciences, 32, 1025–1039.


  • Sibson, R. (1973). SLINK: An optimally efficient algorithm for the single-link cluster method. The Computer Journal, 16, 30–34.


  • Sokal, R., & Michener, C. (1958). A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin, 38, 1409–1438.


  • Sorensen, T. (1948). A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. Biologiske Skrifter, 5, 1–34.


  • Spielman, D. and Teng, S. (1996), Spectral partitioning works: Planar graphs and finite element meshes, in 37th Annual Symposium on Foundations of Computer Science, IEEE Comput. Soc. Press, pp. 96–105.

  • Sra, S. (2016), Directional statistics in machine learning: A brief review, ArXiv:1605.00316.

  • Tomarchio, S. D., Gallaugher, M. P. B., Punzo, A., and McNicholas, P. D. (2022), Mixtures of matrix-variate contaminated normal distributions, Journal of Computational and Graphical Statistics, 1–9.

  • Vrbik, I., & McNicholas, P. D. (2012). Analytic calculations for the EM algorithm for multivariate skew-\(t\) mixture models. Statistics & Probability Letters, 82, 1169–1174.


  • Watson, G. S., & Williams, E. J. (1956). On the construction of significance tests on the circle and the sphere. Biometrika, 43, 344–352.


  • Zhang, J., & Liang, F. (2010). Robust clustering using exponential power mixtures. Biometrics, 66, 1078–1086.


  • Zhe, X., Chen, S., & Yan, H. (2019). Directional statistics-based deep metric learning for image classification and retrieval. Pattern Recognition, 93, 113–123.



Author information


Corresponding author

Correspondence to Volodymyr Melnykov.

Ethics declarations

Ethical Approval

The submitted work is original and has not been published anywhere in any form or language and is not under consideration by another journal.

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

1.1 M-Step Derivation

Based on the Q-function provided in (3), closed-form expressions for parameters \(\pi _g\), \(\delta _g\), and \(\varvec{\mu }_g\) can be obtained. Parameters \(\kappa _g\) and \(\lambda _g\) need to be estimated numerically. For parameter \(\kappa _g\), we obtain

$$\begin{aligned} \frac{\partial Q(\kappa _g)}{\partial \kappa _g} = \sum _{i=1}^{n} \tau ^{(s)}_{ig} \bigg [v_{i|g}^{(s)} \frac{c_p^{\prime } (\kappa _g)}{c_p(\kappa _g)} + v^{(s)}_{i|g} \varvec{\mu }_g^{(s)\top } \varvec{x}_i + (1-v^{(s)}_{i|g}) \frac{c_p^\prime (\lambda _g^{(s-1)} \kappa _g) }{c_p(\lambda ^{(s-1)}_g \kappa _g)} + (1-v^{(s)}_{i|g}) \lambda _g^{(s-1)} \varvec{\mu }_g^{(s) \top } \varvec{x}_i\bigg ]. \end{aligned}$$
(10)

As \(c_p(\kappa _g) = (2\pi )^{-\frac{p}{2}} \kappa _g^{\frac{p}{2} - 1} I^{-1}_{\frac{p}{2}-1}(\kappa _g)\), by denoting \(r=\frac{p}{2}-1\) we obtain \(c_p(\kappa _g) = \frac{\kappa _g^r}{(2\pi )^{r+1} I_r(\kappa _g)}\) and \(c_p^{\prime } (\kappa _g) = \frac{r \kappa _g^{r-1} I_r(\kappa _g) - \kappa _g^r I_r^{\prime }(\kappa _g)}{(2\pi )^{r+1} I^2_r(\kappa _g)}\). Using the identity \(I_r^{\prime } (\kappa _g) = \frac{r}{\kappa _g} I_r (\kappa _g) + I_{r+1} (\kappa _g)\), we obtain

$$\begin{aligned} \frac{c_p^{\prime } (\kappa _g)}{c_p(\kappa _g)} = \frac{\big [r\kappa _g^{r-1} I_r(\kappa _g) - \kappa _g^r (\frac{r}{\kappa _g} I_r(\kappa _g) + I_{r+1}(\kappa _g))\big ] (2\pi )^{r+1} I_r(\kappa _g)}{(2\pi )^{r+1} I^2_r(\kappa _g) \kappa _g^r} = -\frac{I_{r+1}(\kappa _g)}{I_r(\kappa _g)}. \end{aligned}$$
(11)
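The Bessel identity used in this simplification is easy to check numerically. The following is a minimal sketch, not part of the paper: the series-based `bessel_i` helper and the sample values of \(r\) and \(\kappa \) are our own, chosen only for illustration. It compares a central finite difference of \(I_r\) with the right-hand side of the recurrence \(I_r^{\prime }(\kappa ) = \frac{r}{\kappa } I_r(\kappa ) + I_{r+1}(\kappa )\).

```python
import math

def bessel_i(r, x, terms=60):
    # Modified Bessel function of the first kind, I_r(x), via its power series:
    # I_r(x) = sum_j (x/2)^(2j+r) / (j! * Gamma(j + r + 1)).
    return sum((x / 2) ** (2 * j + r) / (math.factorial(j) * math.gamma(j + r + 1))
               for j in range(terms))

# Arbitrary order and concentration, for illustration only.
r, kappa, h = 1.5, 2.0, 1e-6

# Central finite difference approximation of I_r'(kappa).
numeric = (bessel_i(r, kappa + h) - bessel_i(r, kappa - h)) / (2 * h)

# Right-hand side of the recurrence: (r/kappa) I_r(kappa) + I_{r+1}(kappa).
identity = (r / kappa) * bessel_i(r, kappa) + bessel_i(r + 1, kappa)

print(abs(numeric - identity) < 1e-6)  # True: the recurrence holds
```

The same helper also confirms the ratio result in (11), since \(c_p^{\prime }/c_p\) reduces to \(-I_{r+1}/I_r\) once the \((2\pi )^{r+1}\) and power terms cancel.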

Similarly, it follows that

$$\begin{aligned} \frac{c_p^\prime (\lambda _g^{(s-1)} \kappa _g) }{c_p(\lambda _g^{(s-1)} \kappa _g)} = -\lambda _g^{(s-1)} \frac{I_{r+1}(\lambda _g^{(s-1)} \kappa _g)}{I_r(\lambda _g^{(s-1)} \kappa _g)}. \end{aligned}$$
(12)

Employing the Newton–Raphson method for the maximization of the Q-function with respect to \(\kappa _g\), we have \(\kappa _g^{(s)[j]} = \kappa _g^{(s)[j-1]}-\frac{U(\kappa _g^{(s)[j-1]})}{U^\prime (\kappa _g^{(s)[j-1]})}\), where j is the iteration number of the Newton–Raphson algorithm and \(U(\kappa _g)=\frac{\partial Q(\kappa _g)}{\partial \kappa _g}\) is given in (10). We proceed to obtain the derivative of \(U(\kappa _g)\) as follows:

$$\begin{aligned} U^{\prime } (\kappa _g) = \sum _{i=1}^{n} \tau _{ig}^{(s)} v_{i|g}^{(s)} \frac{\partial }{\partial \kappa _g}\bigg (-\frac{I_{r+1}(\kappa _g)}{I_r(\kappa _g)}\bigg ) -\lambda _g^{(s-1)} \sum _{i=1}^{n} \tau _{ig}^{(s)} (1-v_{i|g}^{(s)}) \frac{\partial }{\partial \kappa _g}\bigg (\frac{I_{r+1}(\lambda _g^{(s-1)} \kappa _g)}{I_r(\lambda _g^{(s-1)} \kappa _g)}\bigg ), \end{aligned}$$
(13)

where

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial \kappa _g}\bigg (\frac{I_{r+1}(\kappa _g)}{I_r(\kappa _g)}\bigg )&= \frac{[\frac{r+1}{\kappa _g} I_{r+1}(\kappa _g) + I_{r+2}(\kappa _g)] I_r(\kappa _g) - I_{r+1}(\kappa _g) [\frac{r}{\kappa _g} I_r(\kappa _g) + I_{r+1}(\kappa _g)]}{I_r^2(\kappa _g)} \\&= \frac{I_{r+2}(\kappa _g)}{I_r(\kappa _g)} + \frac{1}{\kappa _g} \frac{I_{r+1}(\kappa _g)}{I_r(\kappa _g)} - \frac{I_{r+1}^2(\kappa _g)}{I_r^2(\kappa _g)} \end{aligned} \end{aligned}$$
(14)

and

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial \kappa _g}&\bigg (\frac{I_{r+1}(\lambda _g^{(s-1)} \kappa _g)}{I_r(\lambda _g^{(s-1)} \kappa _g)}\bigg ) = \frac{ \frac{\partial I_{r+1}(\lambda _g^{(s-1)} \kappa _g)}{\partial \kappa _g} I_r(\lambda _g^{(s-1)} \kappa _g) - I_{r+1}(\lambda _g^{(s-1)} \kappa _g) \frac{\partial I_r(\lambda _g^{(s-1)} \kappa _g)}{\partial \kappa _g}}{I_r^2(\lambda _g^{(s-1)} \kappa _g)} \\&= \frac{\big [\frac{r+1}{\lambda _g^{(s-1)} \kappa _g} I_{r+1}(\lambda _g^{(s-1)} \kappa _g) + I_{r+2}(\lambda _g^{(s-1)} \kappa _g)\big ] \lambda _g^{(s-1)} I_r(\lambda _g^{(s-1)} \kappa _g)}{I_r^2(\lambda _g^{(s-1)} \kappa _g)} \\&\quad - \frac{I_{r+1}(\lambda _g^{(s-1)} \kappa _g) \big [\frac{r}{\lambda _g^{(s-1)} \kappa _g} I_r(\lambda _g^{(s-1)} \kappa _g) + I_{r+1}(\lambda _g^{(s-1)} \kappa _g)\big ] \lambda _g^{(s-1)}}{I_r^2(\lambda _g^{(s-1)} \kappa _g)}\\&= \lambda _g^{(s-1)} \frac{I_{r+2}(\lambda _g^{(s-1)} \kappa _g)}{I_r(\lambda _g^{(s-1)} \kappa _g)} + \frac{1}{\kappa _g} \frac{I_{r+1}(\lambda _g^{(s-1)} \kappa _g)}{I_r(\lambda _g^{(s-1)} \kappa _g)}- \lambda _g^{(s-1)} \frac{I_{r+1}^2(\lambda _g^{(s-1)} \kappa _g)}{I_r^2(\lambda _g^{(s-1)} \kappa _g)}. \end{aligned} \end{aligned}$$
(15)

Substituting (11)-(15) into (10) yields the result presented in (5) and (6).
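The Newton–Raphson update above can be sketched in a few lines. The code below is a minimal self-contained illustration, not the authors' implementation: it applies the same update \(\kappa ^{[j]} = \kappa ^{[j-1]} - U/U^{\prime }\) to a simplified score \(U(\kappa ) = A_r(\kappa ) - t\), where \(A_r = I_{r+1}/I_r\) is the Bessel ratio from (11) and \(A_r^{\prime }\) follows (14). The series-based `bessel_i` helper and the target value \(t\) are our own assumptions.

```python
import math

def bessel_i(r, x, terms=60):
    # Modified Bessel function of the first kind via its power series.
    return sum((x / 2) ** (2 * j + r) / (math.factorial(j) * math.gamma(j + r + 1))
               for j in range(terms))

def A(r, kappa):
    # Bessel ratio A_r(kappa) = I_{r+1}(kappa) / I_r(kappa), as in (11).
    return bessel_i(r + 1, kappa) / bessel_i(r, kappa)

def A_prime(r, kappa):
    # Derivative of the ratio from (14):
    # A_r'(kappa) = I_{r+2}/I_r + A_r/kappa - A_r^2.
    a = A(r, kappa)
    return bessel_i(r + 2, kappa) / bessel_i(r, kappa) + a / kappa - a * a

def newton_kappa(r, target, kappa0=1.0, tol=1e-10, max_iter=100):
    # kappa^[j] = kappa^[j-1] - U(kappa) / U'(kappa), with U(kappa) = A_r(kappa) - target.
    kappa = kappa0
    for _ in range(max_iter):
        step = (A(r, kappa) - target) / A_prime(r, kappa)
        kappa -= step
        if abs(step) < tol:
            break
    return kappa

# Recover a known concentration: for r = 1/2 (i.e., p = 3) and kappa = 3,
# the iteration should return kappa_hat close to 3.
kappa_hat = newton_kappa(0.5, A(0.5, 3.0))
print(round(kappa_hat, 6))  # 3.0
```

Since \(A_r(\kappa )\) is increasing in \(\kappa \), the iteration converges quickly from a moderate starting value; in the full M-step the score \(U\) additionally carries the \(\tau _{ig}^{(s)}\) and \(v_{i|g}^{(s)}\) weights from (10).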

Similarly, for \(\lambda _g\) we have

$$\begin{aligned} \frac{\partial Q(\lambda _g)}{\partial \lambda _g} = \sum _{i=1}^{n} \tau _{ig}^{(s)} (1-v_{i|g}^{(s)})\bigg [\frac{1}{c_p(\lambda _g \kappa _g^{(s)})}\frac{\partial c_p(\lambda _g \kappa _g^{(s)}) }{\partial \lambda _g } + \kappa _g^{(s)} \varvec{\mu }_g^{(s) \top } \varvec{x}_i\bigg ]. \end{aligned}$$

Now, noticing that \(\frac{1}{c_p(\lambda _g \kappa _g^{(s)})}\frac{\partial c_p(\lambda _g \kappa _g^{(s)}) }{\partial \lambda _g } = -\kappa _g^{(s)} \frac{I_{r+1}(\lambda _g \kappa _g^{(s)})}{I_r (\lambda _g \kappa _g^{(s)})}\) and making use of

$$\begin{aligned} \frac{\partial }{\partial \lambda _g}\bigg (\frac{I_{r+1}(\lambda _g \kappa _g^{(s)})}{I_r(\lambda _g \kappa _g^{(s)})}\bigg ) = \kappa _g^{(s)} \frac{I_{r+2}(\lambda _g \kappa _g^{(s)})}{I_r(\lambda _g \kappa _g^{(s)})} + \frac{1}{\lambda _g} \frac{I_{r+1}(\lambda _g \kappa _g^{(s)})}{I_r(\lambda _g \kappa _g^{(s)})}- \kappa _g^{(s)} \frac{I_{r+1}^2(\lambda _g \kappa _g^{(s)})}{I_r^2(\lambda _g \kappa _g^{(s)})}, \end{aligned}$$

we obtain (7) and (8) for \(V(\lambda _g)=\frac{\partial Q(\lambda _g)}{\partial \lambda _g}\).

1.2 \(c_p(\kappa )\) Approximation

Let \(H_p(\kappa )=\frac{1}{c_{\frac{p}{2}-1}(\kappa )}\). Then, based on the Amos-type bounds, we obtain

$$\begin{aligned} S_{p+\frac{1}{2},p+\frac{3}{2}} (\kappa ) \le \log (H_{p}(\kappa )) \le \min (S_{p,p+2}(\kappa ), S_{p+\frac{1}{2},\sqrt{(p+\frac{1}{2})(p+\frac{3}{2})}}(\kappa )), \end{aligned}$$

where \(S_{\alpha ,\beta }(\kappa )=\sqrt{\kappa ^2+\beta ^2}-\alpha \log (\alpha +\sqrt{\kappa ^2+\beta ^2})-\beta +\alpha \log (\alpha +\beta )\). Let

$$\begin{aligned} L_{p}(\kappa )=S_{p+\frac{1}{2}, \sqrt{(p+\frac{1}{2})(p+\frac{3}{2})}}(\kappa ) + S_{p,p+2}(\min (\kappa ,\kappa _{p})) - S_{p+\frac{1}{2}, \sqrt{(p+\frac{1}{2})(p+\frac{3}{2})}}(\min (\kappa ,\kappa _{p})) \end{aligned}$$
(16)

denote an approximation for \(\log (H_{p}(\kappa ))\), where \(\kappa _{p}=\sqrt{(3p+\frac{11}{2})(p+\frac{3}{2})}\). First, calculate \(L_{p}(\kappa )\) based on (16). Then choose a threshold value \(\theta =700\) such that \(e^{\theta }\) does not overflow and \(e^{-\theta }\) does not underflow. If \(L_{p}(\kappa ) \le \theta -\frac{1}{2}\), calculate the logarithm of \(H_{p}(\kappa )\) directly. If \(\theta - \frac{1}{2} < L_{p}(\kappa ) \le 2\theta -1\), use the approximation

$$\begin{aligned} \log H_{p}(\kappa )=\frac{L_{p}(\kappa )}{2} + \log \Big (\sum _{j=0}^{\infty } \frac{e^{\frac{-L_{p}(\kappa )}{2}} \Gamma (p+1) }{\Gamma (p+1+j)} \frac{(\frac{\kappa ^2}{4})^j}{j!}\Big ). \end{aligned}$$

Otherwise, use \(L_{p}(\kappa )\) as an approximation for \(\log (H_{p}(\kappa ))\). The reader is referred to Hornik and Grün (2014) for more details.
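The bounds and the approximation above can be illustrated numerically. The sketch below is our own, not from the paper: \(H_p(\kappa )\) is evaluated directly from the series appearing in the previous display, and the values of \(p\) and \(\kappa \) are arbitrary. It checks that \(\log (H_p(\kappa ))\) is sandwiched by the Amos-type bounds and that \(L_p(\kappa )\) from (16) is close for moderate \(\kappa \).

```python
import math

def S(alpha, beta, kappa):
    # S_{alpha,beta}(kappa) from the Amos-type bounds.
    t = math.sqrt(kappa ** 2 + beta ** 2)
    return t - alpha * math.log(alpha + t) - beta + alpha * math.log(alpha + beta)

def log_H(p, kappa, terms=80):
    # Direct evaluation of log(H_p(kappa)) via the series
    # H_p(kappa) = sum_j Gamma(p+1)/Gamma(p+1+j) * (kappa^2/4)^j / j!,
    # safe only in the regime where H_p(kappa) does not overflow.
    total = sum(math.exp(math.lgamma(p + 1) - math.lgamma(p + 1 + j))
                * (kappa ** 2 / 4) ** j / math.factorial(j)
                for j in range(terms))
    return math.log(total)

def L(p, kappa):
    # The approximation L_p(kappa) from (16).
    beta = math.sqrt((p + 0.5) * (p + 1.5))
    kappa_p = math.sqrt((3 * p + 5.5) * (p + 1.5))
    m = min(kappa, kappa_p)
    return S(p + 0.5, beta, kappa) + S(p, p + 2, m) - S(p + 0.5, beta, m)

p, kappa = 1.0, 2.0  # arbitrary illustration values
lower = S(p + 0.5, p + 1.5, kappa)
upper = min(S(p, p + 2, kappa),
            S(p + 0.5, math.sqrt((p + 0.5) * (p + 1.5)), kappa))

print(lower <= log_H(p, kappa) <= upper)        # True: the bounds sandwich log(H_p)
print(abs(L(p, kappa) - log_H(p, kappa)) < 0.01)  # True: L_p is a close approximation here
```

For large \(\kappa \), where the direct series overflows, one would switch to \(L_p(\kappa )\) itself or to the scaled series above, exactly as the three-regime procedure prescribes.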

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, Y., Melnykov, V. & Melnykov, I. On Model-Based Clustering of Directional Data with Heavy Tails. J Classif 40, 527–551 (2023). https://doi.org/10.1007/s00357-023-09445-z
