
Regularized semi-supervised KLFDA algorithm based on density peak clustering

  • Original Article
  • Neural Computing and Applications

Abstract

To address the problem that existing semi-supervised Fisher discriminant analysis (FDA) algorithms cannot effectively exploit both labeled and unlabeled data for learning, we propose a semi-supervised kernel local FDA algorithm based on density peak clustering pseudo-labels (SDPCKLFDA). First, the proposed algorithm adopts density peak clustering to generate pseudo cluster labels for both labeled and unlabeled data. The generated pseudo-labels are then used to construct two regularization strategies, which regularize the within-class and between-class scatter matrices of local Fisher discriminant analysis; finally, the optimal projection vectors are obtained by solving the local Fisher discriminant analysis objective. The two regularization strategies not only enhance the discriminant power of the extracted features but also make the proposed algorithm suitable for multimodal and noisy data. In addition, to accommodate nonlinear and non-Gaussian datasets, we develop a kernel version of the proposed algorithm via the kernel trick. In experiments on benchmark artificial datasets and UCI datasets, the proposed algorithm is compared with FDA and its improved variants; the results show that its discriminant performance is significantly better than that of the competing algorithms.
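The first stage of the pipeline described above, generating pseudo cluster labels by density peak clustering, can be illustrated with a toy sketch. This is not the authors' implementation: the function name, the cutoff-density estimate, and all parameters are illustrative assumptions following the general density-peak scheme (peaks have high local density and are far from any denser point; the remaining points inherit the label of their nearest denser neighbor):

```python
import numpy as np

def density_peak_pseudo_labels(X, n_clusters=2, dc=2.0):
    """Toy density-peak clustering: rho_i = #neighbors within cutoff dc,
    delta_i = distance to the nearest denser point; the points with the
    largest rho_i * delta_i become cluster centers, and every other point
    inherits the label of its nearest denser neighbor."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1           # exclude the point itself
    order = np.argsort(-rho, kind="stable")  # decreasing density, ties by index
    delta = np.empty(n)
    nn = np.empty(n, dtype=int)
    for rank, i in enumerate(order):
        if rank == 0:                        # densest point: no denser neighbor
            delta[i], nn[i] = D[i].max(), i
        else:
            denser = order[:rank]
            j = denser[np.argmin(D[i, denser])]
            delta[i], nn[i] = D[i, j], j
    centers = np.argsort(rho * delta)[-n_clusters:]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                          # densest first, so nn[i] is labeled
        if labels[i] < 0:
            labels[i] = labels[nn[i]]
    return labels

# two well-separated blobs -> the pseudo-labels recover the blob structure
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(10, 2)),
               rng.normal(10.0, 0.3, size=(10, 2))])
pseudo = density_peak_pseudo_labels(X, n_clusters=2)
```

In the full algorithm, pseudo-labels of this kind would then drive the regularization of the within-class and between-class scatter matrices before the (kernelized) LFDA eigenproblem is solved.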




Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 62176050) and the Innovative Talent Fund of Harbin Science and Technology Bureau, China (No. 2017RAXXJ018). The authors are grateful to the anonymous reviewers for their valuable comments and suggestions, which were very helpful in improving the quality and presentation of this paper.

Funding

The funding was provided by the National Natural Science Foundation of China No. 62176050 and the Fundamental Research Funds for the Central Universities (No. 2572017EB02).

Author information

Authors and Affiliations

Authors

Contributions

Xinmin Tao: Methodology, Software, Writing—original draft, Supervision. Yixuan Bao: Writing—review & editing. Xiaohan Zhang: Conceptualization. Tian Liang: Validation. Lin Qi: Software. Zhiting Fan: Visualization. Shan Huang: Visualization.

Corresponding author

Correspondence to Xinmin Tao.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

A. Deduction:

$$\sum_{i=1}^{n}\left({{\varvec{x}}}_{i}-{\varvec{\mu}}\right){\left({{\varvec{x}}}_{i}-{\varvec{\mu}}\right)}^{\mathrm{T}}=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}$$

Proof:

$$\sum_{i=1}^{n}\left({{\varvec{x}}}_{i}-{\varvec{\mu}}\right){\left({{\varvec{x}}}_{i}-{\varvec{\mu}}\right)}^{\mathrm{T}}=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{\varvec{\mu}}}^{\mathrm{T}}-\sum_{i=1}^{n}{\varvec{\mu}}{{{\varvec{x}}}_{i}}^{\mathrm{T}}+n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}$$
$$=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-\left(\sum_{i=1}^{n}{{\varvec{x}}}_{i}\right){{\varvec{\mu}}}^{\mathrm{T}}-{\varvec{\mu}}\left(\sum_{i=1}^{n}{{{\varvec{x}}}_{i}}^{\mathrm{T}}\right)+n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}$$

Since \(\sum_{i=1}^{n}{{\varvec{x}}}_{i}=n{\varvec{\mu}}\):

$$=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}+n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}$$

Moreover, the same quantity admits a pairwise form:

$$\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}=\frac{1}{n}\sum_{i,j=1}^{n}\left({{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-{{\varvec{x}}}_{i}{{{\varvec{x}}}_{j}}^{\mathrm{T}}\right)$$
$$=\frac{1}{2n}\sum_{i,j=1}^{n}\left({{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}+{{\varvec{x}}}_{j}{{{\varvec{x}}}_{j}}^{\mathrm{T}}-{{\varvec{x}}}_{i}{{{\varvec{x}}}_{j}}^{\mathrm{T}}-{{\varvec{x}}}_{j}{{{\varvec{x}}}_{i}}^{\mathrm{T}}\right)$$
$$=\frac{1}{2n}\sum_{i,j=1}^{n}\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right){\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right)}^{\mathrm{T}}$$
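Both identities above (the centered-scatter form and the pairwise form) can be sanity-checked numerically with NumPy on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # rows are the samples x_i
n = len(X)
mu = X.mean(axis=0)

lhs = (X - mu).T @ (X - mu)             # sum_i (x_i - mu)(x_i - mu)^T
rhs = X.T @ X - n * np.outer(mu, mu)    # sum_i x_i x_i^T - n mu mu^T

# pairwise form: (1/(2n)) sum_{i,j} (x_i - x_j)(x_i - x_j)^T
diff = X[:, None, :] - X[None, :, :]
pairwise = np.einsum('ijk,ijl->kl', diff, diff) / (2 * n)

print(np.allclose(lhs, rhs), np.allclose(lhs, pairwise))  # prints: True True
```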

B. Deduction:

$${S}^{(t)}={S}^{(b)}+{S}^{(w)}$$

where

$${S}^{(t)}=\sum_{i=1}^{n}\left({{\varvec{x}}}_{i}-{\varvec{\mu}}\right){\left({{\varvec{x}}}_{i}-{\varvec{\mu}}\right)}^{\mathrm{T}}=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}$$
$${S}^{(w)}=\sum_{j=1}^{c}\left(\sum_{i=1}^{{n}_{j}}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-{n}_{j}{{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}\right)=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}$$
$${S}^{(b)}=\sum_{j=1}^{c}{n}_{j}\left({{\varvec{\mu}}}_{j}-{\varvec{\mu}}\right){\left({{\varvec{\mu}}}_{j}-{\varvec{\mu}}\right)}^{\mathrm{T}}$$

(the inner sum in \({S}^{(w)}\) runs over the \({n}_{j}\) samples of class \(j\)).

Proof: expanding \({S}^{(b)}\),

$${S}^{(b)}=\sum_{j=1}^{c}{n}_{j}\left({{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}-{{\varvec{\mu}}}_{j}{{\varvec{\mu}}}^{\mathrm{T}}-{\varvec{\mu}}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}+{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}\right)$$
$$=\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}-\left(\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}\right){{\varvec{\mu}}}^{\mathrm{T}}-{\varvec{\mu}}\left(\sum_{j=1}^{c}{n}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}\right)+n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}$$

Since \(\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}=n{\varvec{\mu}}\):

$$=\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}+n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}=\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}$$

so:

$${S}^{(b)}+{S}^{(w)}=\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}+\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-\sum_{j=1}^{c}{n}_{j}{{\varvec{\mu}}}_{j}{{{\varvec{\mu}}}_{j}}^{\mathrm{T}}=\sum_{i=1}^{n}{{\varvec{x}}}_{i}{{{\varvec{x}}}_{i}}^{\mathrm{T}}-n{\varvec{\mu}}{{\varvec{\mu}}}^{\mathrm{T}}={S}^{(t)}$$
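The decomposition \(S^{(t)}=S^{(b)}+S^{(w)}\) can likewise be verified numerically on random labeled data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
y = rng.integers(0, 3, size=60)          # 3 classes
n, mu = len(X), X.mean(axis=0)

# total scatter S^(t)
St = (X - mu).T @ (X - mu)

# within-class scatter S^(w): centered scatter of each class, summed
Sw = sum((X[y == c] - X[y == c].mean(axis=0)).T
         @ (X[y == c] - X[y == c].mean(axis=0)) for c in np.unique(y))

# between-class scatter S^(b): n_j (mu_j - mu)(mu_j - mu)^T, summed
Sb = sum((y == c).sum() * np.outer(X[y == c].mean(axis=0) - mu,
                                   X[y == c].mean(axis=0) - mu)
         for c in np.unique(y))

print(np.allclose(St, Sb + Sw))  # prints: True
```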

C. Deduction

$${S}^{(w)}=\frac{1}{2}\sum_{i,j=1}^{n}{W}_{i,j}^{w}\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right){\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right)}^{\mathrm{T}},\quad {W}_{i,j}^{w}=\left\{\begin{array}{ll}1/{n}_{{y}_{i}} & {y}_{i}={y}_{j}\\ 0 & {y}_{i}\ne {y}_{j}\end{array}\right.$$

This follows by applying Deduction A within each class: \({S}^{(w)}=\sum_{j=1}^{c}\frac{1}{2{n}_{j}}\sum_{i,{i}{^{\prime}}\in \mathrm{class}\,j}\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{{i}{^{\prime}}}\right){\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{{i}{^{\prime}}}\right)}^{\mathrm{T}}\), which is exactly the weighted pairwise sum above, since \({W}_{i,j}^{w}\) already carries the \(1/{n}_{{y}_{i}}\) factor and vanishes for cross-class pairs.

D. Deduction

$${S}^{(b)}=\frac{1}{2}\sum_{i,j=1}^{n}{W}_{i,j}^{b}\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right){\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right)}^{\mathrm{T}},\quad {W}_{i,j}^{b}=\left\{\begin{array}{ll}1/n-1/{n}_{{y}_{i}} & {y}_{i}={y}_{j}\\ 1/n & {y}_{i}\ne {y}_{j}\end{array}\right.$$

By Deduction A, \({S}^{(t)}=\frac{1}{2}\sum_{i,j=1}^{n}\frac{1}{n}\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right){\left({{\varvec{x}}}_{i}-{{\varvec{x}}}_{j}\right)}^{\mathrm{T}}\); combining \({S}^{(b)}={S}^{(t)}-{S}^{(w)}\) with Deduction C gives \({W}_{i,j}^{b}=1/n-{W}_{i,j}^{w}\), which is the weight matrix above.
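Assuming the standard LFDA pairwise weights (same-class between-class weight \(1/n-1/{n}_{{y}_{i}}\), i.e. exactly the weights that make \(S^{(t)}=S^{(b)}+S^{(w)}\) hold), both pairwise forms can be checked against the direct scatter-matrix definitions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = rng.integers(0, 2, size=40)
n = len(X)
n_y = np.array([(y == y[i]).sum() for i in range(n)])   # n_{y_i} per sample

same = (y[:, None] == y[None, :])
Ww = np.where(same, 1.0 / n_y[:, None], 0.0)  # W^w_ij = 1/n_{y_i} if same class
Wb = 1.0 / n - Ww                             # W^b_ij = 1/n - W^w_ij

# pairwise forms: (1/2) sum_{i,j} W_ij (x_i - x_j)(x_i - x_j)^T
diff = X[:, None, :] - X[None, :, :]
Sw_pair = 0.5 * np.einsum('ij,ijk,ijl->kl', Ww, diff, diff)
Sb_pair = 0.5 * np.einsum('ij,ijk,ijl->kl', Wb, diff, diff)

# direct definitions of S^(w) and S^(b)
mu = X.mean(axis=0)
Sw = sum((X[y == c] - X[y == c].mean(axis=0)).T
         @ (X[y == c] - X[y == c].mean(axis=0)) for c in np.unique(y))
Sb = sum((y == c).sum() * np.outer(X[y == c].mean(axis=0) - mu,
                                   X[y == c].mean(axis=0) - mu)
         for c in np.unique(y))

print(np.allclose(Sw, Sw_pair), np.allclose(Sb, Sb_pair))  # prints: True True
```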

Rights and permissions

Reprints and permissions

About this article


Cite this article

Tao, X., Bao, Y., Zhang, X. et al. Regularized semi-supervised KLFDA algorithm based on density peak clustering. Neural Comput & Applic 34, 19791–19817 (2022). https://doi.org/10.1007/s00521-022-07495-9

