DOI: 10.1145/3654823.3654892
Research Article

Parameter Selection of Robust Weighted SCAD Model Under Density Power Divergence

Published: 29 May 2024

Abstract

Logistic regression is an important classification model in regression analysis, but it copes poorly with contaminated data. To address this problem, this paper robustly extends the maximum-likelihood-based logistic loss function and constructs a logistic DPD loss function in the form of a density power divergence. In existing research on regression models related to density power divergence, the penalty terms are all non-concave penalties. This paper uses the non-convex SCAD function as the penalty term to build a new DPD-based logistic regression model, and gives a theoretical proof of the model's robustness under contaminated data. Through simulation studies, we confirm that the SCAD model under density power divergence handles contaminated data well, and we identify a suitable method for selecting the model's important parameters.
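The abstract does not reproduce the model's formulas, so the following is a sketch based on the standard forms in the DPD and SCAD literature; the paper's weighted variant may differ in detail. For a logistic model with success probability \pi_i(\beta) = 1/(1 + e^{-x_i^\top \beta}) and responses y_i \in \{0, 1\}, the empirical DPD loss with robustness parameter \alpha > 0 is

L_\alpha(\beta) = \frac{1}{n} \sum_{i=1}^{n} \Big[ \pi_i^{1+\alpha} + (1-\pi_i)^{1+\alpha} - \big(1 + \tfrac{1}{\alpha}\big)\big( y_i \pi_i^{\alpha} + (1-y_i)(1-\pi_i)^{\alpha} \big) \Big],

which recovers the negative log-likelihood (up to additive constants) as \alpha \to 0 and increasingly downweights outlying observations as \alpha grows. The SCAD penalty, with a > 2 (commonly a = 3.7), is

p_\lambda(t) = \begin{cases} \lambda t, & 0 \le t \le \lambda, \\ \dfrac{2a\lambda t - t^2 - \lambda^2}{2(a-1)}, & \lambda < t \le a\lambda, \\ \dfrac{\lambda^2(a+1)}{2}, & t > a\lambda. \end{cases}

The minimal Python sketch below evaluates the penalized objective L_\alpha(\beta) + \sum_j w_j p_\lambda(|\beta_j|). The per-coefficient weights w_j stand in for the paper's "weighted SCAD" term and are an assumption, since the abstract does not specify them.

import numpy as np

def dpd_logistic_loss(beta, X, y, alpha):
    # Empirical DPD loss for logistic regression (alpha > 0);
    # approaches the negative log-likelihood as alpha -> 0.
    pi = 1.0 / (1.0 + np.exp(-X @ beta))
    pi = np.clip(pi, 1e-12, 1.0 - 1e-12)  # numerical safety at the boundary
    term1 = pi ** (1.0 + alpha) + (1.0 - pi) ** (1.0 + alpha)
    term2 = (1.0 + 1.0 / alpha) * (y * pi ** alpha + (1.0 - y) * (1.0 - pi) ** alpha)
    return np.mean(term1 - term2)

def scad_penalty(t, lam, a=3.7):
    # SCAD penalty evaluated coordinate-wise at |t| (a > 2).
    t = np.abs(t)
    mid = (2.0 * a * lam * t - t ** 2 - lam ** 2) / (2.0 * (a - 1.0))
    flat = lam ** 2 * (a + 1.0) / 2.0
    return np.where(t <= lam, lam * t, np.where(t <= a * lam, mid, flat))

def penalized_objective(beta, X, y, alpha, lam, weights=None):
    # 'weights' models the weighted SCAD term; the per-coefficient
    # weights are an assumption, not given in the abstract.
    w = np.ones_like(beta) if weights is None else weights
    return dpd_logistic_loss(beta, X, y, alpha) + np.sum(w * scad_penalty(beta, lam))

# Example: evaluate the objective on toy data over a grid of (alpha, lambda),
# the two parameters whose selection the paper studies.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([1.5, -2.0, 0.0, 0.0, 0.0])
y = (rng.random(100) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
for alpha in (0.1, 0.3, 0.5):
    for lam in (0.05, 0.1):
        val = penalized_objective(beta_true, X, y, alpha, lam)
        print(f"alpha={alpha}, lambda={lam}: objective={val:.4f}")

In this formulation the important parameters to select are \alpha (the robustness/efficiency trade-off) and \lambda (the sparsity level), with a typically fixed at 3.7; the author tags suggest that the paper's selection method is driven by an information criterion.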



Published In

CACML '24: Proceedings of the 2024 3rd Asia Conference on Algorithms, Computing and Machine Learning
March 2024
478 pages
ISBN: 9798400716416
DOI: 10.1145/3654823
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. Density Power Divergence
  2. Information Criterion
  3. Parameter Selection
  4. Robustness
  5. SCAD

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CACML 2024

Acceptance Rates

Overall acceptance rate: 93 of 241 submissions (39%)
