
A Cost-Sensitive Loss Function for Machine Learning

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10829)

Abstract

In training machine learning models, loss functions are commonly used to judge model quality and capability. Traditional loss functions usually neglect cost sensitivity across different error intervals, even though such sensitivity plays an important role for many models. This paper proposes a cost-sensitive loss function based on an interval error evaluation method (IEEM). Using the key points of grade-structured intervals, two construction methods are proposed: a piecewise function that links the key points, and a curve function fitted through the key points. The proposed loss function was evaluated against three other loss functions using a BP neural network. The comparison results show that the IEEM-based loss function gave the best prediction of the PM2.5 air-quality grade in Guangzhou, China.
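The two construction methods described above can be sketched in code. The key-point values and the cubic fit below are illustrative assumptions, not the paper's actual IEEM key points or fitting procedure: a piecewise-linear loss interpolates the cost between consecutive key points, while a curve loss fits one smooth polynomial through all of them.

```python
import numpy as np

# Hypothetical key points of grade-structured intervals: the absolute
# prediction error at each grade boundary and the cost assigned there.
# (Illustrative values only; the paper's IEEM key points are not given here.)
key_errors = np.array([0.0, 1.0, 2.0, 3.0])
key_costs = np.array([0.0, 0.5, 2.0, 5.0])

def piecewise_loss(err):
    """Method 1: piecewise function linking the key points.

    Linearly interpolates the cost between consecutive key points, so the
    penalty grows faster in higher-grade (more costly) intervals.
    """
    return np.interp(np.abs(err), key_errors, key_costs)

def curve_loss(err, degree=3):
    """Method 2: curve function fitted through the key points.

    Fits one smooth polynomial to all key points, giving a differentiable
    loss suitable for gradient-based training of a BP neural network.
    """
    coeffs = np.polyfit(key_errors, key_costs, degree)
    return np.polyval(coeffs, np.abs(err))

errors = np.array([-0.5, 1.5, 2.5])
print(piecewise_loss(errors))
print(curve_loss(errors))
```

Either variant can replace a symmetric loss such as MSE during training; the piecewise form is simple and exact at the key points, while the fitted curve trades exactness between key points for smoothness.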


Notes

  1. Web site: http://www.stateair.net/web/post/1/3.html.

  2. Web site: http://www.cma.gov.cn/2011qxfw/2011qsjgx.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (grant No. 61772146), the Colleges Innovation Project of Guangdong (grant No. 2016KTSCX036), and the Guangzhou Program of Philosophy and Science Development for the 13th Five-Year Plan (grant No. 2018GZGJ40).

Author information

Correspondence to Xiaoqing Liu.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Chen, S., Liu, X., Li, B. (2018). A Cost-Sensitive Loss Function for Machine Learning. In: Liu, C., Zou, L., Li, J. (eds) Database Systems for Advanced Applications. DASFAA 2018. Lecture Notes in Computer Science, vol 10829. Springer, Cham. https://doi.org/10.1007/978-3-319-91455-8_22

  • DOI: https://doi.org/10.1007/978-3-319-91455-8_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-91454-1

  • Online ISBN: 978-3-319-91455-8

  • eBook Packages: Computer Science (R0)
