ABSTRACT
In supervised learning, binary classification requires labeled data for classifier development. Cohen's kappa is commonly used as a quality measure for data annotation, even though its actual purpose is to assess inter-annotator agreement. Moreover, the relationships among Cohen's kappa, sensitivity, and specificity derived in the literature are complicated, so they cannot be used to interpret classification performance from kappa values. In this study, based on an annotation generation model, we derive simple relationships among kappa, sensitivity, and specificity when the annotations are unbiased. A relationship between kappa and Youden's J statistic, a performance metric for binary classification, is further obtained. The derived relationships are evaluated on a synthetic dataset using linear regression analysis. The results confirm the accuracy of the derived relationships and suggest that classification performance can be estimated from kappa values when the annotations are free of bias.
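To make the quantities concrete, the sketch below computes Cohen's kappa between two simulated annotators, together with the sensitivity, specificity, and Youden's J (sensitivity + specificity − 1) of one annotator measured against the true labels. It is a minimal illustration under one reading of "unbiased annotations" (both annotators share the same error rates, so their expected marginal label distributions coincide); the function names, simulation parameters, and annotation model are illustrative assumptions, not the paper's derived relationships or its synthetic dataset.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary label vectors (standard definition)."""
    po = np.mean(a == b)                          # observed agreement
    p_a1, p_b1 = np.mean(a), np.mean(b)           # marginal rates of label 1
    pe = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)    # chance agreement
    return (po - pe) / (1 - pe)

def sens_spec_j(ref, ann):
    """Sensitivity, specificity, and Youden's J of `ann` against reference labels `ref`."""
    tp = np.sum((ref == 1) & (ann == 1))
    fn = np.sum((ref == 1) & (ann == 0))
    fp = np.sum((ref == 0) & (ann == 1))
    tn = np.sum((ref == 0) & (ann == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens + spec - 1.0          # Youden's J

def annotate(truth, sens, spec, rng):
    """Hypothetical annotation model: flip positives with prob 1 - sens, negatives with prob 1 - spec."""
    flip_pos = rng.random(len(truth)) < (1 - sens)
    flip_neg = rng.random(len(truth)) < (1 - spec)
    return np.where(truth == 1,
                    np.where(flip_pos, 0, 1),
                    np.where(flip_neg, 1, 0))

# Illustrative simulation with assumed prevalence and error rates.
rng = np.random.default_rng(0)
truth = (rng.random(100_000) < 0.3).astype(int)

# Both annotators get identical error rates, one possible notion of "no bias in the annotations".
ann1 = annotate(truth, sens=0.90, spec=0.85, rng=rng)
ann2 = annotate(truth, sens=0.90, spec=0.85, rng=rng)

kappa = cohens_kappa(ann1, ann2)
sens, spec, j = sens_spec_j(truth, ann1)
print(f"kappa = {kappa:.3f}, sensitivity = {sens:.3f}, "
      f"specificity = {spec:.3f}, Youden's J = {j:.3f}")
```

Repeating the simulation over a grid of error rates and regressing kappa against J would mirror, in spirit, the linear regression analysis described in the abstract, though the paper's exact model and dataset may differ.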