Predicting warfarin dosage from clinical data: A supervised learning approach

https://doi.org/10.1016/j.artmed.2012.04.001

Abstract

Objective

Safety of anticoagulant administration has been a primary concern of the Joint Commission on Accreditation of Healthcare Organizations. Among all anticoagulants, warfarin has long been listed among the top ten drugs causing adverse drug events. Due to its narrow therapeutic range and significant side effects, warfarin dosage determination is a challenging task in clinical practice. To support clinical decision making, this study builds a warfarin dosage prediction model using a number of supervised learning techniques.

Methods and materials

The data consist of the complete historical records of 587 clinical cases in Taiwan in which patients received warfarin treatment, together with records of warfarin dose adjustment. A number of supervised learning techniques were investigated, including the multilayer perceptron, model tree, k-nearest neighbors, and support vector regression (SVR). To achieve higher prediction accuracy, we further consider both homogeneous and heterogeneous ensembles (i.e., bagging and voting). For performance evaluation, the initial dose of warfarin prescribed by clinicians is established as the baseline. The mean absolute error (MAE) and standard deviation of errors (σ(E)) are used as evaluation indicators.

Results

The overall evaluation results show that all of the learning based systems are significantly more accurate than the baseline (MAE = 0.394, σ(E) = 0.558). Among all prediction models, both Bagged Voting (MAE = 0.210, σ(E) = 0.357) with four classifiers and Bagged SVR (MAE = 0.210, σ(E) = 0.366) are suggested as the two most effective prediction models due to their lower MAE and σ(E).

Conclusion

The investigated models can not only facilitate clinicians in dosage decision-making, but also help reduce patient risk from adverse drug events.

Introduction

Drug safety is a critical indicator of patient safety. Improper or incorrect medication can lead to severe adverse events in patients and result in medical malpractice [1]. Among medication issues, improper doses of high-alert medications are of great concern to the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) [2]. The U.S. Institute for Safe Medication Practices defines high-alert medications as those that, under improper usage or management, might result in severe patient injury. Generally, subtherapeutic doses and overdoses are the main causes of improper dose decisions; for high-alert medications with narrow therapeutic ranges or those that easily induce toxic effects, correct dose determination is critical [1]. In current clinical practice, many drug doses are determined according to the physician's personal experience; proposing appropriate dose determination strategies for different patients is therefore a critical factor in addressing drug-related issues.

Safety of anticoagulant administration has been a primary concern of the JCAHO. In 2010, the JCAHO proposed the National Patient Safety Goals, the third of which was to “improve the safety of medication use”. One of the main focuses of these goals was to reduce the harm that may be caused by anticoagulant administration [2]. The disadvantages of anticoagulants are a narrow therapeutic range and significant side effects; more importantly, plasma levels of anticoagulants are affected by numerous factors, which makes dose determination difficult.

Among all anticoagulants, warfarin has long been listed among the top ten drugs causing adverse drug events (ADE) [3], [4]. In clinical practice, physicians normally test patients’ international normalized ratio (INR) and adjust the medication dose according to therapeutic drug monitoring (TDM) and personal experience. However, for patients receiving such medications for the first time, physicians are unable to perform dose evaluation immediately and effectively via TDM. Subtherapeutic doses can result in insufficient thrombolytic effects, while an overdose might cause abnormal bleeding and life-threatening outcomes. These factors strongly influence the safety of warfarin administration.

In previous studies on warfarin dose determination, researchers introduced computerized warfarin dosing nomograms as references for physicians [3]. However, dosing nomograms consider only age and INR values. For a drug with as narrow a therapeutic range and as complex pharmacological properties as warfarin, they do not provide a sufficient basis for dose adjustment. Recently, many researchers have applied statistics-based or machine learning-based techniques to construct warfarin dosage prediction models from clinical and genetic features [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15]. The results suggest that pharmacogenetic warfarin dosing, such as considering the two genetic polymorphisms in CYP2C9 and VKORC1, can generate more accurate predictions than considering clinical features only [8], [9], [10], [11], [12], [13]. The results show that 50–60% of warfarin dose variability can be explained by genetic features. However, due to the high cost of genetic testing and the difficulty of obtaining polymorphism results in a short period of time, pharmacogenetic dosing is still far from clinical use [15]. Therefore, developing a robust warfarin dosage prediction model from clinical records without genetic testing remains a challenging task.

In this study, we collaborated with a medical center in Taiwan and collected the personal information of 587 inpatients who received warfarin treatment, as well as historical data of warfarin dose adjustment by clinicians, during the period of 2005–2009. A number of supervised machine learning techniques, including k-nearest neighbors (kNN) [16], support vector regression (SVR) [17], model tree (M5) [18], and multi-layer perceptron neural network with back-propagation (MLP) [19], [20], were used to construct prediction models for warfarin dosage. In addition, we applied various ensemble techniques to improve the accuracy of the prediction results, in the hope of establishing a highly reliable prediction model. To evaluate the performance of the prediction model, we used the patient's initial dose prescribed by the clinician as a baseline. All experiments applied the mean absolute error (MAE) and standard deviation of errors (σ(E)) as evaluation indicators. Results indicated that, in comparison with warfarin dose administration according to physicians’ clinical experience, the adoption of various supervised learning techniques effectively decreased the dosage prediction error.
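As a concrete illustration of the two evaluation indicators (this is not the authors' code, and the dose values below are hypothetical), MAE and σ(E) can be computed from paired actual and predicted doses:

```python
import numpy as np

def evaluate_doses(y_true, y_pred):
    """Return (MAE, sigma_E): the mean absolute error and the
    standard deviation of the signed prediction errors."""
    errors = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    mae = float(np.mean(np.abs(errors)))
    sigma_e = float(np.std(errors))
    return mae, sigma_e

# Hypothetical warfarin doses (mg/day), for illustration only.
actual_doses = [2.5, 5.0, 3.0, 7.5]
predicted_doses = [3.0, 4.5, 3.0, 7.0]
mae, sigma_e = evaluate_doses(actual_doses, predicted_doses)
```

A low MAE indicates predictions close to the actual doses on average, while a low σ(E) indicates that the errors are consistent rather than occasionally extreme, which matters for a narrow-therapeutic-range drug.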

The remainder of this paper is organized as follows. Section 2 describes the anticoagulant warfarin and reviews previous research related to warfarin dosage determination. Section 3 details our warfarin-dosing prediction system as well as the prediction techniques for building the prediction model. Section 4 describes the preparation of the data, the experimental setup, and the performance measures. Section 5 presents thorough experimental results and discussions. Section 6 concludes our study.


Warfarin

The main function of anticoagulants is to prevent thrombosis. Anticoagulants are clinically used to treat or prevent thromboembolism and related diseases, and warfarin is the most commonly used such drug. According to statistics, warfarin ranks fourth among cardiovascular prescription drugs [21]; its widespread usage is due to a high oral absorption rate, which provides favorable anticoagulant efficacy [22]. However, despite its therapeutic effect, warfarin has a narrow

Prediction model for warfarin dosing

The aim of prediction analysis is to construct a prediction model that uncovers the relationship between input and output variables from a set of instances using various supervised learning techniques. In this study, we adopt both single-classifier techniques and classifier ensemble methods to build warfarin dosage prediction models. Four well-known single-classifier techniques, including kNN, SVR, M5, and MLP, as well as two classifier ensemble methods, Bagging and Voting, are used to compare

Data preparation

This study collected the complete records of inpatients who received warfarin therapy from January 2005 to December 2009 at a medical center in Taiwan. To eliminate the influence of warfarin from previous treatment periods and to precisely evaluate the accuracy of the prediction model, a washout period of 3 months is applied: inpatients treated with warfarin within the 3 months before hospitalization are excluded from this study.
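The washout exclusion can be expressed as a simple record filter. The sketch below uses hypothetical column names and dates, not the actual study data:

```python
import pandas as pd

# Hypothetical admission records; column names are illustrative only.
records = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "admit_date": pd.to_datetime(["2007-06-01", "2007-06-01", "2007-06-01"]),
    "last_warfarin_date": pd.to_datetime(["2007-05-15", "2006-12-01", None]),
})

# 3-month washout: keep only patients with no warfarin exposure
# in the 90 days before hospitalization.
washout = pd.Timedelta(days=90)
eligible = records[
    records["last_warfarin_date"].isna()
    | (records["admit_date"] - records["last_warfarin_date"] > washout)
]
```

Patients with no recorded prior exposure pass the filter directly; patients exposed within the washout window are dropped so that residual anticoagulant effect does not contaminate the dose-response data.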

Each clinical record

Results and discussion

Accurately predicting the dosage of a high-alert drug is a challenging task, especially for patients using warfarin for the first time. Besides having a narrow therapeutic range, warfarin causes drug–drug interactions (DDIs) when combined with any of more than 250 particular medicines. Moreover, clinicians usually rely on personal experience when prescribing warfarin to first-time patients. Therefore, the initial dose of warfarin prescribed by clinicians was established as the baseline of our study. All the

Conclusion

Drug safety has been a critical issue in clinical studies in recent years, especially for high-alert medications with narrow therapeutic indexes and toxic effects. Although warfarin has been recognized as being among the top ten drugs with the highest number of ADEs in recent years, dosage decisions for initial-use patients are usually made according to physicians’ clinical experience or dosing nomograms, resulting in improper doses of medication. This study applied several supervised

Acknowledgment

This research was supported in part by the National Science Council of the Republic of China under Grant NSC 98-2410-H-194-054.

References (38)

  • I. Solomon et al.

    Applying an artificial neural network to warfarin maintenance dose prediction

    Israel Medical Association Journal

    (2004)
  • E. Cosgun et al.

High-dimensional pharmacogenetic prediction of a continuous trait using machine learning techniques with application to warfarin dosage prediction in African Americans

    Bioinformatics

    (2011)
  • L. Miao et al.

    Contribution of age, body weight, and CYP2C9 and VKORC1 genotype to the anticoagulant response to warfarin: proposal for a new dosing regimen in Chinese patients

    European Journal of Clinical Pharmacology

    (2007)
  • H. Schelleman et al.

    Dosing algorithms to predict warfarin maintenance dose in Caucasians and African Americans

    Clinical Pharmacology & Therapeutics

    (2008)
  • The International Warfarin Pharmacogenetics Consortium

    Estimation of the warfarin dose with clinical and pharmacogenetic data

    New England Journal of Medicine

    (2009)
  • H.I. Bussey et al.

    Genetic testing for warfarin dosing? Not yet ready for prime time

    Pharmacotherapy

    (2008)
  • D.W. Aha et al.

    Instance-based learning algorithms

Machine Learning

    (1991)
  • S.K. Shevade et al.

    Improvements to the SMO algorithm for SVM regression

    IEEE Transactions on Neural Networks

    (2000)
  • J.R. Quinlan

    Learning with continuous classes
