Abstract:
The automatic ICD-10 classification of medical documents remains an open problem, despite its crucial importance. The machine learning approaches devoted to this task contrast with the lack of annotated resources, especially for languages other than English. Recent large-scale multilingual Transformer-based neural language models have provided an innovative approach for dealing with cross-lingual Natural Language Processing tasks. In this paper, we present a preliminary evaluation of the Cross-lingual Language Model (XLM) architecture, a recent multilingual Transformer-based model from the literature, on the cross-lingual ICD-10 multilabel classification of short medical notes. In detail, we analysed the performance obtained by fine-tuning the XLM model on English-language training data and evaluating it on ICD-10 code prediction for an Italian test set. The results show that the novel XLM multilingual neural language architecture is very promising and can be very useful for low-resource languages.
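The multilabel setting described in the abstract can be sketched as follows: each ICD-10 code gets an independent score from the classification head, passed through a sigmoid and thresholded, so a note can receive zero, one, or several codes. This is only an illustrative sketch, not the paper's implementation; the label set, logit values, and 0.5 threshold are hypothetical, and the encoder producing the logits (XLM in the paper) is stood in for by hard-coded numbers.

```python
import numpy as np

# Hypothetical label set for illustration; the paper's actual code set is not given here.
ICD10_CODES = ["I10", "E11.9", "J45"]

def predict_codes(logits, threshold=0.5):
    """Multilabel decision: one independent sigmoid per ICD-10 code,
    keeping every code whose probability clears the threshold."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return [code for code, p in zip(ICD10_CODES, probs) if p >= threshold]

# Toy logits as a fine-tuned encoder's classification head might emit for one note.
print(predict_codes([2.0, -1.0, 0.3]))  # → ['I10', 'J45']
```

Because each code is scored independently (rather than via a softmax over codes), the same thresholding applies regardless of the input language once the multilingual encoder is fine-tuned, which is what enables the English-to-Italian transfer evaluated in the paper.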
Date of Conference: 07-10 July 2020
Date Added to IEEE Xplore: 12 October 2020