
Leveraging Artificial Intelligence in Medicine Compliance Check

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12783)

Abstract

This paper utilizes AI technology to address the challenge of medicine compliance checking. Specifically, we propose a Logic-BERT model that estimates whether a given medicine can be used in a patient's specific situation based on the electronic medical record. We design a sentence-level architecture that distills the text content through segmentation, selection, and recombination to overcome the input-length limitation of bidirectional encoder representations from transformers (BERT). We also apply data augmentation that integrates logic rules to enhance the performance of the proposed model. Experiments on real data verify the effectiveness of our model.
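The segment–select–recombine idea described above can be illustrated with a small sketch. This is not the authors' Logic-BERT implementation (the abstract gives no code); the keyword-based relevance score and whitespace token count below are stand-in assumptions for whatever learned scoring and tokenization the paper actually uses.

```python
# Sketch of a segment-select-recombine pipeline for fitting a long
# clinical note into BERT's fixed input window. Illustrative only:
# the scoring heuristic and tokenizer here are hypothetical stand-ins.

MAX_TOKENS = 512  # BERT's standard input-length limit


def segment(document: str) -> list[str]:
    """Split the record into sentence-level segments."""
    return [s.strip() for s in document.split(".") if s.strip()]


def score(sentence: str, keywords: set[str]) -> int:
    """Toy relevance score: count of compliance-related keywords.
    The paper presumably learns which sentences matter instead."""
    return sum(1 for w in sentence.lower().split() if w in keywords)


def select_and_recombine(document: str, keywords: set[str],
                         budget: int = MAX_TOKENS) -> str:
    """Keep the highest-scoring sentences that fit the token budget,
    then restore their original order before feeding them to BERT."""
    sentences = segment(document)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i], keywords),
                    reverse=True)
    chosen, used = set(), 0
    for i in ranked:
        n = len(sentences[i].split())  # crude whitespace token count
        if used + n <= budget:
            chosen.add(i)
            used += n
    return ". ".join(sentences[i] for i in sorted(chosen)) + "."
```

In the paper's setting the recombined text would then be passed to BERT for classification; the sketch only shows how a token budget constrains selection while recombination preserves the original sentence order.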



Author information


Correspondence to Wei Zhu.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Jia, G., Zhu, W., Tang, J., Zhang, W. (2021). Leveraging Artificial Intelligence in Medicine Compliance Check. In: Nah, F.F.-H., Siau, K. (eds.) HCI in Business, Government and Organizations. HCII 2021. Lecture Notes in Computer Science, vol. 12783. Springer, Cham. https://doi.org/10.1007/978-3-030-77750-0_37


  • DOI: https://doi.org/10.1007/978-3-030-77750-0_37


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77749-4

  • Online ISBN: 978-3-030-77750-0

  • eBook Packages: Computer Science (R0)
