
Adversarial Vulnerability of Deep Learning Models in Analyzing Next Generation Sequencing Data


Abstract:

Deep Neural Networks (DNNs) can be used to accurately identify infectious pathogens. Unfortunately, DNNs can be exploited by bioterrorists, using adversarial attacks, to stage a fake super-bug outbreak or to hide the extent of a real one. In this work, we show how a DNN that accurately classifies Core Genome Multi-Locus Sequence Typing (cgMLST) profiles can be subverted by adversarial attacks. To this end, we train a novel DNN model, the Methicillin Resistance Classification Network (MRCN), which uses cgMLST profiles to identify strains of Staphylococcus aureus (Staph) that are resistant to the antibiotic methicillin with 93.8 percent accuracy. To defend against this kind of exploitation, we train a second DNN model, the Synthetic Profile Classifier (SPC), which differentiates between natural Staph bacteria and generic synthetic Staph bacteria with 92.3 percent accuracy. Our experiments show that the MRCN model is highly susceptible to multiple adversarial attacks and that the defenses we propose do not provide effective protection against them. As a result, a bioterrorist could use the compromised DNN model to inflict immense damage by staging a fake epidemic or by delaying the detection of a real one, allowing it to proliferate undeterred.
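The kind of adversarial attack the abstract describes can be illustrated with a minimal gradient-sign (FGSM-style) sketch against a toy classifier. Everything below is hypothetical and is not the paper's MRCN model or attack: the weights, input vector, and perturbation budget are invented for illustration, and real cgMLST profiles are discrete allele vectors, so an actual attack would have to map any perturbation back to valid allele assignments.

```python
import numpy as np

# Hedged sketch: a fast-gradient-sign (FGSM-style) attack on a toy
# logistic-regression "resistance classifier". All values are hypothetical.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0, 0.5])    # toy model weights (hypothetical)
x = np.array([1.0, -1.0, 1.0])    # a profile the model flags as "resistant"

p_clean = sigmoid(w @ x)          # sigmoid(3.5) ~ 0.97 -> class "resistant"

# For logistic loss with true label y = 1, d(loss)/dx = (p - y) * w,
# so stepping along sign(grad) increases the loss for the true label.
grad = (p_clean - 1.0) * w
eps = 2.0                         # perturbation budget (hypothetical)
x_adv = x + eps * np.sign(grad)   # adversarial profile

p_adv = sigmoid(w @ x_adv)        # sigmoid(-3.5) ~ 0.03 -> label flipped
print(p_clean, p_adv)
```

A small perturbation in the gradient-sign direction flips the toy model's decision, which is the mechanism that lets an attacker make a resistant strain look susceptible (or vice versa) to a vulnerable classifier.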
Date of Conference: 16-19 December 2020
Date Added to IEEE Xplore: 13 January 2021
Conference Location: Seoul, Korea (South)

