Sponsor: ACM SIGSAC
It is our pleasure to welcome you to the 13th ACM Workshop on Artificial Intelligence and Security - AISec 2020. Having been co-located with CCS annually for 13 consecutive years, AISec is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also developed theory and analytics unique to this domain, exploring diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to malware, spam, and intrusion detection. AISec 2020 received 28 submissions, of which 11 (40%) were selected for publication and presentation as full papers. Submissions arrived from researchers in many countries and from a wide variety of institutions, both academic and corporate.
Proceeding Downloads
Where Does the Robustness Come from?: A Study of the Transformation-based Ensemble Defence
This paper provides a thorough study of the effectiveness of the transformation-based ensemble defence for image classification and the reasons behind it. It has been empirically shown that such defences can enhance the robustness against evasion attacks, while ...
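To illustrate the general idea of a transformation-based ensemble defence (this is a minimal toy sketch, not the paper's implementation; the transformations and the threshold classifier here are invented for illustration):

```python
# Toy sketch: classify several transformed copies of an input image
# (stored as a list of rows of pixel intensities) and majority-vote.
from collections import Counter

def identity(img):
    return img

def flip_horizontal(img):
    return [list(reversed(row)) for row in img]

def shift_right(img):
    return [[row[-1]] + row[:-1] for row in img]

def ensemble_predict(classify, img, transforms=(identity, flip_horizontal, shift_right)):
    """Run the classifier on each transformed copy and return the majority label."""
    votes = [classify(t(img)) for t in transforms]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical stand-in classifier: label 1 if total intensity exceeds a threshold.
def toy_classifier(img):
    return 1 if sum(sum(row) for row in img) > 2 else 0
```

The intuition the defence relies on is that an adversarial perturbation tuned to one input representation tends not to survive all transformations, so the majority vote can recover the clean label.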
Towards Certifiable Adversarial Sample Detection
Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat. There have been various proposals to improve CNNs' adversarial ...
E-ABS: Extending the Analysis-By-Synthesis Robust Classification Model to More Complex Image Domains
Conditional generative models, such as Schott et al.'s Analysis-by-Synthesis (ABS), have state-of-the-art robustness on MNIST, but fail in more challenging datasets. In this paper, we present E-ABS, an improvement on ABS that achieves state-of-the-art ...
SCRAP: Synthetically Composed Replay Attacks vs. Adversarial Machine Learning Attacks against Mouse-based Biometric Authentication
Adversarial attacks have gained popularity recently due to their simplicity and impact. Their applicability to diverse security scenarios, however, is less well understood. In particular, in some scenarios, attackers may naturally come up with ad-hoc black-box ...
Mind the Gap: On Bridging the Semantic Gap between Machine Learning and Malware Analysis
- Michael R. Smith,
- Nicholas T. Johnson,
- Joe B. Ingram,
- Armida J. Carbajal,
- Bridget I. Haus,
- Eva Domschot,
- Ramyaa Ramyaa,
- Christopher C. Lamb,
- Stephen J. Verzi,
- W. Philip Kegelmeyer
Machine learning (ML) techniques are being used to detect increasing amounts of malware and variants. Despite successful applications of ML, we hypothesize that the full potential of ML is not realized in malware analysis (MA) due to a semantic gap ...
The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search
Training classifiers that are robust against adversarially modified examples is becoming increasingly important in practice. In the field of malware detection, adversaries modify malicious binary files to seem benign while preserving their malicious ...
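The greedy bit-search idea can be sketched as follows (a hypothetical illustration against a linear scorer, not the paper's GRAMS algorithm: the weights, the feature set, and the threshold are all invented; real attacks must also preserve the binary's malicious functionality):

```python
# Toy sketch: greedily flip binary features of a "malware" feature vector
# to push a linear maliciousness score below the detection threshold.
def score(weights, bits):
    return sum(w * b for w, b in zip(weights, bits))

def greedy_flip(weights, bits, flippable, threshold):
    """Flip one allowed bit at a time, always choosing the flip that most
    reduces the score, until the sample scores below the threshold or no
    flip helps any more."""
    bits = list(bits)
    while score(weights, bits) >= threshold:
        best, best_gain = None, 0.0
        for i in flippable:
            delta = weights[i] * (1 - 2 * bits[i])  # score change if bit i flips
            if delta < best_gain:
                best, best_gain = i, delta
        if best is None:  # no remaining flip reduces the score
            break
        bits[best] ^= 1
    return bits
```

Restricting the search to a `flippable` set models the practical constraint that only functionality-preserving modifications of the binary are allowed.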
Automatic Yara Rule Generation Using Biclustering
- Edward Raff,
- Richard Zak,
- Gary Lopez Munoz,
- William Fleming,
- Hyrum S. Anderson,
- Bobby Filar,
- Charles Nicholas,
- James Holt
Yara rules are a ubiquitous tool among cybersecurity practitioners and analysts. Developing high-quality Yara rules to detect a malware family of interest can be labor- and time-intensive, even for expert users. Few tools exist and relatively little ...
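A heavily simplified version of the underlying idea, finding byte patterns shared across a malware family but absent from benign files, might look like this (a toy sketch, not the paper's biclustering method, which can also match patterns common to only a *subset* of the family; all names here are invented):

```python
# Toy sketch: extract byte n-grams shared by every family sample but
# absent from benign files, and emit them as a Yara-style rule.
def ngrams(sample, n):
    return {sample[i:i + n] for i in range(len(sample) - n + 1)}

def candidate_signatures(family, benign, n=4):
    shared = set.intersection(*(ngrams(s, n) for s in family))
    benign_grams = set().union(*(ngrams(s, n) for s in benign))
    return shared - benign_grams

def to_yara(name, grams):
    """Format the candidate byte patterns as a minimal Yara-style rule."""
    strings = "\n".join(
        f'    $s{i} = {{ {gram.hex(" ")} }}' for i, gram in enumerate(sorted(grams))
    )
    return f"rule {name} {{\n  strings:\n{strings}\n  condition:\n    any of them\n}}"
```

Example: `candidate_signatures([b"AAmalwareBB", b"CCmalwareDD"], [b"hello world"])` yields the 4-grams of the shared `malware` substring, which `to_yara` can then render as hex string patterns.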
Flow-based Detection and Proxy-based Evasion of Encrypted Malware C2 Traffic
State-of-the-art deep learning techniques are known to be vulnerable to evasion attacks, in which an adversarial sample is generated from a malign sample and misclassified as benign.
Detection of encrypted malware command and control traffic based on TCP/...
eNNclave: Offline Inference with Model Confidentiality
Outsourcing machine learning inference creates a confidentiality dilemma: either the client has to trust the server with potentially sensitive input data, or the server has to share its commercially valuable model. Known remedies include homomorphic ...
Risk-based Authentication Based on Network Latency Profiling
Impersonation attacks against web authentication servers have been increasing in complexity over the last decade. Tunnelling services, such as VPNs or proxies, can, for instance, be used to faithfully impersonate victims in foreign countries. In this ...
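The basic mechanism of latency-based risk scoring can be sketched as follows (a minimal illustration of the general idea, not the paper's method; the median-based profile and the 50% tolerance are invented for the example):

```python
# Toy sketch: compare a login attempt's round-trip-time samples against
# the user's enrolled latency profile and flag large deviations as risky.
from statistics import median

def build_profile(rtts_ms):
    """Enrol a user's typical round-trip time (in milliseconds)."""
    return median(rtts_ms)

def risk_score(profile_ms, observed_ms):
    """Relative deviation of the observed median RTT from the profile."""
    return abs(median(observed_ms) - profile_ms) / profile_ms

def is_suspicious(profile_ms, observed_ms, tolerance=0.5):
    return risk_score(profile_ms, observed_ms) > tolerance
```

The intuition is that tunnelling through a VPN or proxy adds round-trip latency that is hard for the attacker to remove, so an attempt routed through a distant tunnel deviates measurably from the victim's enrolled profile.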
Disabling Backdoor and Identifying Poison Data by using Knowledge Distillation in Backdoor Attacks on Deep Neural Networks
Backdoor attacks are poisoning attacks and serious threats to deep neural networks. When an adversary mixes poison data into a training dataset, the result is called a poison training dataset. A model trained on such a dataset ...
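The poisoning step being defended against can be sketched as follows (a toy illustration of how a backdoor trigger is planted, not the paper's distillation-based defence; the corner-pixel trigger and poisoning rate are invented):

```python
# Toy sketch: construct a poison training dataset by stamping a fixed
# trigger pattern on a fraction of samples and relabelling them with
# the attacker's target class.
def add_trigger(img, trigger_value=9):
    img = [row[:] for row in img]     # copy so the clean sample is untouched
    img[-1][-1] = trigger_value       # stamp the trigger in one corner
    return img

def poison_dataset(dataset, target_label, rate=0.1):
    """Return a copy of (image, label) pairs where a fraction carries the
    trigger and the target label. For clarity this poisons the first
    samples; a real attacker would choose them randomly."""
    n_poison = max(1, int(len(dataset) * rate))
    poisoned = []
    for i, (img, label) in enumerate(dataset):
        if i < n_poison:
            poisoned.append((add_trigger(img), target_label))
        else:
            poisoned.append((img, label))
    return poisoned
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target label whenever the trigger is present, which is what the defence must detect and disable.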
Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security