DOI: 10.1145/3411508
AISec'20: Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security
ACM 2020 Proceeding
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, 13 November 2020
ISBN:
978-1-4503-8094-2
Published:
09 November 2020
Abstract

It is our pleasure to welcome you to the 13th ACM Workshop on Artificial Intelligence and Security - AISec 2020. AISec, having been co-located with CCS annually for 13 consecutive years, is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also been developing theory and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to malware, spam, and intrusion detection. AISec 2020 received 28 submissions, of which 11 (39%) were selected for publication and presentation as full papers. Submissions arrived from researchers in many different countries and from a wide variety of institutions, both academic and corporate.

SESSION: Session 1: Adversarial Machine Learning
Where Does the Robustness Come from?: A Study of the Transformation-based Ensemble Defence

This paper provides a thorough study of the effectiveness of the transformation-based ensemble defence for image classification and the reasons behind it. It has been empirically shown that such defences can enhance robustness against evasion attacks, while ...

Open Access
Towards Certifiable Adversarial Sample Detection

Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat. There have been various proposals to improve CNNs' adversarial ...

Open Access
E-ABS: Extending the Analysis-By-Synthesis Robust Classification Model to More Complex Image Domains

Conditional generative models, such as Schott et al.'s Analysis-by-Synthesis (ABS), have state-of-the-art robustness on MNIST, but fail in more challenging datasets. In this paper, we present E-ABS, an improvement on ABS that achieves state-of-the-art ...

SCRAP: Synthetically Composed Replay Attacks vs. Adversarial Machine Learning Attacks against Mouse-based Biometric Authentication

Adversarial attacks have gained popularity recently due to their simplicity and impact. Their applicability to diverse security scenarios is however less understood. In particular, in some scenarios, attackers may come up naturally with ad-hoc black-box ...

SESSION: Session 2: Malware Detection
Public Access
Mind the Gap: On Bridging the Semantic Gap between Machine Learning and Malware Analysis

Machine learning (ML) techniques are being used to detect increasing amounts of malware and variants. Despite successful applications of ML, we hypothesize that the full potential of ML is not realized in malware analysis (MA) due to a semantic gap ...

The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search

Training classifiers that are robust against adversarially modified examples is becoming increasingly important in practice. In the field of malware detection, adversaries modify malicious binary files to seem benign while preserving their malicious ...

Automatic Yara Rule Generation Using Biclustering

Yara rules are a ubiquitous tool among cybersecurity practitioners and analysts. Developing high-quality Yara rules to detect a malware family of interest can be labor- and time-intensive, even for expert users. Few tools exist and relatively little ...

Open Access
Flow-based Detection and Proxy-based Evasion of Encrypted Malware C2 Traffic

State-of-the-art deep learning techniques are known to be vulnerable to evasion attacks, where an adversarial sample is generated from a malign sample and misclassified as benign.

Detection of encrypted malware command and control traffic based on TCP/...

SESSION: Session 3: Machine Learning for Security and Privacy
eNNclave: Offline Inference with Model Confidentiality

Outsourcing machine learning inference creates a confidentiality dilemma: either the client has to trust the server with potentially sensitive input data, or the server has to share its commercially valuable model. Known remedies include homomorphic ...

Risk-based Authentication Based on Network Latency Profiling

Impersonation attacks against web authentication servers have been increasing in complexity over the last decade. Tunnelling services, such as VPNs or proxies, can, for instance, be used to faithfully impersonate victims in foreign countries. In this ...

Disabling Backdoor and Identifying Poison Data by using Knowledge Distillation in Backdoor Attacks on Deep Neural Networks

Backdoor attacks are poisoning attacks and a serious threat to deep neural networks. When an adversary mixes poison data into a training dataset, the training dataset is called a poison training dataset. A model trained with the poison training dataset ...

Contributors
  • University of South Florida, Tampa
  • University of South Florida, Tampa
  • University of California, Berkeley
  • DeepMind Technologies Limited
  • University of Cagliari

Acceptance Rates

Overall Acceptance Rate 94 of 231 submissions, 41%
Year        Submitted  Accepted  Rate
AISec '18          32         9   28%
AISec '17          36        11   31%
AISec '16          38        12   32%
AISec '15          25        11   44%
AISec '14          24        12   50%
AISec '13          17        10   59%
AISec '12          24        10   42%
AISec '10          15        10   67%
AISec '08          20         9   45%
Overall           231        94   41%