- Sponsor: SIGSAC
It is our pleasure to welcome you to the 17th ACM Workshop on Artificial Intelligence and Security - AISec 2024. AISec, having been annually co-located with CCS for 17 consecutive years, is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also developed theories and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to malware, spam, and intrusion detection. AISec 2024 received 72 submissions, of which 18 (25%) were selected for publication and presentation as full papers. Submissions arrived from researchers in many different countries and from a wide variety of institutions, both academic and corporate.
Proceedings
Efficient Model Extraction via Boundary Sampling
This paper introduces a novel data-free model extraction attack that significantly advances the current state-of-the-art in terms of efficiency, accuracy, and effectiveness. Traditional black-box methods rely on using the victim's model as an oracle to ...
Feature Selection from Differentially Private Correlations
Data scientists often seek to identify the most important features in high-dimensional datasets. This can be done through L1-regularized regression, but this can become inefficient for very high-dimensional datasets. Additionally, high-dimensional ...
It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss
Differentially Private Stochastic Gradient Descent (DP-SGD) is a popular iterative algorithm used to train machine learning models while formally guaranteeing the privacy of users. However, the privacy analysis of DP-SGD makes the unrealistic assumption ...
Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients
Federated learning (FL) enables several users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., medical ...
Semantic Stealth: Crafting Covert Adversarial Patches for Sentiment Classifiers Using Large Language Models
Deep learning models have been shown to be vulnerable to adversarial attacks, in which perturbations to their inputs cause the model to produce incorrect predictions. As opposed to adversarial attacks in computer vision, where small changes introduced to ...
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations. Certified robustness has been proposed as a mitigation where given an input x, a classifier returns a prediction ...
On the Robustness of Graph Reduction Against GNN Backdoor
Graph Neural Networks (GNNs) have been shown to be susceptible to backdoor poisoning attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction techniques have recently emerged as effective methods for accelerating GNN ...
Adversarially Robust Anti-Backdoor Learning
Defending against data poisoning-based backdoors at training time is notoriously difficult due to the wide range of attack variants. Recent attacks use perturbations/triggers subtly entangled with the benign features, impeding the separation of poisonous ...
Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks
We introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., "Ignore previous instructions and..."), we show that it is possible to conceptualize the creation of execution ...
Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
Deep learning models are continually improving in accuracy, but they remain vulnerable to adversarial attacks, often resulting in the misclassification of adversarial examples. Adversarial training can mitigate this problem by enhancing the model's ...
The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations
To help adversarial examples generalize from surrogate machine-learning (ML) models to targets, certain transferability-based black-box evasion attacks incorporate data augmentations (e.g., random resizing). Yet, prior work has explored limited ...
ELMs Under Siege: A Study on Backdoor Attacks on Extreme Learning Machines
Due to their computational efficiency and speed during training and inference, extreme learning machines are suitable for simple learning tasks on lightweight datasets. Examples of their real-world applications include healthcare and edge devices, where ...
EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody
Speaker identification (SI) determines a speaker's identity based on their utterances. Previous work indicates that SI deep neural networks (DNNs) are vulnerable to backdoor attacks that embed a backdoor functionality in a DNN causing incorrect outputs ...
When Adversarial Perturbations meet Concept Drift: An Exploratory Analysis on ML-NIDS
We scrutinize the effects of "blind" adversarial perturbations against machine learning (ML)-based network intrusion detection systems (NIDS) affected by concept drift. There may be cases in which a real attacker -- unable to access and hence unaware ...
Towards Robust, Explainable, and Privacy-Friendly Sybil Detection
Online Social Networks (OSN) are well established tools for cooperation and exchange of ideas between peers. The authenticity of peers cannot be verified easily but is derived from trust relations inside the OSN. With the help of sybil attacks, this can ...
Using LLM Embeddings with Similarity Search for Botnet TLS Certificate Detection
Modern botnets leverage TLS encryption to mask C&C server communications. TLS certificates used by botnets could exhibit subtle characteristics that facilitate detection. In this paper we investigate whether text features from TLS certificates can be ...
Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models
Web Vulnerability Assessment and Penetration Testing (Web VAPT) is a comprehensive cybersecurity process that uncovers a range of vulnerabilities which, if exploited, could compromise the integrity of web applications. In a VAPT, it is common to perform ...
Music to My Ears: Turning GPU Sounds into Intellectual Property Gold
In this paper, we introduce an acoustic side-channel attack that extracts crucial information from Deep Neural Networks (DNNs) operating on GPUs. Utilizing a Micro-Electro-Mechanical Systems (MEMS) microphone with an extensive frequency range, we ...