DOI: 10.1145/3689932
AISec '24: Proceedings of the 2024 Workshop on Artificial Intelligence and Security
ACM 2024 Proceeding
Publisher: Association for Computing Machinery, New York, NY, United States
Conference: CCS '24: ACM SIGSAC Conference on Computer and Communications Security, Salt Lake City, UT, USA, October 14-18, 2024
ISBN: 979-8-4007-1228-9
Published: 22 November 2024
Abstract

It is our pleasure to welcome you to the 17th ACM Workshop on Artificial Intelligence and Security - AISec 2024. AISec, which has been co-located with CCS annually for 17 consecutive years, is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also developed theories and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to malware, spam, and intrusion detection. AISec 2024 received 72 submissions, of which 18 (25%) were selected for publication and presentation as full papers. Submissions arrived from researchers in many different countries and from a wide variety of institutions, both academic and corporate.

SESSION: Session 1: Privacy-Preserving Machine Learning
research-article
Open Access
Efficient Model Extraction via Boundary Sampling

This paper introduces a novel data-free model extraction attack that significantly advances the current state of the art in terms of efficiency, accuracy, and effectiveness. Traditional black-box methods rely on using the victim's model as an oracle to ...

research-article
Feature Selection from Differentially Private Correlations

Data scientists often seek to identify the most important features in high-dimensional datasets. This can be done through L1-regularized regression, which becomes inefficient for very high-dimensional datasets. Additionally, high-dimensional ...
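
As context for the L1-regression baseline this abstract refers to, the sketch below shows lasso-based feature selection with scikit-learn; the synthetic data, alpha value, and feature indices are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
# Only features 0 and 7 actually influence the response.
y = 3 * X[:, 0] - 2 * X[:, 7] + rng.normal(scale=0.1, size=200)

# L1 regularization drives most coefficients to exactly zero;
# the surviving nonzero coefficients are the "selected" features.
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.nonzero(lasso.coef_)[0]
print("selected features:", selected)  # expect (roughly) features 0 and 7
```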

research-article
Open Access
It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss

Differentially Private Stochastic Gradient Descent (DP-SGD) is a popular iterative algorithm used to train machine learning models while formally guaranteeing the privacy of users. However, the privacy analysis of DP-SGD makes the unrealistic assumption ...
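
For readers unfamiliar with the algorithm under analysis, here is a minimal NumPy sketch of one DP-SGD step (per-example gradient clipping followed by calibrated Gaussian noise). The hyperparameter names and values are illustrative, and a real implementation would also run a privacy accountant to track the cumulative (eps, delta) budget.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update: clip each per-example gradient, average, add Gaussian noise."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise std is calibrated to the clipping bound; a privacy accountant
    # (omitted here) turns noise_multiplier and the step count into (eps, delta).
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return params - lr * (mean_grad + np.random.normal(0.0, sigma, mean_grad.shape))
```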

research-article
Open Access
Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients

Federated learning (FL) enables several users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., medical ...
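
The intuition behind bias-gradient leakage can be seen in the single-example case: with a softmax cross-entropy loss, the gradient of the loss with respect to the last-layer bias equals (p - y), so its only negative coordinate reveals the label. The toy sketch below illustrates this known property; the paper's general, batched attack goes further.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# With softmax + cross-entropy, the loss gradient w.r.t. the last-layer bias
# for one example is (p - y): every entry is positive except the true-label
# entry, which is negative.
logits = np.array([1.2, -0.3, 0.7, 2.1])
true_label = 2
y = np.eye(4)[true_label]
bias_grad = softmax(logits) - y         # what the shared model update exposes
recovered = int(np.argmin(bias_grad))   # the single negative coordinate
assert recovered == true_label
```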

SESSION: Session 2: Machine Learning Security
research-article
Open Access
Semantic Stealth: Crafting Covert Adversarial Patches for Sentiment Classifiers Using Large Language Models

Deep learning models have been shown to be vulnerable to adversarial attacks, in which perturbations to their inputs cause the model to produce incorrect predictions. As opposed to adversarial attacks in computer vision, where small changes introduced to ...

research-article
Open Access
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness

Adversarial examples pose a security risk as they can alter the decisions of a machine learning classifier through slight input perturbations. Certified robustness has been proposed as a mitigation in which, given an input x, a classifier returns a prediction ...
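
To see why floating-point arithmetic matters at a certified boundary, the toy search below looks for perturbation vectors whose norm lands on different sides of a radius R depending on whether it is computed in float32 or float64. Everything here (dimension, R, the search loop) is illustrative rather than the paper's actual attack.

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.float32(1.0)  # a hypothetical certified radius
for _ in range(100_000):
    delta = rng.normal(size=16)
    delta /= np.linalg.norm(delta)             # unit norm in float64
    delta32 = delta.astype(np.float32)
    n32 = np.linalg.norm(delta32)              # norm accumulated in float32
    n64 = np.linalg.norm(delta32.astype(np.float64))  # same vector, higher precision
    if (n32 < R) != (n64 < float(R)):
        print(f"boundary disagreement: float32 norm={n32}, float64 norm={n64}")
        break
else:
    print("no disagreement found in this run")
```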

research-article
On the Robustness of Graph Reduction Against GNN Backdoor

Graph Neural Networks (GNNs) have been shown to be susceptible to backdoor poisoning attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction techniques have recently emerged as effective methods for accelerating GNN ...

research-article
Open Access
Adversarially Robust Anti-Backdoor Learning

Defending against data poisoning-based backdoors at training time is notoriously difficult due to the wide range of attack variants. Recent attacks use perturbations/triggers subtly entangled with the benign features, impeding the separation of poisonous ...

research-article
Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks

We introduce a new family of prompt injection attacks, termed Neural Exec. Unlike known attacks that rely on handcrafted strings (e.g., "Ignore previous instructions and..."), we show that it is possible to conceptualize the creation of execution ...

research-article
Open Access
Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training

Deep learning models are continually improving in accuracy, but they remain vulnerable to adversarial attacks, often resulting in the misclassification of adversarial examples. Adversarial training can mitigate this problem by enhancing the model's ...
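
As a rough sketch of the adversarial-training loop such work builds on, the PyTorch snippet below runs one FGSM-based training step (a single-step perturbation; multi-step PGD is the stronger, common choice). The model, optimizer, and eps are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=8 / 255):
    """One adversarial-training step: craft FGSM examples, then train on them."""
    # 1) Generate adversarial examples against the current model.
    model.eval()  # freeze batch-norm statistics while crafting
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    with torch.no_grad():
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1)

    # 2) Take a normal optimization step on the adversarial batch.
    model.train()
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```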

research-article
Open Access
The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations

To help adversarial examples generalize from surrogate machine-learning (ML) models to targets, certain transferability-based black-box evasion attacks incorporate data augmentations (e.g., random resizing). Yet, prior work has explored limited ...
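
The core mechanic of augmentation-based transfer attacks can be sketched as averaging the input gradient over several randomly transformed copies of the input before taking the attack step. This toy PyTorch version uses a single resize-and-pad augmentation with illustrative sizes; the paper's contribution is composing many augmentations.

```python
import torch
import torch.nn.functional as F

def augmented_grad(model, x, y, n_copies=8, size=32):
    """Average the loss gradient over randomly resized-and-padded copies of x."""
    x = x.detach().requires_grad_(True)
    for _ in range(n_copies):
        s = int(torch.randint(size - 8, size + 1, (1,)))   # random smaller size
        xr = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
        pad = size - s
        left = int(torch.randint(0, pad + 1, (1,)))
        top = int(torch.randint(0, pad + 1, (1,)))
        xp = F.pad(xr, (left, pad - left, top, pad - top))  # back to size x size
        # Resize and pad are differentiable, so backward() accumulates into x.grad.
        F.cross_entropy(model(xp), y).backward()
    return x.grad / n_copies

# One transfer-attack step on a surrogate model (alpha is a step size):
# x_adv = (x + alpha * augmented_grad(model, x, y).sign()).clamp(0, 1)
```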

research-article
Open Access
ELMs Under Siege: A Study on Backdoor Attacks on Extreme Learning Machines

Due to their computational efficiency and speed during training and inference, extreme learning machines are suitable for simple learning tasks on lightweight datasets. Examples of their real-world applications include healthcare and edge devices, where ...

research-article
Open Access
EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody

Speaker identification (SI) determines a speaker's identity based on their utterances. Previous work indicates that SI deep neural networks (DNNs) are vulnerable to backdoor attacks that embed a backdoor functionality in a DNN causing incorrect outputs ...

SESSION: Session 3: System Security
research-article
When Adversarial Perturbations meet Concept Drift: An Exploratory Analysis on ML-NIDS

We scrutinize the effects of "blind" adversarial perturbations against machine learning (ML)-based network intrusion detection systems (NIDS) affected by concept drift. There may be cases in which a real attacker -- unable to access and hence unaware ...

research-article
Towards Robust, Explainable, and Privacy-Friendly Sybil Detection

Online Social Networks (OSNs) are well-established tools for cooperation and the exchange of ideas between peers. The authenticity of peers cannot be verified easily but is derived from trust relations inside the OSN. With the help of sybil attacks, this can ...

research-article
Using LLM Embeddings with Similarity Search for Botnet TLS Certificate Detection

Modern botnets leverage TLS encryption to mask C&C server communications. TLS certificates used by botnets could exhibit subtle characteristics that facilitate detection. In this paper we investigate whether text features from TLS certificates can be ...
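
A minimal version of this kind of pipeline is sketched below, assuming a generic sentence-embedding model; the model name, similarity threshold, and certificate strings are illustrative, and a real system would extract text features from observed TLS certificates at scale.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy certificate text features (subject/issuer strings).
known_botnet = ["CN=localhost, O=Internet Widgits Pty Ltd, C=AU",
                "CN=server, O=Default Company Ltd"]
observed = "CN=localhost, O=Internet Widgits Pty Ltd, C=US"

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any embedding model works
emb_known = model.encode(known_botnet, normalize_embeddings=True)
emb_obs = model.encode([observed], normalize_embeddings=True)[0]

# Cosine-similarity search: flag the certificate if it is close to a known botnet cert.
sims = emb_known @ emb_obs
print("max similarity:", sims.max(),
      "-> suspicious" if sims.max() > 0.9 else "-> benign")
```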

research-article
Open Access
Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models

Web Vulnerability Assessment and Penetration Testing (Web VAPT) is a comprehensive cybersecurity process that uncovers a range of vulnerabilities which, if exploited, could compromise the integrity of web applications. In a VAPT, it is common to perform ...

research-article
Open Access
Music to My Ears: Turning GPU Sounds into Intellectual Property Gold

In this paper, we introduce an acoustic side-channel attack that extracts crucial information from Deep Neural Networks (DNNs) operating on GPUs. Utilizing a Micro-Electro-Mechanical Systems (MEMS) microphone with an extensive frequency range, we ...

Contributors: University of Cagliari, DeepMind Technologies Limited

Acceptance Rates

Overall acceptance rate: 94 of 231 submissions (41%)

Year       Submitted  Accepted  Rate
AISec '18  32         9         28%
AISec '17  36         11        31%
AISec '16  38         12        32%
AISec '15  25         11        44%
AISec '14  24         12        50%
AISec '13  17         10        59%
AISec '12  24         10        42%
AISec '10  15         10        67%
AISec '08  20         9         45%
Overall    231        94        41%