17 th ACM Workshop on
Artificial Intelligence and Security
October 18th, 2024 — Salt Lake City Grand Ballroom, Salon E
co-located with the 31st ACM Conference on Computer and Communications Security
Photo: Wikipedia (License: CC BY 2.0 )

Keynotes

Title: On the Security and Privacy Risks of Generative AI Systems

Alina Oprea, Professor @ Northeastern University

Alina Oprea is a Professor at Northeastern University in the Khoury College of Computer Sciences. She joined Northeastern University in Fall 2016 after spending 9 years as a research scientist at RSA Laboratories. Her research interests in cyber security are broad, with a focus on AI security and privacy, ML-based threat detection, cloud security, and applied cryptography. She is the recipient of the Technology Review TR35 award for her research in cloud security in 2011, the Google Security and Privacy Award in 2019, the Ruth and Joel Spira Award for Excellence in Teaching in 2020, and the CMU Cylab Distinguished Alumni Award in 2024. Alina served as Program Committee co-chair of the flagship cyber security conference, the IEEE Security and Privacy Symposium in 2020 and 2021. She also served as Associate Editor of the ACM Transactions of Privacy and Security( TOPS) journal and the IEEE Security and Privacy Magazine. Her work was recognized with Best Paper Awards at NDSS 2005, AISEC in 2017, and GameSec in 2019..

In the last few years, we have seen tremendous progress on the capabilities of generative AI and large language models (LLMs). As model sizes have reached hundreds of billions of parameters, training models from scratch has become infeasible. Consequently, system developers typically leverage pre-trained LLMs, and later fine-tune them or augment them with external content to specialize them to new tasks. In this talk, we pose the question if these complex LLM deployment pipelines introduce new security and privacy risks for users. We discuss a new privacy attack on fine-tuned LLMs and a new poisoning attack for LLMs utilizing Retriever Augmented Generation (RAG). We also discuss the challenges of developing mitigations and highlight several open problems in securing AI systems.

Title: Challenges and Threats in Generative AI: Misuse and Exploits

Lea Schönherr, Tenure-track Faculty @ CISPA Helmholtz Center for Information Security

Lea Schönherr is a tenure-track faculty at CISPA Helmholtz Center for Information Security since 2022. Her research focuses on information security, particularly adversarial machine learning, trustworthy generative AI, and ML security applications. She is especially interested in language as an interface to machine learning models, including their cognitive representations and code generation with LLMs. She has published several papers on threat detection and defense of speech recognition systems, generative models, and preventing the misuse of generative AI. She obtained her PhD from Ruhr-Universität Bochum, Germany, in 2021 and is a recipient of two fellowships from UbiCrypt (DFG Graduate School) and Casa (DFG Cluster of Excellence).


Generative AI (genAI) is becoming more integrated into our daily lives, raising questions about potential threats within genAI systems and their outputs. In this talk, we will examine the resulting challenges and security threats associated with generative AI. In the first part of the talk, we look at threat scenarios in which generative models are utilized to produce content that is impossible to distinguish from human-generated content. This fake content is often used for fraudulent and manipulative purposes. As generative models evolve, the attacks are easier to automate and require less expertise, while detecting such activities will become increasingly difficult. This talk will provide an overview of our current challenges in detecting fake media in human and machine interactions. The second part will cover exploits of LLMs to disrupt alignment or to steal sensitive information. Existing attacks show that content filters of LLMs can be easily bypassed with specific inputs and that private information can be leaked. Also, established methods in the adversarial machine learning field cannot be easily transferred to generative models. From an alternative perspective, we show that an alternative way to protect intellectual property is by obfuscating prompts. We demonstrate that with only some overhead, we can achieve similar utility while protecting confidential data. The final part of the presentation will discuss the use of generative models in security applications. This includes benchmarking and fixing vulnerable code and understanding these models' capabilities by investigating their code deobfuscation abilities.

Slides

Title: A threat-centric look at Privacy-Preserving Machine Learning

Giovanni Cherubin, Senior Researcher @ Microsoft

Giovanni Cherubin is a Senior Researcher at Microsoft in Cambridge, working with the Microsoft Security Response Centre (MSRC). Before joining Microsoft, Giovanni held research positions at the Alan Turing Institute and EPFL. He obtained a PhD in Machine Learning and Cyber Security from Royal Holloway University of London. His research focuses on the privacy and security properties of machine learning models, as well as the theoretical and empirical study of their information leakage. Additionally, Giovanni works on distribution-free uncertainty estimation for machine learning, such as Conformal Prediction. He has received multiple awards for his contributions to security, privacy, and distribution-free inference.

Privacy-preserving Machine Learning (PPML) has the rare privilege among security research fields of having defences that are both practical and theoretically robust, thanks to over 20 years of progress. However, deployment of these defenses often sparks heated debates over how to tune their parameters. This is partially because these defences are typically designed to counter "any" attack, which can lead to overlooking the specific threats relevant to a particular deployment. This talk will cover the key advancements in PPML research through the principle of "first consider the threats, then pick a defence." By deliberately defining which attacks we consider to be a threat (and which ones we don't) before deploying a model, we can more effectively select concrete parameters for our defences, and better communicate the extent and limitations of the protection we've achieved.

Programme

The following times are on MDT (Mountain Daylight Time) UTC/GMT -6 hours.

09:00–9:15 Opening and Welcome
9:15–10:00 Keynote 1
On the Security and Privacy Risks of Generative AI Systems
Alina Oprea , Professor @ Northeastern University
10:00-10:30 Spotlights
Efficient Model Extraction via Boundary Sampling
Authors : Maor Biton Dor (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)
Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks
Authors : Dario Pasquini (George Mason University), Martin Strohmeier (Cyber-Defence Campus, armasuisse Science + Technology), Carmela Troncoso (EPFL)
Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients
Authors : Nadav Gat (Tel Aviv University), Mahmood Sharif (Tel Aviv University)
Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models
Authors : Alberto Castagnaro (Delft University of Technology, The Netherlands), Mauro Conti (University of Padua, Italy), Luca Pajola (University of Padua, Italy)
10:30–11:00 Coffee break
11:00–12:00 Poster session 1 - Skylight room (2nd floor)
12:00–13:30 Lunch
13:30–14:15 Keynote 2
Challenges and Threats in Generative AI: Misuse and Exploits
Lea Schönherr , Tenure-track Faculty @ CISPA Helmholtz Center for Information Security
14:15–15:00 Keynote 3
A threat-centric look at Privacy-Preserving Machine Learning
Giovanni Cherubin , Senior Researcher @ Microsoft
15:00–15:30 Coffee break
15:30–16:30 Poster session 2 - Grand Ballroom E
16:30–16:45 Closing remarks

Best Paper Award

As in the previous editions of this workshop, we would honor outstanding contributions. To this end, we will award the best paper, selected by the reviewers among all the submitted papers.

The 2024 AISec Best Paper Award was given to:
Maor Biton Dor (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University), for the paper Efficient Model Extraction via Boundary Sampling .

Accepted Papers

Machine Learning Security (Poster session 1)
Semantic Stealth: Crafting Covert Adversarial Patches for Sentiment Classifiers Using Large Language Models
Authors : Camila Roa (Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN), Maria Mahbub (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN), Sudarshan Srinivasan (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN), Edmon Begoli (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN), Amir Sadovnik (Center for Artificial Intelligence Security Research, Oak Ridge National Laboratory, Oak Ridge, TN)
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness
Authors : Jiankai Jin (The University of Melbourne), Olga Ohrimenko (The University of Melbourne), Benjamin I. P. Rubinstein (The University of Melbourne)
On the Robustness of Graph Reduction Against GNN Backdoor
Authors : Yuxuan Zhu (Rensselaer Polytechnic Institute), Michael Mandulak (Rensselaer Polytechnic Institute), Kerui Wu (Rensselaer Polytechnic Institute), George Slota (Rensselaer Polytechnic Institute), Yuseok Jeon (Ulsan National Institute of Science and Technology), Ka-Ho Chow (The University of Hong Kong), Lei Yu (Rensselaer Polytechnic Institute)
Adversarially Robust Anti-Backdoor Learning
Authors : Qi Zhao (Karlsruhe Institute of Technology (KIT)), Christian Wressnegger (Karlsruhe Institute of Technology (KIT))
Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks
Authors : Dario Pasquini (George Mason University), Martin Strohmeier (Cyber-Defence Campus, armasuisse Science + Technology), Carmela Troncoso (EPFL)
Video
Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
Authors : Leo Hyun Park (Yonsei University), Jaeuk Kim (Yonsei University), Myung Gyo Oh (Yonsei University), Jaewoo Park (Yonsei University), Taekyoung Kwon (Yonsei University)
The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations
Authors : Zebin Yun (Tel Aviv University), Achi-Or Weingarten (Weizmann Institute of Science), Eyal Ronen (Tel Aviv University), Mahmood Sharif (Tel Aviv University)
Poster
ELMs Under Siege: A Study on Backdoor Attacks on Extreme Learning Machines
Authors : Behrad Tajalli (Radboud University), Stefanos Koffas (TU Delft), Gorka Abad (Radboud University & Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA)), Stjepan Picek (Radboud University)
EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody
Authors : Coen Schoof (Radboud University), Stefanos Koffas (Delft University of Technology), Mauro Conti (University of Padua), Stjepan Picek (Radboud University)
Privacy-Preserving Machine Learning (Poster session 2)
Efficient Model Extraction via Boundary Sampling
Authors : Maor Biton Dor (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)
Feature Selection from Differentially Private Correlations
Authors : Ryan Swope (Booz Allen Hamilton), Amol Khanna (Booz Allen Hamilton), Philip Doldo (Booz Allen Hamilton), Saptarshi Roy (University of Michigan, Ann Arbor), Edward Raff (Booz Allen Hamiltion)
Poster
It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss
Authors : Meenatchi Sundaram Muthu Selva Annamalai ([email protected])
Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients
Authors : Nadav Gat (Tel Aviv University), Mahmood Sharif (Tel Aviv University)
Poster
System Security (Poster session 2)
When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS
Authors : Giovanni Apruzzese (University of Liechtenstein), Aurore Fass (CISPA Helmholtz Center for Information Security), Fabio Pierazzi (King's College London)
Towards Robust, Explainable, and Privacy-Friendly Sybil Detection
Authors : Christian Bungartz (University of Bonn), Dr. Felix Boes (University of Bonn), Prof. Dr. Michael Meier (University of Bonn, Fraunhofer FKIE), Dr. Marc Ohm (University of Bonn, Fraunhofer FKIE)
Using LLM Embeddings with Similarity Search for Botnet TLS Certificate Detection
Authors : Kumar Shashwat (University of South Florida), Francis Hahn (University of South Florida), Stuart Millar (Rapid7 LLC), Xinming Ou (University of South Florida)
Poster
Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models
Authors : Alberto Castagnaro (Delft University of Technology, The Netherlands), Mauro Conti (University of Padua, Italy), Luca Pajola (University of Padua, Italy)
Music to My Ears: Turning GPU Sounds into Intellectual Property Gold
Authors : Sayed Erfan Arefin (Texas Tech University), Abdul Serwadda (Texas Tech University)
Poster Video

Call for Papers

Important Dates

  • Paper submission deadline: June 21st July 7th, 2024, 11:59 PM (all deadlines are AoE, UTC-12)
  • Reviews due: July 19th July 31st, 2024
  • Review Released and Acceptance notification: August 2nd August 6th, 2024
  • Camera ready due: August 22nd September 20th, 2024
  • Workshop day: October 18th, 2024

Overview

Recent years have seen a dramatic increase in applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems. The analytic tools and intelligent behavior provided by these techniques make AI and ML increasingly important for autonomous real-time analysis and decision making in domains with a wealth of data or that require quick reactions to constantly changing situations. The use of learning methods in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of “deep learning” techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. In addition, data mining and machine learning techniques create a wealth of privacy issues, due to the abundance and accessibility of data. The AISec workshop provides a venue for presenting and discussing new developments in the intersection of security and privacy with AI and ML.

Topics of Interest

Topics of interest include (but are not limited to):

Theoretical topics related to security

  • Adversarial learning
  • Security of deep learning systems
  • Robust statistics
  • Learning in games
  • Economics of security
  • Differential privacy

Security applications

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification and analysis
  • Data anonymization/de-anonymization
  • Security in social networks
  • Big data analytics for security
  • User authentication

Security-related AI problems

  • Distributed inference and decision making for security
  • Secure multiparty computation and cryptographic approaches
  • Model confidentiality
  • Privacy-preserving data mining
  • Adaptive side-channel attacks
  • Design and analysis of CAPTCHAs
  • AI approaches to trust and reputation
  • Vulnerability testing through intelligent probing (e.g. fuzzing)
  • Content-driven security policy management & access control
  • Techniques and methods for generating training and test sets
  • Anomalous behavior detection (e.g. for the purpose of fraud detection)
  • AI Misuse (e.g., Large Language Models for automated hacking, misinformation, deepfakes)
  • Safety and ethical issues of Generative AI

Submission Guidelines

We invite the following types of papers:

  • Original research papers on any topic in the intersection of AI or machine learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI or machine learning to security or privacy. Submitted papers of this type may not substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop proceedings.
  • Systematization-of-knowledge papers , which should distill the AI or machine learning contributions of a previously-published series of security papers.

The authors can specify the paper type in the submission form. Paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers should be in LaTeX and we recommend using the ACM format. This format is required for the camera-ready version. Please follow the main CCS formatting instructions (except with page limits as described above). In particular, we recommend using the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template . Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so the paper should be intelligible without them. Submissions must be in English and properly anonymized.

Submission Site

Submission link: https://aisec2024.hotcrp.com .

All accepted submissions will be presented at the workshop as posters. Accepted papers will be selected for presentation as spotlights based on their review score and novelty. Nonetheless, all accepted papers should be considered as having equal importance and will be included in the ACM workshop proceedings.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

For any questions, please contact one the workshop organizers at [email protected]

Committee

Workshop Chairs

Steering Committee

Program Committee

Top Reviewers
  • Aideen Fay (Microsoft)
  • Andrew Cullen (University of Melbourne)
  • Andy Applebaum (Apple)
  • Angelo Sotgiu (University of Cagliari)
  • Balachandra Shanabhag (Cohesity)
  • Bhavna Soman (Amazon Web Services)
  • Boyang Zhang (CISPA Helmholtz Center for Information Security)
  • Brad Miller (X Corp)
  • Christian Wressnegger (Karlsruhe Institute of Technology)
  • Diego Soi (University of Cagliari)
  • Edward Raff (Booz Allen Hamiltion)
  • Erwin Quiring (Ruhr University Bochum and ICSI)
  • Eva Giboulot (Linkmedia - INRIA Rennes)
  • Fabio Brau (Scuola Superiore Sant'Anna)
  • Giorgio Piras (University of Cagliari)
  • Giulio Rossolini (Scuola Superiore Sant'Anna)
  • Ilias Tsingenopoulos (DistriNet, KU Leuven)
  • James Hu (University of Arizona)
  • Joel Frank (Meta)
  • John Holodnak (MIT Lincoln Laboratory)
  • Jonas Möller (TU Berlin)
  • Jose Maria de Fuentes (Universidad Carlos III de Madrid)
  • Kathrin Grosse (EPFL)
  • Lea Schönherr (CISPA Helmholtz Center for Information Security)
  • Lorenzo Cazzaro (Università Ca' Foscari Venezia)
  • Luca Demetrio (University of Genoa)
  • Maria Rigaki (Czech Technical University in Prague)
  • Maximilian Noppel (Karlsruhe Institute of Technology)
  • Patrick Dwyer (Apple, Inc)
  • Sam Bretheim (Craigslist)
  • Scott Coull (Google)
  • Shae McFadden (King's College London & The Alan Turing Institute)
  • Theo Chow (King's College London)
  • Thorsten Eisenhofer (TU Berlin)
  • Tobias Lorenz (CISPA Helmholtz Center for Information Security)
  • Wenjun Zhu (Zhejiang University)
  • Xiaoyu Ji (Zhejiang University)
  • Xin Fan Guo (King's College London)
  • Xinyue Shen (CISPA Helmholtz Center for Information Security)
  • Yue Zhao (Institute of Information Engineering, Chinese Academy of Sciences)
  • Zied Ben Houidi (Huawei Technologies Co. Ltd.)
  • Ziqi Yang (Zhejiang University)
Reviewers
  • Abbas Yazdinejad (University of Guelph, Canada)
  • Achin (Ace) Kulshrestha (Google Inc.)
  • Alessandro Brighente (University of Padova)
  • Alessandro Erba (Karlsruhe Institute of Technology)
  • Alessandro Sanna (University Of Cagliari)
  • Ambrish Rawat (IBM Research)
  • Annalisa Appice (University of Bari Aldo Moro)
  • Anshuman Suri (University of Virginia)
  • Antonio Emanuele Cinà (University of Genoa)
  • Arjun Bhagoji (University of Chicago)
  • Arnav Garg (Microsoft)
  • Azqa Nadeem (University of Twente)
  • Bailey Kacsmar (University of Alberta)
  • Benjamin M. Ampel (Georgia State University)
  • Bobby Filar (Sublime Security)
  • Chawin Sitawarin (Meta)
  • Clarence Chio (UC Berkeley)
  • Daniel Gibert (University College Dublin, CeADAR)
  • Daniele Canavese (IRIT)
  • Daniele Friolo (Sapienza University of Rome)
  • Daniele Angioni (University of Cagliari)
  • David Pape (CISPA Helmholtz Center for Information Security)
  • Dongdong She (Hong Kong University of Science and Technology)
  • Dorjan Hitaj (Sapienza University of Rome)
  • Edoardo Debenedetti (ETH Zurich)
  • Fabio De Gaspari (Sapienza University of Rome)
  • Francesco Flammini (IDSIA USI-SUPSI)
  • Giorgio Severi (Northeastern University)
  • Giovanni Cherubin (Microsoft)
  • Giovanni Apruzzese (University of Liechtenstein)
  • Giulio Zizzo (IBM Research)
  • Giuseppina Andresini (University of Bari Aldo Moro)
  • Hamid Bostani (Radboud University, The Netherlands)
  • Hari Venugopalan (University of California, Davis)
  • Javier Carnerero Cano (IBM Research Europe/Imperial College London)
  • Jonas Ricker (Ruhr University Bochum)
  • Julien Piet (UC Berkeley)
  • Junhao Dong (Nanyang Technological University)
  • Kexin Pei (The University of Chicago)
  • Konrad Rieck (TU Berlin)
  • LE MERRER Erwan (Inria, France)
  • Lei Ma (The University of Tokyo / University of Alberta)
  • Leonardo Regano (University Of Cagliari)
  • Lorenzo Pisu (University Of Cagliari)
  • Luis Muñoz-González (Telefónica Research)
  • Luke Richards (University of Maryland, Baltimore County)
  • Markus Dürmuth (Leibniz University Hannover)
  • Marta Catillo (Università degli Studi del Sannio)
  • Matthew Jagielski (Google Research)
  • Maura Pintor (University of Cagliari)
  • Mauro Conti (University of Padua)
  • Melody Wolk (Apple)
  • Milenko Drinic (Microsoft Corporation)
  • Muhammad Zaid Hameed (IBM Research Europe, Ireland)
  • Ozan Özdenizci (Montanuniversität Leoben)
  • Pablo Moriano (Oak Ridge National Laboratory)
  • Pavel Laskov (University of Liechtenstein)
  • Pooria Madani (Ontario Tech University)
  • Pratyusa K. Manadhata (Meta)
  • Quan Le (CeADAR, University College Dublin)
  • SHRIKANT TANGADE (University of Padova, Italy & CHRIST University, India)
  • Sahar Abdelnabi (Microsoft)
  • Sanghyun Hong (Oregon State University)
  • Savino Dambra (Norton Research Group)
  • Shujiang Wu (F5. Inc)
  • Silvia Lucia Sanna (University of Cagliari)
  • Simon Oya (The University of British Columbia (UBC))
  • Simos Gerasimou (University of York. UK)
  • Sivanarayana Gaddam (Cohesity Inc)
  • Sizhe Chen (UC Berkeley)
  • Tianhao Wang (University of Virginia)
  • Vera Rimmer (KU Leuven)
  • Vikash Sehwag (Sony AI)
  • Vinod P. (University of Padua, Italy)
  • Wenxin Ding (University of Chicago)
  • Xiaofei Xie (Singapore Management University)
  • Yang Zhang (CISPA Helmholtz Center for Information Security)
  • Yash Vekaria (University of California, Davis)
  • Yufei Han (INRIA)
  • Zeliang Kan (King's College London)

Thanks for those who contacted us to help with the reviews!