DOI: 10.1145/3411501
PPMLP'20: Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice
ACM 2020 Proceeding
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, 9 November 2020
ISBN:
978-1-4503-8088-1
Published:
09 November 2020
Abstract

It is our great pleasure to welcome you to PPMLP 2020: Workshop on Privacy-Preserving Machine Learning in Practice. Protecting the privacy of data held by multiple owners while using that data for joint model building and joint data analysis has become an increasingly practical problem, now that GDPR and other national laws have come into force. Academic researchers from different areas have proposed multiple ideas to address these challenges from different perspectives. Researchers and engineers in industry have also implemented various privacy and security enhancements in internal AI systems. However, we need more opportunities to connect researchers from different backgrounds and domains, so they can exchange practical problem formulations and research advances across disciplines. PPMLP provides a forum for machine learning and security researchers and industrial practitioners to jointly review recent academic progress on privacy-preserving machine learning and data analysis techniques, as well as valuable lessons learned from real-world applications.

SESSION: Keynote Talks
keynote
Introduction to Secure Collaborative Intelligence (SCI) Lab

With the rapid development of technology, user privacy and data security have drawn much attention in recent years. On the one hand, how to protect user privacy while making use of customers' data is a challenging task. On the other hand, data silos ...

keynote
Engineering Privacy-Preserving Machine Learning Protocols

Privacy-preserving machine learning (PPML) protocols allow parties to privately evaluate or even train machine learning (ML) models on sensitive data while simultaneously protecting the data and the model. So far, most of these protocols were built and ...

keynote
MC2: A Secure Collaborative Computation Platform

Multiple organizations often wish to aggregate their sensitive data and learn from it, but they cannot do so because they cannot share their data. For example, banks wish to train models jointly over their aggregate transaction data to detect money ...

keynote
Zero-Knowledge Proofs for Machine Learning

Machine learning has become increasingly prominent and is widely used in various applications in practice. Despite its great success, the integrity and accuracy of machine learning predictions are a rising concern. The reproducibility of machine learning ...
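
As a self-contained illustration of the zero-knowledge idea itself (not of the ML-specific protocols covered in the talk), the sketch below gives a Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir heuristic; the group is toy-sized and all parameters are illustrative.

    # Schnorr proof of knowledge of x with y = g^x mod p (Fiat-Shamir variant).
    # Toy 11-bit safe-prime group; real deployments use >= 2048-bit groups.
    import hashlib, secrets

    p, g = 2039, 2          # p = 2q + 1 is a safe prime; g generates the order-q subgroup
    q = (p - 1) // 2        # q = 1019

    def challenge(y, t):
        h = hashlib.sha256(f"{g},{y},{t}".encode()).digest()
        return int.from_bytes(h, "big") % q

    def prove(x):
        y = pow(g, x, p)
        r = secrets.randbelow(q)
        t = pow(g, r, p)            # commitment
        c = challenge(y, t)         # Fiat-Shamir challenge
        s = (r + c * x) % q         # response; reveals nothing about x by itself
        return y, t, s

    def verify(y, t, s):
        return pow(g, s, p) == (t * pow(y, challenge(y, t), p)) % p

    x = secrets.randbelow(q)        # prover's secret witness
    assert verify(*prove(x))        # verifier is convinced without learning x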

SESSION: Session 1: Full Paper Presentations
research-article
CryptoSPN: Expanding PPML beyond Neural Networks

The ubiquitous deployment of machine learning (ML) technologies has certainly improved many applications but also raised challenging privacy concerns, as sensitive client data is usually processed remotely at the discretion of a service provider. ...
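
CryptoSPN evaluates sum-product networks (SPNs) under secure computation. As a plaintext point of reference only (the structure and weights below are invented for illustration), a minimal SPN evaluation in the log domain looks like this:

    # Tiny sum-product network (SPN) evaluated in the log domain -- SPNs are
    # the model class CryptoSPN runs under MPC. Two-component mixture over
    # two binary variables; structure and weights are illustrative.
    import math

    def leaf(p):                         # Bernoulli leaf likelihood for the observed value
        return math.log(p)

    def product(children):               # product node: multiply likelihoods (add logs)
        return sum(children)

    def weighted_sum(weights, children): # sum node: mixture of children
        return math.log(sum(w * math.exp(c) for w, c in zip(weights, children)))

    # P(X1 = 1, X2 = 0) under a mixture of two fully factorized components
    logp = weighted_sum([0.6, 0.4],
                        [product([leaf(0.8), leaf(0.3)]),
                         product([leaf(0.2), leaf(0.9)])])
    print(math.exp(logp))                # 0.6*0.8*0.3 + 0.4*0.2*0.9 = 0.216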

research-article
Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy

Deployment of deep learning in different fields and industries is growing rapidly due to its performance, which relies on the availability of data and compute. Data is often crowd-sourced and contains sensitive information about its contributors, ...
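
For orientation, the mechanism whose utility and fairness trade-offs such work studies is typically DP-SGD: clip each per-example gradient, then add calibrated Gaussian noise. A minimal plaintext sketch, with invented data, model, and hyperparameters:

    # One DP-SGD step: clip each per-example gradient, then add Gaussian noise.
    # Logistic model, synthetic data, and hyperparameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 5))
    y = rng.integers(0, 2, size=32)
    w = np.zeros(5)
    clip_norm, noise_mult, lr = 1.0, 1.1, 0.1

    def per_example_grad(w, x, yi):
        p = 1.0 / (1.0 + np.exp(-x @ w))        # logistic prediction
        return (p - yi) * x                     # gradient of the log-loss

    grads = np.stack([per_example_grad(w, X[i], y[i]) for i in range(len(X))])
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / np.maximum(1.0, norms / clip_norm)    # bound each example's influence
    noisy_sum = clipped.sum(0) + rng.normal(scale=noise_mult * clip_norm, size=w.shape)
    w -= lr * noisy_sum / len(X)                            # noisy average update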

research-article
Open Access
Secure Collaborative Training and Inference for XGBoost

In recent years, gradient boosted decision tree learning has proven to be an effective method of training robust models. Moreover, collaborative learning among multiple parties has the potential to greatly benefit all parties involved, but organizations ...
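
As a plain point of reference (with none of the enclave or cryptographic protection the paper adds), one round of gradient boosting for squared error fits each new tree to the current residuals, as sketched below:

    # Plain gradient-boosting rounds for squared error: each tree fits the
    # residuals of the current ensemble. This only shows the learning loop
    # that secure systems run protected; data and depths are illustrative.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X[:, 0] ** 2 + rng.normal(0, 0.1, size=200)

    pred, lr, trees = np.zeros(200), 0.3, []
    for _ in range(20):
        residual = y - pred                           # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
        trees.append(tree)
        pred += lr * tree.predict(X)
    print(f"training MSE after boosting: {np.mean((y - pred) ** 2):.4f}")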

research-article
Open Access
Delphi: A Cryptographic Inference System for Neural Networks

Many companies provide neural network prediction services to users for a wide range of applications. However, current prediction systems compromise one party's privacy: either the user has to send sensitive inputs to the service provider for ...
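
One building block that systems of this kind exploit is that additive secret shares commute with linear layers. The toy sketch below shows only that algebraic fact over a small prime field; Delphi's actual protocol also hides the weights via preprocessing and evaluates nonlinearities with garbled circuits:

    # Additive secret sharing commutes with linear layers: shares of x yield
    # shares of W @ x. Field size and dimensions are toy values.
    import numpy as np

    P = 2**31 - 1                                  # Mersenne prime modulus
    rng = np.random.default_rng(0)

    def share(v):
        r = rng.integers(0, P, size=v.shape, dtype=np.int64)
        return r, (v - r) % P                      # share0 + share1 = v (mod P)

    W = rng.integers(0, 100, size=(2, 3), dtype=np.int64)   # linear-layer weights
    x = rng.integers(0, 100, size=3, dtype=np.int64)        # private input vector

    x0, x1 = share(x)                              # each party holds one share of x
    y0, y1 = (W @ x0) % P, (W @ x1) % P            # evaluate the layer share-wise
    assert np.array_equal((y0 + y1) % P, (W @ x) % P)   # recombining gives W @ x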

research-article
Information Leakage by Model Weights on Federated Learning

Federated learning aggregates data from multiple sources while protecting privacy, which makes it possible to train efficient models in real-world settings. However, although federated learning uses encrypted secure aggregation, its decentralized nature makes ...
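
The aggregation step whose outputs the paper analyzes is, in the common FedAvg formulation, a weighted average of client updates. A minimal plaintext sketch with invented values:

    # Minimal FedAvg: average client parameters weighted by local dataset size.
    # Plaintext for clarity; the paper studies what these aggregates leak.
    import numpy as np

    def fed_avg(client_weights, client_sizes):
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]  # two clients' parameters
    print(fed_avg(clients, [100, 300]))                     # -> [2.5 3.5]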

research-article
Adversarial Detection on Graph Structured Data

Graph Neural Networks (GNNs) have achieved tremendous progress on perceptual tasks in recent years, such as node classification, graph classification, link prediction, etc. However, recent studies show that deep learning models of GNNs are incredibly ...
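
For context, a typical target model is a graph convolutional network; a single plaintext GCN layer (Kipf-Welling normalization), with an invented toy graph, is sketched below:

    # One plaintext GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
    # Graph, features, and weights are illustrative.
    import numpy as np

    def gcn_layer(A, H, W):
        A_hat = A + np.eye(len(A))                # add self-loops
        d = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = d[:, None] * A_hat * d[None, :]  # symmetric normalization
        return np.maximum(A_norm @ H @ W, 0.0)

    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
    H = np.eye(3)                                                 # one-hot features
    W = np.random.default_rng(0).normal(size=(3, 2))
    print(gcn_layer(A, H, W))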

SESSION: Session 3: Spotlight Presentations
short-paper
MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference

We present an extended abstract of MP2ML, a machine learning framework which integrates Intel nGraph-HE, a homomorphic encryption (HE) framework, and the secure two-party computation framework ABY, to enable data scientists to perform private inference ...
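
The property such HE integrations rely on for linear operations is an additive homomorphism. The sketch below illustrates it with a toy Paillier cryptosystem; this is only a stand-in for exposition, since nGraph-HE itself builds on lattice-based schemes such as CKKS:

    # Toy Paillier cryptosystem: multiplying ciphertexts adds plaintexts.
    # Primes are toy-sized; real keys are 2048+ bits.
    import math, secrets

    p, q = 104723, 104729
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)             # decryption constant for generator g = n + 1

    def enc(m):
        r = secrets.randbelow(n - 1) + 1
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    c = (enc(20) * enc(22)) % n2     # ciphertext product ...
    assert dec(c) == 42              # ... decrypts to the plaintext sum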

short-paper
Faster Secure Multiparty Computation of Adaptive Gradient Descent

Most secure multi-party computation (MPC) machine learning methods can only afford simple gradient descent (sGD) optimizers, and are unable to benefit from the recent progress on adaptive GD optimizers (e.g., Adagrad, Adam and their variants), ...
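
For reference, a plaintext Adam update is below; the per-coordinate square root and division are precisely the steps that are expensive to evaluate inside MPC:

    # Plaintext Adam update (reference only; hyperparameters are the usual
    # defaults). The sqrt and division below are the costly MPC operations.
    import numpy as np

    def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * g              # first-moment estimate
        v = b2 * v + (1 - b2) * g * g          # second-moment estimate
        m_hat = m / (1 - b1 ** t)              # bias corrections
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # per-coordinate sqrt + division
        return w, m, v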

short-paper
SVM Learning for Default Prediction of Credit Card under Differential Privacy

Currently, financial institutions make extensive use of sensitive personal information in machine learning, which poses significant privacy risks to customers. As an essential standard of privacy, differential privacy is often applied to machine ...
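
The basic differential-privacy primitive behind such designs is noise calibrated to sensitivity and epsilon; a minimal Laplace-mechanism sketch with invented values:

    # Laplace mechanism: release a statistic with noise scaled to
    # sensitivity/epsilon. All concrete values are illustrative.
    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
        return true_value + rng.laplace(scale=sensitivity / epsilon)

    defaults = 1283     # hypothetical count of defaulted credit cards
    # a count over one person's records has sensitivity 1
    print(laplace_mechanism(defaults, sensitivity=1.0, epsilon=0.5))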

short-paper
A Systematic Comparison of Encrypted Machine Learning Solutions for Image Classification

This work provides a comprehensive review of existing frameworks based on secure computing techniques in the context of private image classification. The in-depth analysis of these approaches is followed by careful examination of their performance costs,...

short-paper
Privacy-Preserving in Defending against Membership Inference Attacks

A membership inference attack aims to infer whether a given data sample was part of the target classifier's training dataset. The ability of an adversary to ascertain the presence of an individual constitutes an obvious privacy threat ...
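
A common baseline formulation of the attack flags a sample as a training member when the model is unusually confident on it. A minimal sketch (synthetic data, illustrative model and threshold):

    # Confidence-thresholding membership inference baseline: guess "member"
    # when the model's top predicted probability exceeds a threshold.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 10))
    y = (X[:, 0] > 0).astype(int)
    X_in, y_in, X_out = X[:200], y[:200], X[200:]     # members vs non-members

    model = LogisticRegression().fit(X_in, y_in)

    def is_member(model, x, tau=0.9):
        return model.predict_proba(x.reshape(1, -1)).max() >= tau

    tpr = np.mean([is_member(model, x) for x in X_in])    # members flagged
    fpr = np.mean([is_member(model, x) for x in X_out])   # non-members flagged
    print(f"members flagged: {tpr:.2f}, non-members flagged: {fpr:.2f}")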

short-paper
Open Access
TinyGarble2: Smart, Efficient, and Scalable Yao's Garbled Circuit

We present TinyGarble2 -- a C++ framework for privacy-preserving computation through Yao's Garbled Circuit (GC) protocol in both the honest-but-curious and the malicious security models. TinyGarble2 provides a rich library with arithmetic and logic ...
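
To fix ideas, a toy garbled AND gate in the spirit of Yao's protocol is sketched below; real garbling schemes (TinyGarble2 included) add optimizations such as point-and-permute, free-XOR, AES-based hashing, and oblivious transfer for input labels, all omitted here:

    # Toy garbled AND gate: the garbler encrypts the output-wire label for
    # each input combination; the evaluator, holding one label per input
    # wire, can decrypt exactly one row. Illustration only.
    import os, hashlib, itertools, random

    def H(ka, kb):
        return hashlib.sha256(ka + kb).digest()          # 32-byte row key

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    a_lab = (os.urandom(32), os.urandom(32))             # labels for wire a = 0/1
    b_lab = (os.urandom(32), os.urandom(32))             # labels for wire b = 0/1
    c_lab = (os.urandom(16), os.urandom(16))             # output-wire labels
    TAG = bytes(16)                                      # validity tag

    table = [xor(H(a_lab[va], b_lab[vb]), c_lab[va & vb] + TAG)
             for va, vb in itertools.product((0, 1), repeat=2)]
    random.shuffle(table)                                # hide the row order

    def evaluate(la, lb):
        for row in table:
            plain = xor(H(la, lb), row)
            if plain[16:] == TAG:                        # only one row decrypts validly
                return plain[:16]
        raise ValueError("no valid row")

    assert evaluate(a_lab[1], b_lab[1]) == c_lab[1]      # 1 AND 1 -> label for 1
    assert evaluate(a_lab[1], b_lab[0]) == c_lab[0]      # 1 AND 0 -> label for 0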

Contributors
  • Ant Group
  • University of California, Berkeley
  • Stanford University
  • Texas A&M University
  • Zhejiang University