
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses


Abstract:

As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance. The absence of trustworthy human supervision over the data collection process exposes organizations to security vulnerabilities; training data can be manipulated to control and degrade the downstream behaviors of learned models. The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in this space.
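
The core threat the survey studies can be illustrated with a minimal sketch of one canonical exploit, a trigger-based (backdoor) poisoning attack: a small patch is stamped onto a fraction of training images and those samples are relabeled with an attacker-chosen class, so a model trained on the tampered set tends to predict that class whenever the patch appears at test time. The function below is an illustrative assumption-laden sketch, not code or a method from the paper; the array shapes, names, and parameters are hypothetical.

    import numpy as np

    def poison_with_backdoor(images, labels, target_class, poison_fraction=0.05, seed=0):
        # Hypothetical illustration of backdoor data poisoning (not from the paper).
        # Assumes `images` is a float array of shape (N, H, W, C) scaled to [0, 1]
        # and `labels` is an int array of shape (N,).
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(poison_fraction * len(images))
        idx = rng.choice(len(images), size=n_poison, replace=False)

        # Stamp a fixed 3x3 white patch in the bottom-right corner as the trigger.
        images[idx, -3:, -3:, :] = 1.0
        # Relabel the triggered samples so training associates the patch with
        # the attacker's target class.
        labels[idx] = target_class
        return images, labels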
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 45, Issue: 2, 01 February 2023)
Page(s): 1563 - 1580
Date of Publication: 25 March 2022

PubMed ID: 35333711
