A poisoning attack is a method that reduces the accuracy of a DNN by adding malicious training data during the DNN's training process.
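As a minimal illustration of the idea (a toy sketch, not any specific attack from the literature), the attacker can inject mislabeled examples into the training set without touching the model itself. The function name and toy labels below are hypothetical:

```python
import random

def poison_labels(labels, num_classes, fraction, seed=0):
    """Flip a fraction of training labels to random wrong classes.

    Toy sketch of data poisoning: the attacker only corrupts the
    training data; the training procedure itself is untouched.
    """
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(len(labels) * fraction)
    for i in rng.sample(range(len(labels)), n_flip):
        wrong = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(wrong)
    return poisoned

clean = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
dirty = poison_labels(clean, num_classes=3, fraction=0.3)
changed = sum(a != b for a, b in zip(clean, dirty))
print(changed)  # 3 of 10 labels flipped
```

Training on `dirty` instead of `clean` would then degrade the model's overall accuracy, which is the attacker's goal.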
[9] proposed systematic poisoning attacks in healthcare, demonstrating a poisoning attack on a healthcare dataset by extending the domain ...
This paper proposes a selective poisoning attack that reduces the accuracy of only a chosen class in the model by training on malicious data corresponding to ...
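The "selective" idea can be sketched as follows. This is a hedged toy illustration, not the paper's actual algorithm: it simply mislabels a fraction of one chosen class while leaving every other class intact, so that only that class's accuracy degrades after training. All names (`selective_poison`, the toy label list) are illustrative:

```python
import random

def selective_poison(labels, target_class, wrong_class, fraction, seed=0):
    """Mislabel a fraction of the chosen class only.

    Other classes are untouched, so a model trained on the result
    should lose accuracy mainly on `target_class`.
    """
    rng = random.Random(seed)
    target_idx = [i for i, y in enumerate(labels) if y == target_class]
    poisoned = list(labels)
    for i in rng.sample(target_idx, int(len(target_idx) * fraction)):
        poisoned[i] = wrong_class
    return poisoned

labels = [0, 1, 2, 1, 0, 1, 2, 1]          # class 1 appears four times
poisoned = selective_poison(labels, target_class=1, wrong_class=2, fraction=0.5)
print(sum(a != b for a, b in zip(labels, poisoned)))  # 2 labels changed, all from class 1
```

The contrast with indiscriminate poisoning is only in which indices are eligible for corruption.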
Adversarial examples and poisoning attacks differ in the data they target: an adversarial example modulates the test data, while a poisoning attack ...
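The test-time side of this distinction can be shown with a toy linear classifier (a sketch under simplifying assumptions, not a real attack on a DNN): a small perturbation against the sign of the model's weights flips the prediction at inference time, with no training data touched at all.

```python
# Toy linear classifier: predict 1 if w.x + b > 0. Weights are made up.
w, b = [2.0, -1.0], 0.0

def predict(x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

x = [0.6, 0.5]   # clean test input: score = 2*0.6 - 0.5 = 0.7 -> class 1
eps = 0.5
# An adversarial example perturbs the *test* input against the gradient of
# the score, which for a linear model is simply sign(w).
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
print(predict(x), predict(x_adv))  # 1 0
```

A poisoning attack, by contrast, would leave `x` alone and instead corrupt the data used to fit `w` and `b` in the first place.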
Jul 1, 2019. Introduces and describes poisoning attacks. 2.2. Exploratory Attack. An exploratory attack exploits the misclassification ...
However, a poisoning attack is a serious threat to a DNN's security: it reduces the accuracy of a DNN by adding malicious training data during ...
Zhu et al. [69] proposed a clean-label transferable poisoning attack in which the poison images are designed to form a convex polytope around the targeted image in feature ...
Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. Hyun Kwon, Hyunsoo Yoon, Ki-Woong Park.
This work first examines the possibility of applying traditional gradient-based methods to generate poisoned data against NNs by leveraging the gradient of ...
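A heavily simplified sketch of the gradient-based idea follows. This is not the method of the cited work: it uses a 1-D logistic model and finite differences in place of the implicit/analytic gradients real attacks use, and every name (`train`, `clean_loss`, the toy dataset) is illustrative. The attacker ascends the gradient of the trained model's loss on clean data with respect to a single poison point's feature:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, steps=200, lr=0.5):
    """Fit w for p(y=1|x) = sigmoid(w*x) by plain gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = sum((sigmoid(w * x) - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def clean_loss(w, data):
    """Average cross-entropy of the model on clean data."""
    eps = 1e-12
    return -sum(y * math.log(sigmoid(w * x) + eps)
                + (1 - y) * math.log(1 - sigmoid(w * x) + eps)
                for x, y in data) / len(data)

clean = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
xp, yp = 0.5, 0                       # candidate poison: a mislabeled point
h, lr_out = 1e-3, 2.0
for _ in range(10):                   # outer loop: maximize attacker objective
    lo = clean_loss(train(clean + [(xp - h, yp)]), clean)
    hi = clean_loss(train(clean + [(xp + h, yp)]), clean)
    xp += lr_out * (hi - lo) / (2 * h)  # finite-difference gradient ascent

w_clean = train(clean)
w_pois = train(clean + [(xp, yp)])
print(clean_loss(w_clean, clean) < clean_loss(w_pois, clean))  # True
```

The structure mirrors the bilevel formulation: the inner problem trains the model on poisoned data, and the outer problem adjusts the poison point to maximize the resulting loss on clean data.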