PoiCGAN: A Targeted Poisoning Based on Feature-Label Joint Perturbation in Federated Learning
arXiv:2603.23574v1 Announce Type: new Abstract: Federated Learning (FL), as a popular distributed learning paradigm, has shown outstanding performance in improving computational efficiency and protecting data privacy, and is widely applied in industrial image classification. However, due to its distributed nature, FL is vulnerable to threats from malicious clients, with poisoning attacks being a common threat. A major limitation of existing poisoning attack methods is their difficulty in bypassing model performance tests and defense mechanisms based on model anomaly detection. This often results in the detection and removal of poisoned models, which undermines their practical utility. To ensure both the performance of industrial image classification and attacks, we propose a targeted poisoning attack, PoiCGAN, based on feature-label collaborative perturbation. Our method modifies the inputs of the discriminator and generator in the Conditional Generative Adversarial Network (CGAN) to influence the training process, generating an ideal poison generator. This generator not only produces specific poisoned samples but also automatically performs label flipping. Experiments across various datasets show that our method achieves an attack success rate 83.97% higher than baseline methods, with a less than 8.87% reduction in the main task's accuracy. Moreover, the poisoned samples and malicious models exhibit high stealthiness.
Executive Summary
This article proposes a targeted poisoning attack method, PoiCGAN, in the context of Federated Learning (FL). PoiCGAN utilizes a Conditional Generative Adversarial Network (CGAN) to generate poisoned samples that evade model anomaly detection mechanisms. The method achieves an attack success rate 83.97% higher than baseline methods, with a less than 8.87% reduction in the main task's accuracy. Because PoiCGAN automatically performs label flipping and produces highly stealthy poisoned samples, it poses a formidable threat. The article's focus on attack methods also raises broader concerns about the security of FL systems: as FL sees wide use in industrial image classification, developing robust defenses against poisoning attacks is crucial.
Key Points
- ▸ PoiCGAN is a targeted poisoning attack method for FL that utilizes a CGAN to generate ideal poison samples.
- ▸ PoiCGAN achieves a higher attack success rate than baseline methods while maintaining a relatively low reduction in accuracy.
- ▸ By combining automatic label flipping with highly stealthy poisoned samples, PoiCGAN is difficult for existing detection mechanisms to catch.
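The label-flipping behaviour described above can be illustrated with a minimal sketch (not the authors' implementation; the generator, class indices, and function names here are hypothetical): a conditional generator is queried with a source-class label, and the resulting poisoned samples are relabeled to the attacker's target class before being mixed into a client's local training set.

```python
import numpy as np

def flip_labels(labels, source_class, target_class):
    """Relabel every source-class sample to the attacker's target class.

    This mirrors the 'automatic label flipping' the abstract ascribes to
    the poison generator; the mapping itself is a plain targeted flip
    (illustrative only, not the paper's code).
    """
    flipped = labels.copy()
    flipped[labels == source_class] = target_class
    return flipped

def make_poisoned_batch(generator, noise, source_class, target_class):
    """Condition the generator on the source class, then flip the labels."""
    n = noise.shape[0]
    cond = np.full(n, source_class)      # CGAN conditioning labels
    samples = generator(noise, cond)     # hypothetical generator call
    labels = flip_labels(cond, source_class, target_class)
    return samples, labels

# Toy stand-in generator: shifts the noise by the conditioning label.
toy_generator = lambda z, y: z + y[:, None]

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 2))
samples, labels = make_poisoned_batch(
    toy_generator, z, source_class=3, target_class=7
)
# Every poisoned sample now carries the target label while its features
# were generated from the source-class condition.
```

In the paper's setting, `toy_generator` would be the trained poison generator obtained by perturbing the CGAN's discriminator and generator inputs; here it is a placeholder so the flipping step is concrete.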
Merits
Strength in Adversarial Attack
PoiCGAN's ability to bypass model performance tests and defense mechanisms based on model anomaly detection makes it a strong adversarial attack method.
Demerits
Vulnerability to Robust Defense Mechanisms
Because PoiCGAN relies on a CGAN to craft its poisoned samples, defenses that model the distribution of benign client updates or training inputs may still flag generator artifacts, which could limit the attack against stronger anomaly-detection mechanisms.
Expert Commentary
The development of PoiCGAN highlights the ongoing cat-and-mouse game between attack and defense mechanisms in FL systems. While PoiCGAN is a formidable threat, it also underscores the need for more robust defense mechanisms. As FL systems continue to gain widespread adoption, it is essential to prioritize the development of secure FL systems that can withstand poisoning attacks. The article's focus on attack methods serves as a reminder that the security of FL systems is an ongoing concern that requires continuous attention and innovation.
Recommendations
- ✓ Develop and deploy robust defense mechanisms against poisoning attacks in FL systems.
- ✓ Implement policies to ensure the development and deployment of secure FL systems.
Sources
Original: arXiv - cs.LG