Boosting the Transferability of Adversarial Attacks in Deep Neural Networks

Authors

  • Xiaotang Xu, School of Computer Science, Nanjing University of Posts and Telecommunications
  • Kangxin Wei, Qian Weichang College, Shanghai University
  • Jin Xu, College of Electronic Science and Engineering, Jilin University
  • Fangyi Zhu, School of Big Data and Software Engineering, Chongqing University
  • Jiayuan Zhang, School of Information Science and Engineering, East China University of Science and Technology

DOI:

https://doi.org/10.61603/ceas.v1i2.19

Abstract

This article presents methods for boosting the transferability of adversarial attacks against deep neural networks. The research covers the relevant background, methodologies, and outcomes, encompassing single-model attack approaches such as I-FGSM and MI-FGSM as well as ensemble attack strategies. We also examine retraining models on adversarial examples. Our results reveal the limited transferability of single-model attacks and demonstrate the superior transferability of ensemble attacks. We highlight how the choice of attack algorithm affects attack effectiveness and how varying the source models enhances transferability. Through these investigations, we offer insights for strengthening the adversarial robustness of deep neural networks, while acknowledging the limitations of the current approaches.
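For readers unfamiliar with the attacks named in the abstract, the following is a minimal, illustrative sketch of MI-FGSM in PyTorch, extended to the ensemble setting by averaging the loss over a list of source models. It is not the authors' implementation; the function name and the parameters (eps, steps, mu) and their defaults are assumptions chosen for illustration, following the commonly used momentum formulation of MI-FGSM.

    import torch
    import torch.nn.functional as F

    def mi_fgsm(models, x, y, eps=8 / 255, steps=10, mu=1.0):
        """Illustrative MI-FGSM sketch; `models` is a list of source classifiers.

        With a single model this is the standard single-model attack; with
        several models the losses are averaged, one common form of ensemble
        attack. Assumes image inputs of shape (B, C, H, W) with values in [0, 1].
        """
        alpha = eps / steps                  # per-iteration step size
        g = torch.zeros_like(x)              # accumulated (momentum) gradient
        x_adv = x.clone().detach()

        for _ in range(steps):
            x_adv.requires_grad_(True)
            # Fuse the losses of all source models (ensemble attack).
            loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
            grad, = torch.autograd.grad(loss, x_adv)

            # Normalise by the L1 norm, then accumulate with decay factor mu.
            g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)

            # Signed step, then project back into the eps-ball and valid pixel range.
            x_adv = x_adv.detach() + alpha * g.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

        return x_adv

Passing a single surrogate model reproduces the single-model attack discussed in the abstract, while passing several differently trained source models corresponds to the ensemble setting whose adversarial examples tend to transfer better to unseen target models.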

Published

2023-12-22

Issue

Vol. 1 No. 2 (2023)

Section

Articles

How to Cite

Xu, X., Wei, K., Xu, J., Zhu, F., & Zhang, J. (2023). Boosting the Transferability of Adversarial Attacks in Deep Neural Networks. Cambridge Explorations in Arts and Sciences, 1(2). https://doi.org/10.61603/ceas.v1i2.19