
Combining Innovative CVTNet and Regularization Loss for Robust Adversarial Defense

Abstract: Deep neural networks (DNNs) are vulnerable to elaborately crafted and imperceptible adversarial perturbations. With the continuous development of adversarial attack methods, existing defense algorithms can no longer defend against them proficiently. Meanwhile, numerous studies have shown that the vision transformer (ViT) has stronger robustness and generalization performance than the convolutional neural network (CNN) in various domains. Moreover, because the standard denoiser is subject to the error amplification effect, the prediction network cannot correctly classify all reconstructed examples. This paper first proposes a defense network (CVTNet) that combines CNNs and ViTs and is appended in front of the prediction network. CVTNet effectively eliminates adversarial perturbations and maintains high robustness. Furthermore, this paper proposes a regularization loss (L_CPL), which optimizes CVTNet by computing different losses for the correct prediction set (CPS) and the wrong prediction set (WPS) of the reconstructed examples, respectively. Evaluation results on several standard benchmark datasets show that CVTNet achieves better robustness than other advanced methods. Compared with state-of-the-art algorithms, the proposed CVTNet defense improves the average accuracy on pixel-constrained attack examples generated on the CIFAR-10 dataset by 24.25% and on spatially-constrained attack examples by 14.06%. Moreover, CVTNet shows excellent generalizability in cross-model protection.
Authors: Wei-Dong Wang, Zhi Li, Li Zhang (Laboratory of Public Big Data, School of Computer Science and Technology, Guizhou University, Guiyang 550025, China)
Source: Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, No. 5, pp. 1078-1093 (16 pages)
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62062023.
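
To make the role of the regularization loss L_CPL described in the abstract more concrete, the following is a minimal sketch, not the authors' implementation: the abstract only states that reconstructed examples are split into a correct prediction set (CPS) and a wrong prediction set (WPS) under the prediction network and that a different loss is applied to each. The specific per-set terms (MSE on the CPS, cross-entropy on the WPS), the weighting factor `alpha`, and the function name `cpl_loss` are illustrative assumptions.

```python
# Illustrative sketch only: the exact form of L_CPL is not given in this record,
# so the per-set loss terms and the weight `alpha` are assumptions.
import torch
import torch.nn.functional as F

def cpl_loss(reconstructed, clean, labels, predictor, alpha=1.0):
    """Split reconstructed examples into a correct prediction set (CPS) and a
    wrong prediction set (WPS) under the frozen prediction network, then apply
    a different loss term to each subset."""
    with torch.no_grad():
        preds = predictor(reconstructed).argmax(dim=1)
    correct = preds.eq(labels)   # mask for the CPS
    wrong = ~correct             # mask for the WPS

    loss = reconstructed.new_zeros(())
    if correct.any():
        # CPS term (assumed): keep reconstructions that are already classified
        # correctly close to the clean inputs.
        loss = loss + F.mse_loss(reconstructed[correct], clean[correct])
    if wrong.any():
        # WPS term (assumed): push misclassified reconstructions back toward
        # their true labels through the prediction network.
        logits = predictor(reconstructed[wrong])
        loss = loss + alpha * F.cross_entropy(logits, labels[wrong])
    return loss
```

In such a setup, the gradient of both terms flows only into the denoising network (CVTNet in the paper's terminology) while the prediction network stays frozen, which matches the abstract's description of the defense being appended in front of the prediction network.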