The emergence of adversarial examples has revealed inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). In recent years in particular, the discovery of natural adversarial examples has posed significant challenges, as traditional defenses against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: the adversarial examples themselves, model architecture, and the dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, several common CNN models are analyzed for their susceptibility to these attacks, revealing that different architectures exhibit different defensive capabilities; in particular, as network depth increases, the defense against natural adversarial examples strengthens. Finally, the impact of dataset class distribution on model defense capability is examined from two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, for a fixed number of training classes, some CNN models exhibit an optimal range of predicted classes in which they achieve their best defense performance.
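The CAM technique mentioned above can be sketched in a few lines: the class score under global average pooling is a weighted sum of the final convolutional feature maps, so reusing the classifier weights for a target class yields a spatial heatmap of the evidence for that class. The toy feature-map shapes, random data, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def class_activation_map(fmap, fc_weights, class_idx):
    """Minimal CAM sketch: weight the final conv feature maps (C, H, W)
    by the classifier weights (num_classes, C) for the target class,
    then ReLU and normalize the resulting (H, W) heatmap to [0, 1]."""
    w = fc_weights[class_idx]                  # (C,) weights for this class
    cam = np.tensordot(w, fmap, axes=(0, 0))   # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                 # keep only positive evidence
    peak = cam.max()
    return cam / peak if peak > 0 else cam     # normalize for visualization

# Toy data standing in for a real network's final conv output and FC layer.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((16, 8, 8))   # 16 feature maps, 8x8 spatial grid
fc_w = rng.standard_normal((3, 16))      # 3-class linear classifier weights
cam = class_activation_map(fmap, fc_w, class_idx=0)
print(cam.shape)  # (8, 8)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the image; for natural adversarial examples, such overlays reveal which spurious regions the model attends to when it misclassifies.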
A novel compact electromagnetic bandgap (EBG) structure, constructed by etching two reverse split rings (RSRs) and inserting an interleaving edge (IE) on the patch of a conventional mushroom-like EBG (CML-EBG), is investigated. Simulated dispersion diagrams show that the proposed structure achieves a 13.6% size reduction at the center frequency of the bandgap. Two comparisons are carried out to analyze the respective effects of the RSR and IE configurations. A sample of the novel EBG is then fabricated and tested, and the experimental data agree well with the simulated results. This EBG structure is therefore a good candidate for reducing mutual coupling in compact microstrip patch arrays.