As modern communication technology advances apace, the identification of digital communication signals plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, the physical-layer digital communication signal identification model is threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed and slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signal identification model, which should receive more attention in future research.
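For readers unfamiliar with how such attacks are mounted on signal classifiers, the sketch below shows a minimal FGSM-style perturbation of raw I/Q samples against a stand-in classifier; the network, signal shape, and epsilon are illustrative assumptions, not the model or attack used in the paper.

```python
# Illustrative only: an FGSM-style perturbation against a hypothetical
# end-to-end modulation classifier operating on raw I/Q samples.
import torch
import torch.nn as nn

class TinyAMCNet(nn.Module):  # stand-in for an end-to-end AMC model
    def __init__(self, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 2, 128) I/Q samples
        return self.head(self.features(x).squeeze(-1))

def fgsm_attack(model, x, y, eps=0.01):
    """Add a sign-of-gradient perturbation bounded by eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = TinyAMCNet()
signals = torch.randn(8, 2, 128)          # placeholder I/Q batch
labels = torch.randint(0, 11, (8,))
adv_signals = fgsm_attack(model, signals, labels)
print((adv_signals - signals).abs().max())  # perturbation stays within eps
```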
Deep neural networks are extremely vulnerable to intentionally generated adversarial examples, which are obtained by overlaying tiny noise on clean images. However, most existing transfer-based attack methods add perturbations to every pixel of the original image with the same weight, resulting in redundant noise in the adversarial examples, which makes them easier to detect. Given this consideration, a novel attention-guided sparse adversarial attack strategy with gradient dropout, which can be readily incorporated into existing gradient-based methods, is introduced to minimize both the intensity and the scale of perturbations while preserving the effectiveness of the adversarial examples. Specifically, in the gradient dropout phase, some relatively unimportant gradient information is randomly discarded to limit the intensity of the perturbation. In the attention-guided phase, the influence of each pixel on the model output is evaluated using a soft mask-refined attention mechanism, and the perturbation of pixels with smaller influence is limited to restrict the scale of the perturbation. Thorough experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset show that the proposed strategy significantly diminishes the superfluous noise present in adversarial examples while keeping their attack efficacy intact. For instance, in attacks on adversarially trained models, integrating the strategy reduces the average level of noise injected into images by 8.32%, while the average attack success rate decreases by only 0.34%. Furthermore, the strategy can substantially raise the attack success rate while introducing only a slight degree of perturbation.
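As a rough illustration of the two phases described above, the sketch below combines random gradient dropout with a saliency-style mask inside a single sign-based step; the gradient-magnitude "attention" proxy, drop rate, and keep ratio are assumptions and do not reproduce the paper's soft mask-refined attention mechanism.

```python
# A generic sketch of the two ideas described above: randomly dropping part of
# the gradient and masking out low-influence pixels before a sign-based update.
import torch
import torch.nn.functional as F

def sparse_attack_step(model, x, y, eps=8/255, drop_rate=0.3, keep_ratio=0.5):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]

    # Gradient dropout: randomly discard a fraction of gradient entries.
    keep_mask = (torch.rand_like(grad) > drop_rate).float()
    grad = grad * keep_mask

    # Attention-style mask: keep only the most influential pixels
    # (here approximated by gradient magnitude, not the paper's mechanism).
    saliency = grad.abs().sum(dim=1, keepdim=True)               # (B,1,H,W)
    thresh = torch.quantile(saliency.flatten(1), 1 - keep_ratio, dim=1)
    attn_mask = (saliency >= thresh.view(-1, 1, 1, 1)).float()

    return (x + eps * grad.sign() * attn_mask).clamp(0, 1).detach()
```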
Recommender systems are very useful for people to explore what they really need. Academic papers are important achievements for researchers, and they often have a great deal of choice when submitting their papers. In order to improve the efficiency of selecting the most suitable journals for publishing their work, journal recommender systems (JRS) can automatically provide a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss adversarial attacks against content-based filtering JRS. We propose both a targeted attack method that makes some target journals appear more often in the system and a non-targeted attack method that makes the system provide incorrect recommendations. We also conduct extensive experiments to validate the proposed methods. We hope this paper can help improve JRS by making the existence of such adversarial attacks known.
Deep learning and computer vision are fast-growing fields in today's world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally and slightly changed or perturbed. These changes are imperceptible to humans but are misclassified by a model with high probability, severely affecting its performance or predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We used the MNIST and CIFAR-10 datasets for the experiments and analysis of our defense method. Finally, we compared our method with other state-of-the-art defense methods and showed that our results are better than those of rival methods.
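The following is a minimal sketch of the restoration-as-preprocessing idea, assuming pairs of adversarial and clean images are available for training; the small convolutional restorer and its training loop are illustrative and not the paper's architecture.

```python
# A minimal sketch: a small restoration network is trained to map perturbed
# images back toward their clean versions, and the restored image is then
# passed to the unchanged classifier.
import torch
import torch.nn as nn

class Restorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # back to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

def train_restorer(restorer, pairs, epochs=5, lr=1e-3):
    """pairs: iterable of (adversarial_batch, clean_batch) tensors."""
    opt = torch.optim.Adam(restorer.parameters(), lr=lr)
    for _ in range(epochs):
        for x_adv, x_clean in pairs:
            loss = nn.functional.mse_loss(restorer(x_adv), x_clean)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return restorer

def defended_predict(classifier, restorer, x):
    # Restore first, then classify with the original, unmodified model.
    with torch.no_grad():
        return classifier(restorer(x)).argmax(dim=1)
```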
Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, but they still lead to misclassification of the images. Researchers have demonstrated these attacks by making production self-driving cars misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle misclassified as an AK47. Three primary types of defense approaches exist that can safeguard against such attacks: Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial Networks (GAN) for defense against adversarial attacks. In this paper, we create a new approach to defend against adversarial attacks, dubbed Chained Dual-Generative Adversarial Network (CD-GAN), which tackles the defense by minimizing the perturbations of the adversarial image using iterative oversampling and undersampling with GANs. CD-GAN is created using two GANs, i.e., CDGAN's Sub-Resolution GAN and CDGAN's Super-Resolution GAN. The first, CDGAN's Sub-Resolution GAN, takes the original-resolution input image and oversamples it to generate a lower-resolution neutralized image. The second, CDGAN's Super-Resolution GAN, takes the output of CDGAN's Sub-Resolution GAN and undersamples it to generate the higher-resolution image, which removes any remaining perturbations. The Chained Dual GAN is formed by chaining these two GANs together. Both GANs are trained independently. CDGAN's Sub-Resolution GAN is trained using higher-resolution adversarial images as inputs and lower-resolution neutralized images as output examples; hence, this GAN downscales the image while removing adversarial attack noise. CDGAN's Super-Resolution GAN is trained using lower-resolution adversarial images as inputs and higher-resolution neutralized images as outputs; because of this, it acts as an upscaling GAN while removing the adversarial attack noise. Furthermore, CD-GAN has a modular design such that it can be prefixed to any existing classifier without any retraining or extra effort, and can defend any classifier model against adversarial attack. In this way, it is a generalized defense against adversarial attacks, capable of defending any classifier model against any attack. This enables the user to directly integrate CD-GAN with an existing production-deployed classifier smoothly. CD-GAN iteratively removes the adversarial noise using a multi-step, modular approach. It performs comparably to the state of the art with a mean accuracy of 33.67 while using minimal compute resources in training.
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image so that the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method yields a significantly lower attack success rate of 6.5% compared with state-of-the-art removal methods.
In recent years, machine learning has become more and more popular, and the continuous development of deep learning technology in particular has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model on a large amount of data to complete various tasks, the model is vulnerable to examples that are modified artificially. This technique is called an adversarial attack, and the examples are called adversarial examples. The existence of adversarial attacks poses a great threat to the security of neural networks. Based on a brief introduction to the concept and causes of adversarial examples, this paper analyzes the main ideas of adversarial attacks and studies the representative classical adversarial attack methods as well as detection and defense methods.
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on defense techniques, which cover the broad frontier in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
The spectrum sensing model based on deep learning has achieved satisfying detection performance, but its robustness has not been verified. In this paper, we propose the primary user adversarial attack (PUAA) to verify the robustness of the deep learning based spectrum sensing model. PUAA adds a carefully manufactured perturbation to the benign primary user signal, which greatly reduces the probability of detection of the spectrum sensing model. We design three PUAA methods in the black-box scenario. In order to defend against PUAA, we propose a defense method based on an autoencoder named DeepFilter. We apply a long short-term memory network and a convolutional neural network together in DeepFilter, so that it can extract the temporal and local features of the input signal at the same time to achieve effective defense. Extensive experiments are conducted to evaluate the attack effect of the designed PUAA methods and the defense effect of DeepFilter. Results show that the three designed PUAA methods can greatly reduce the probability of detection of the deep learning-based spectrum sensing model. In addition, the experimental results show that DeepFilter can effectively defend against PUAA without affecting the detection performance of the model.
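A rough sketch of an autoencoder that pairs a 1-D CNN (local features) with an LSTM (temporal features) to filter received signals, in the spirit of the DeepFilter description above, is given below; the layer sizes, signal shape, and training target are assumptions rather than the paper's architecture.

```python
# A rough sketch of a CNN + LSTM autoencoder for filtering a received signal.
import torch
import torch.nn as nn

class CnnLstmAutoencoder(nn.Module):
    def __init__(self, channels=2, hidden=64):
        super().__init__()
        self.encoder_cnn = nn.Sequential(
            nn.Conv1d(channels, 32, 5, padding=2), nn.ReLU(),
        )
        self.encoder_rnn = nn.LSTM(32, hidden, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 32), nn.ReLU(),
            nn.Linear(32, channels),
        )

    def forward(self, x):                          # x: (batch, channels, length)
        h = self.encoder_cnn(x)                    # (batch, 32, length)
        h, _ = self.encoder_rnn(h.transpose(1, 2)) # (batch, length, hidden)
        return self.decoder(h).transpose(1, 2)     # (batch, channels, length)

model = CnnLstmAutoencoder()
noisy = torch.randn(4, 2, 256)                     # placeholder received signals
recon = model(noisy)
# In practice the reconstruction would be trained against clean signals.
loss = nn.functional.mse_loss(recon, noisy)
```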
Detecting malicious Uniform Resource Locators (URLs) is crucially important to prevent attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. With ML algorithms, the features of URLs are first extracted, and then different ML models are trained. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Therefore, deep learning (DL) models are used to solve these issues, since they are able to perform featureless detection. Furthermore, DL models give better accuracy and generalization to newly designed URLs; however, the results of our study show that these models, like any other DL models, can be susceptible to adversarial attacks. In this paper, we examine the robustness of these models and demonstrate the importance of considering this susceptibility before applying such detection systems in real-world solutions. We propose and demonstrate a black-box attack based on scoring functions with greedy search for the minimum number of perturbations leading to a misclassification. The attack is examined against different types of convolutional neural network (CNN)-based URL classifiers, and it causes a tangible decrease in accuracy, with more than a 56% reduction in the accuracy of the best classifier (among the classifiers selected for this work). Moreover, adversarial training shows promising results in reducing the influence of the attack on the robustness of the model to less than 7% on average.
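The sketch below illustrates the general shape of such a greedy, score-based black-box search over URL characters; the score_fn interface (returning a maliciousness probability) and the single-character edit set are assumptions, not the paper's scoring functions or perturbation space.

```python
# A generic sketch of a greedy, score-based black-box perturbation of a URL:
# at each step, try a small set of character edits and keep the one that
# lowers the classifier's "malicious" score the most, stopping once the label flips.
import string

def greedy_url_attack(url, score_fn, threshold=0.5, max_edits=10):
    """score_fn(url) -> probability that the URL is malicious (assumed API)."""
    current = url
    for _ in range(max_edits):
        if score_fn(current) < threshold:
            return current                      # now misclassified as benign
        best_candidate, best_score = None, score_fn(current)
        for i in range(len(current)):
            for ch in string.ascii_lowercase + "-.":
                if ch == current[i]:
                    continue
                candidate = current[:i] + ch + current[i + 1:]
                s = score_fn(candidate)
                if s < best_score:
                    best_candidate, best_score = candidate, s
        if best_candidate is None:              # no edit helps any further
            break
        current = best_candidate
    return current
```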
Deep neural networks are vulnerable to attacks from adversarial inputs. Corresponding attack research on human pose estimation (HPE), particularly for body joint detection, has been largely unexplored. Transferring classification-based attack methods to body joint regression tasks is not straightforward. Another issue is that attack effectiveness and imperceptibility contradict each other. To solve these issues, we propose local imperceptible attacks on HPE networks. In particular, we reformulate imperceptible attacks on body joint regression into a constrained maximum allowable attack. Furthermore, we approximate the solution using iterative gradient-based strength refinement and greedy pixel selection. Our method crafts effective perceptual adversarial attacks that consider both human perception and attack effectiveness. We conducted a series of imperceptible attacks against state-of-the-art HPE methods, including HigherHRNet, DEKR, and ViTPose. The experimental results demonstrate that the proposed method achieves excellent imperceptibility while maintaining attack effectiveness by significantly reducing the number of perturbed pixels. Perturbing approximately 4% of the pixels is sufficient to attack HPE.
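A simplified sketch of greedy pixel selection for a sparse attack on a heatmap-regression model follows; the MSE loss, the 4% pixel budget, and the single-step update are placeholders rather than the paper's constrained formulation or iterative strength refinement.

```python
# A simplified sketch of sparse, greedy pixel selection: rank pixels by the
# magnitude of the loss gradient and perturb only the top fraction of them.
import torch
import torch.nn.functional as F

def sparse_pixel_attack(model, image, target_heatmap, eps=8/255, pixel_budget=0.04):
    x = image.clone().detach().requires_grad_(True)   # image: (1, 3, H, W)
    loss = F.mse_loss(model(x), target_heatmap)
    grad = torch.autograd.grad(loss, x)[0]

    # Greedy selection: keep only the pixels with the largest gradient magnitude.
    per_pixel = grad.abs().sum(dim=1).flatten()        # (H*W,) for batch size 1
    k = max(1, int(pixel_budget * per_pixel.numel()))
    topk = torch.topk(per_pixel, k).indices
    mask = torch.zeros_like(per_pixel)
    mask[topk] = 1.0
    mask = mask.view(1, 1, *image.shape[-2:])          # broadcast over channels

    return (image + eps * grad.sign() * mask).clamp(0, 1).detach()
```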
As more business transactions and information services are implemented via communication networks, both personal and organizational assets encounter a higher risk of attacks. To safeguard these, a perimeter defence like a NIDS (network-based intrusion detection system) can be effective for known intrusions. There has been a great deal of attention within the joint community of security and data science on improving machine-learning based NIDS so that it becomes more accurate under adversarial attacks, where obfuscation techniques are applied to disguise the patterns of intrusive traffic. The current research focuses on non-payload connections at the TCP (transmission control protocol) stack level, which is applicable to different network applications. In contrast to the wrapper method introduced with the benchmark dataset, three new filter models are proposed to transform the feature space without knowledge of class labels. These ECT (ensemble clustering based transformation) techniques, i.e., ECT-Subspace, ECT-Noise and ECT-Combined, are developed using the concept of ensemble clustering and three different ensemble generation strategies, i.e., random feature subspaces, feature noise injection, and their combination. Based on an empirical study with a published dataset and four classification algorithms, the new models usually outperform the original wrapper and other filter alternatives found in the literature. This is similarly observed in the first experiment, with basic classification of legitimate and direct attacks, and in the second, which focuses on recognizing obfuscated intrusions. In addition, an analysis of the algorithmic parameters, i.e., ensemble size and level of noise, is provided as a guideline for practical use.
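A rough sketch of the ensemble-clustering transformation idea, assuming k-means on random feature subspaces with one-hot cluster memberships as the new features, is shown below; the parameters and encoding are illustrative and not the exact ECT-Subspace procedure.

```python
# A rough sketch: run k-means on several random feature subspaces and use the
# resulting cluster labels (one-hot encoded) as a new, label-free feature space.
import numpy as np
from sklearn.cluster import KMeans

def ect_subspace_transform(X, n_members=5, n_clusters=8, subspace_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    blocks = []
    for _ in range(n_members):
        cols = rng.choice(n_features, size=max(1, int(subspace_frac * n_features)),
                          replace=False)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=int(rng.integers(1_000_000))).fit_predict(X[:, cols])
        blocks.append(np.eye(n_clusters)[labels])   # (n_samples, n_clusters)
    return np.hstack(blocks)                        # transformed feature space

X = np.random.rand(200, 20)                         # placeholder connection features
X_new = ect_subspace_transform(X)
print(X_new.shape)                                  # (200, n_members * n_clusters)
```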
Intrusion detection systems play an important role in defending networks from security breaches. End-to-end machine learning-based intrusion detection systems are being used to achieve high detection accuracy. However, in the case of adversarial attacks, which cause misclassification by introducing imperceptible perturbations on input samples, the performance of machine learning-based intrusion detection systems is greatly affected. Although such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed a corresponding defence. In this paper, we attempt to fill this gap by applying adversarial attacks to standard intrusion detection datasets and then using adversarial samples to train various machine learning algorithms (adversarial training) to test their defence performance. This is achieved by first creating adversarial samples based on the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM) using the NSL-KDD, UNSW-NB15 and CICIDS17 datasets. The study then trains and tests JSMA- and FGSM-based adversarial examples in seen (where the model has been trained on adversarial samples) and unseen (where the model is unaware of adversarial packets) attacks. The experiments include multiple machine learning classifiers to evaluate their performance against adversarial attacks. The performance parameters include Accuracy, F1-Score and Area Under the Receiver Operating Characteristic Curve (AUC) Score.
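A compact sketch of FGSM-based adversarial training on tabular intrusion-detection features follows; the MLP, feature dimension, and epsilon are placeholders and are not tied to the datasets or classifiers evaluated in the paper.

```python
# A compact sketch of adversarial training: each batch is augmented with
# FGSM-perturbed copies before the gradient step.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()

def adversarial_train(model, loader, epochs=5, eps=0.05, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)
            x_all = torch.cat([x, x_adv])        # clean + adversarial samples
            y_all = torch.cat([y, y])
            loss = nn.functional.cross_entropy(model(x_all), y_all)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Placeholder MLP over 41 pre-processed connection features, binary output.
model = nn.Sequential(nn.Linear(41, 64), nn.ReLU(), nn.Linear(64, 2))
```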
Models based on the MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks than convolutional neural networks (CNNs), there has been no research on adversarial attacks tailored to its architecture. In this paper, we fill this gap. We propose a dedicated attack framework called Maxwell's demon Attack (MA). Specifically, we break the channel-mixing and token-mixing mechanisms of the MLP-Mixer by perturbing the inputs of each Mixer layer to achieve high transferability. We demonstrate that disrupting the MLP-Mixer's capture of the main information of images by masking its inputs can generate adversarial examples with cross-architectural transferability. Extensive evaluations show the effectiveness and superior performance of MA. Perturbations generated from masked inputs obtain a higher success rate in black-box attacks than existing transfer attacks. Moreover, our approach can be easily combined with existing methods to improve transferability both within MLP-Mixer based models and to models with different architectures. We achieve up to a 55.9% improvement in attack performance. Our work exploits the true generalization potential of the MLP-Mixer adversarial space and helps make it more robust for future deployments.
Recent studies show that deep learning-based automatic speech recognition (ASR) systems are vulnerable to adversarial examples (AEs), which add a small amount of noise to the original audio examples. These AE attacks pose new challenges to deep learning security and have raised significant concerns about deploying ASR systems and devices. Existing defense methods are either limited in application or defend only on results, not on process. In this work, we propose a novel method to infer the adversary's intent and discover audio adversarial examples based on the AE generation process. The insight behind this method is the observation that many existing audio AE attacks utilize query-based methods, which means the adversary must send continuous and similar queries to the target ASR model during the audio AE generation process. Inspired by this observation, we propose a memory mechanism that adopts audio fingerprint technology to analyze the similarity of the current query with a certain length of memorized queries. Thus, we can identify when a sequence of queries appears suspicious of generating audio AEs. Through extensive evaluation on four state-of-the-art audio AE attacks, we demonstrate that on average our defense identifies the adversary's intent with over 90% accuracy. With careful regard for robustness evaluations, we also analyze our proposed defense and its strength to withstand two adaptive attacks. Finally, our scheme is available out of the box and directly compatible with any ensemble of ASR defense models to uncover audio AE attacks effectively without model retraining.
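A simplified sketch of the query-memory idea follows, using an average magnitude spectrum as a crude stand-in for audio fingerprinting; the memory length, similarity threshold, and flag count are assumptions, not the paper's parameters.

```python
# A simplified sketch: keep fingerprints of the last N queries and flag a
# client whose new query is highly similar to many recent ones.
from collections import deque
import numpy as np

class QueryMemory:
    def __init__(self, memory_len=50, sim_threshold=0.95, flag_count=10):
        self.memory = deque(maxlen=memory_len)
        self.sim_threshold = sim_threshold
        self.flag_count = flag_count

    @staticmethod
    def fingerprint(audio, frame=512):
        # Crude fingerprint: average magnitude spectrum over fixed-size frames.
        n = len(audio) // frame * frame
        frames = audio[:n].reshape(-1, frame)
        return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

    def check(self, audio):
        """Return True if this query looks like part of an AE generation run."""
        fp = self.fingerprint(np.asarray(audio, dtype=np.float64))
        sims = [np.dot(fp, m) / (np.linalg.norm(fp) * np.linalg.norm(m) + 1e-12)
                for m in self.memory]
        self.memory.append(fp)
        return sum(s > self.sim_threshold for s in sims) >= self.flag_count
```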
Dynamic graph neural networks (DGNNs) have demonstrated their extraordinary value in many practical applications. Nevertheless, the vulnerability of DGNNs is a serious hidden danger, as a small disturbance added to the model can markedly reduce its performance. At the same time, current adversarial attack schemes are implemented on static graphs, and the variability of attack models prevents these schemes from transferring to dynamic graphs. In this paper, we use a diffused node injection attack against DGNNs, and we first propose a node injection attack based on structural fragility against DGNNs, named the Structural Fragility-based Dynamic Graph Node Injection Attack (SFIA). SFIA first determines the target time based on the period weight. Then, it introduces a structurally fragile edge selection strategy to establish the target node set and link the target nodes with the malicious node using serial injection. Finally, an optimization function is designed to generate adversarial features for the malicious nodes. Experiments on datasets from four different fields show that SFIA is significantly superior to many comparative approaches. When the graph is injected with 1% of the original total number of nodes through SFIA, the link prediction Recall and MRR of the target DGNN decrease by 17.4% and 14.3% respectively, and the accuracy of node classification decreases by 8.7%.
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We use this methodology to tackle an important issue with data poisoning attacks in the context of Bayesian networks. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and reducing these covert threats. Additionally, our research investigates one particular use case, the "Visit to Asia Network." This inquiry explores the practical consequences of using uncertainty as a way to spot cases of data poisoning, which is of utmost relevance. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks. Additionally, our proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of stream data.
Physiological computing uses human physiological data as system inputs in real time. It includes, or significantly overlaps with, brain-computer interfaces, affective computing, adaptive automation, health informatics, and physiological-signal-based biometrics. Physiological computing increases the communication bandwidth from the user to the computer, but it is also subject to various types of adversarial attacks, in which the attacker deliberately manipulates the training and/or test examples to hijack the machine learning algorithm's output, leading to possible user confusion, frustration, injury, or even death. However, the vulnerability of physiological computing systems has not received enough attention, and there is no comprehensive review of adversarial attacks against them. This study fills this gap by providing a systematic review of the main research areas of physiological computing, the different types of adversarial attacks and their applications to physiological computing, and the corresponding defense strategies. We hope this review will attract more research interest in the vulnerability of physiological computing systems and, more importantly, in defense strategies to make them more secure.
Despite its great success, deep learning severely suffers from a lack of robustness; i.e., deep neural networks are very vulnerable to adversarial attacks, even the simplest ones. Inspired by recent advances in brain science, we propose denoised internal models (DIM), a novel generative autoencoder-based model, to tackle this challenge. Simulating the pipeline in the human brain for visual signal processing, DIM adopts a two-stage approach. In the first stage, DIM uses a denoiser to reduce the noise and the dimensionality of the inputs, reflecting the information pre-processing in the thalamus. Inspired by the sparse coding of memory-related traces in the primary visual cortex, the second stage produces a set of internal models, one for each category. We evaluate DIM against 42 adversarial attacks, showing that DIM effectively defends against all of them and outperforms the state of the art in overall robustness on the MNIST (Modified National Institute of Standards and Technology) dataset.
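The sketch below illustrates the two-stage structure described above, assuming a shared denoising autoencoder followed by one small per-class autoencoder, with classification by lowest reconstruction error; the architectures are placeholders rather than the DIM networks.

```python
# An illustrative sketch of the two-stage idea: a shared denoiser (stage 1)
# followed by one "internal model" per class (stage 2), with the class chosen
# by the smallest reconstruction error.
import torch
import torch.nn as nn

def make_autoencoder(dim=784, hidden=64):
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, dim), nn.Sigmoid())

class DenoisedInternalModels(nn.Module):
    def __init__(self, n_classes=10, dim=784):
        super().__init__()
        self.denoiser = make_autoencoder(dim)                  # stage 1
        self.internal = nn.ModuleList(
            make_autoencoder(dim) for _ in range(n_classes))   # stage 2, one per class

    def forward(self, x):                                      # x: (batch, dim)
        z = self.denoiser(x)
        # Reconstruction error under each class-specific internal model.
        errs = torch.stack(
            [((m(z) - z) ** 2).mean(dim=1) for m in self.internal], dim=1)
        return errs.argmin(dim=1)                              # predicted class

model = DenoisedInternalModels()
pred = model(torch.rand(4, 784))
```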
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, and smart watches, most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which can lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat it. Nevertheless, in order to avoid being caught by these malware detection mechanisms, malware authors are inclined to initiate adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address them. Then, we specifically focus on the data poisoning attack and evasion attack models, which may mutate various application features, such as API calls, permissions and the class label, to produce adversarial examples. We then propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
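As a toy illustration of the evasion model described above, the sketch below greedily adds binary features (e.g., API calls or permissions) to a sample until a stand-in detector scores it as benign; the logistic-regression detector and random feature vectors are assumptions, not the paper's detection approach.

```python
# A toy sketch of an evasion attack on binary Android features: greedily add
# features (flip 0 -> 1, which typically preserves app functionality) until
# the detector's malware score drops below the threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 30)).astype(float)   # fake app feature vectors
y = rng.integers(0, 2, size=500)                        # fake malware labels
detector = LogisticRegression(max_iter=1000).fit(X, y)  # stand-in detector

def evade(sample, detector, threshold=0.5, max_flips=10):
    x = sample.copy()
    for _ in range(max_flips):
        if detector.predict_proba(x.reshape(1, -1))[0, 1] < threshold:
            break                                       # classified as benign
        best_i, best_p = None, detector.predict_proba(x.reshape(1, -1))[0, 1]
        for i in np.where(x == 0)[0]:                   # only add features
            x[i] = 1
            p = detector.predict_proba(x.reshape(1, -1))[0, 1]
            if p < best_p:
                best_i, best_p = i, p
            x[i] = 0
        if best_i is None:
            break
        x[best_i] = 1
    return x

adv = evade(X[0], detector)
```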
Funding (digital communication signal identification paper): supported by the National Natural Science Foundation of China (61771154) and the Fundamental Research Funds for the Central Universities (3072022CF0601); also supported by the Key Laboratory of Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology, Harbin Engineering University, Harbin, China.
Funding (attention-guided sparse adversarial attack paper): Fundamental Research Funds for the Central Universities, China (No. 2232021A-10); Shanghai Sailing Program, China (No. 22YF1401300); Natural Science Foundation of Shanghai, China (No. 20ZR1400400); Shanghai Pujiang Program, China (No. 22PJ1423400).
Funding (journal recommender system paper): this work is supported by the National Natural Science Foundation of China under Grant Nos. U1636215 and 61902082, the Guangdong Key R&D Program of China (2019B010136003), and the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019).
Funding (CD-GAN paper): Taif University, Taif, Saudi Arabia, through Taif University Researchers Supporting Project Number (TURSP-2020/115).
Funding (VeriFace paper): funded by Institutional Fund Projects under Grant No. (IFPIP: 329-611-1443), with technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Funding (adversarial attack and defense survey): Ant Financial, Zhejiang University Financial Technology Research Center.
Funding (PUAA/DeepFilter paper): the National Natural Science Foundation of China under Grant Nos. 62072406, U19B2016, U20B2038 and 61871398; the Natural Science Foundation of Zhejiang Province under Grant No. LY19F020025; and the Major Special Funding for "Science and Technology Innovation 2025" in Ningbo under Grant No. 2018B10063.
Funding (malicious URL detection paper): supported by Korea Electric Power Corporation (Grant Number: R18XA02).
Funding (human pose estimation attack paper): National Natural Science Foundation of China, No. 61972458; Natural Science Foundation of Zhejiang Province, No. LZ23F020002.
Funding (audio adversarial example defense paper): supported in part by NSFC No. 62202275 and Shandong-SF No. ZR2022QF012.
Funding: supported by the National Natural Science Foundation of China (NSFC) (62172377, 61872205), the Shandong Provincial Natural Science Foundation, China (ZR2019MF018), and the Startup Research Foundation for Distinguished Scholars (202112016).
Abstract: Dynamic graph neural networks (DGNNs) have demonstrated their extraordinary value in many practical applications. Nevertheless, the vulnerability of DNNs is a serious hidden danger, as a small disturbance added to the model can markedly reduce its performance. At the same time, current adversarial attack schemes are implemented on static graphs, and the variability of attack models prevents these schemes from transferring to dynamic graphs. In this paper, we use a diffused node injection attack against DGNNs and propose the first node injection attack based on structural fragility against DGNNs, named the Structural Fragility-based Dynamic Graph Node Injection Attack (SFIA). SFIA first determines the target time based on the period weight. Then, it introduces a structurally fragile edge selection strategy to establish the target node set and link these nodes with the malicious node using serial injection. Finally, an optimization function is designed to generate adversarial features for the malicious nodes. Experiments on datasets from four different fields show that SFIA is significantly superior to many comparative approaches. When the graph is injected with 1% of the original total number of nodes through SFIA, the Recall and MRR of the target DGNN on link prediction decrease by 17.4% and 14.3% respectively, and the accuracy of node classification decreases by 8.7%.
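Since the abstract gives only the outline of SFIA, the sketch below is a loose illustration: it picks low-degree ("structurally fragile") target nodes at a chosen snapshot, wires one injected node to them, and optimizes the injected node's features by gradient ascent against an assumed DGNN loss callable. The period-weight target-time selection and serial-injection details are omitted; all names are placeholders.

```python
import torch

def inject_malicious_node(snapshots, target_t, dgnn, feat_dim, k=10, steps=50, lr=0.1):
    """Illustrative node injection on one dynamic-graph snapshot.

    snapshots[target_t] is assumed to be (adjacency, features); dgnn is assumed
    to be a callable returning the victim's scalar task loss.
    """
    adj, feats = snapshots[target_t]              # (N, N) adjacency, (N, d) features
    degrees = adj.sum(dim=1)
    targets = torch.topk(-degrees, k).indices     # k lowest-degree ("fragile") nodes

    n = adj.shape[0]
    new_adj = torch.zeros(n + 1, n + 1)
    new_adj[:n, :n] = adj
    new_adj[n, targets] = 1.0                     # connect injected node to targets
    new_adj[targets, n] = 1.0

    mal_feat = torch.zeros(1, feat_dim, requires_grad=True)
    opt = torch.optim.Adam([mal_feat], lr=lr)
    for _ in range(steps):
        new_feats = torch.cat([feats, mal_feat], dim=0)
        loss = -dgnn(new_adj, new_feats, target_t)   # maximize victim loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return new_adj, mal_feat.detach()
```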
Abstract: Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework uses latent variables to quantify the amount of belief between every two nodes in each causal model over time. Considering four different forms of data poisoning attacks, we aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm, and we offer workable methods for identifying and mitigating these stealthy threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, to explore the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks on streaming data.
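The latent-variable formulation is not specified in the abstract, so the following NumPy sketch uses a pairwise mutual-information profile over a sliding window as a crude stand-in for the "belief between every two nodes", flagging batches whose dependence structure shifts abruptly; the thresholds and helper names are assumptions.

```python
import numpy as np

def pairwise_mutual_info(window, bins=4):
    """Empirical mutual information between every pair of (discretized) columns."""
    n, d = window.shape
    disc = np.stack([np.digitize(window[:, j],
                                 np.quantile(window[:, j], np.linspace(0, 1, bins + 1)[1:-1]))
                     for j in range(d)], axis=1)
    mi = np.zeros((d, d))
    for a in range(d):
        for b in range(a + 1, d):
            joint, _, _ = np.histogram2d(disc[:, a], disc[:, b], bins=bins)
            p = joint / joint.sum()
            px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            mi[a, b] = mi[b, a] = (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()
    return mi

def detect_poisoning(stream, window=200, tol=0.15):
    """Flag batches whose pairwise dependence profile drifts sharply from the
    baseline window, a crude proxy for a sudden change in inter-node belief."""
    baseline = pairwise_mutual_info(stream[:window])
    alerts = []
    for start in range(window, len(stream) - window, window):
        current = pairwise_mutual_info(stream[start:start + window])
        if np.abs(current - baseline).max() > tol:
            alerts.append(start)
    return alerts
```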
Funding: supported by the Open Research Projects of Zhejiang Lab (2021KE0AB04), the Technology Innovation Project of Hubei Province of China (2019AEA171), the National Social Science Foundation of China (19ZDA104 and 20AZD089), and the Independent Innovation Research Fund of Huazhong University of Science and Technology (2020WKZDJC004).
Abstract: Physiological computing uses human physiological data as system inputs in real time. It includes, or significantly overlaps with, brain-computer interfaces, affective computing, adaptive automation, health informatics, and physiological-signal-based biometrics. Physiological computing increases the communication bandwidth from the user to the computer, but it is also subject to various types of adversarial attacks, in which the attacker deliberately manipulates the training and/or test examples to hijack the machine learning algorithm's output, leading to possible user confusion, frustration, injury, or even death. However, the vulnerability of physiological computing systems has not received enough attention, and no comprehensive review of adversarial attacks against them exists. This study fills that gap by providing a systematic review of the main research areas of physiological computing, the different types of adversarial attacks and their applications to physiological computing, and the corresponding defense strategies. We hope this review will attract more research interest in the vulnerability of physiological computing systems and, more importantly, in defense strategies to make them more secure.
Funding: supported by the Science and Technology Innovation 2030 Project of China (Nos. 2021ZD02023501 and 2021ZD0202600), the National Science Foundation of China (NSFC) (Nos. 31970903, 31671104, 31371059 and 32225023), the Shanghai Ministry of Science and Technology (No. 19ZR1477400), and the NSFC and German Research Foundation (DFG) Project Crossmodal Learning (No. 62061136001/TRR-169).
Abstract: Despite its great success, deep learning severely suffers from a lack of robustness; i.e., deep neural networks are very vulnerable to adversarial attacks, even the simplest ones. Inspired by recent advances in brain science, we propose denoised internal models (DIM), a novel generative autoencoder-based model to tackle this challenge. Simulating the pipeline in the human brain for visual signal processing, DIM adopts a two-stage approach. In the first stage, DIM uses a denoiser to reduce the noise and the dimensionality of the inputs, reflecting the information pre-processing in the thalamus. Inspired by the sparse coding of memory-related traces in the primary visual cortex, the second stage produces a set of internal models, one for each category. We evaluate DIM over 42 adversarial attacks, showing that DIM effectively defends against all of them and outperforms the SOTA in overall robustness on the MNIST (Modified National Institute of Standards and Technology) dataset.
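A minimal PyTorch sketch of the two-stage idea (a denoiser followed by one internal model per category, with classification by lowest reconstruction error) is given below; the layer sizes and class names are illustrative and the training code is omitted.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Stage 1: reduce noise and dimensionality of the input (thalamus analogue)."""
    def __init__(self, in_dim=784, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

class InternalModels(nn.Module):
    """Stage 2: one small autoencoder per class; predict the class whose
    internal model reconstructs the denoised input best."""
    def __init__(self, n_classes=10, in_dim=784, code_dim=32):
        super().__init__()
        self.models = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU(), nn.Linear(code_dim, in_dim))
            for _ in range(n_classes)
        )

    def forward(self, x):
        errors = torch.stack([((m(x) - x) ** 2).mean(dim=1) for m in self.models], dim=1)
        return errors.argmin(dim=1)   # predicted class per sample

def dim_predict(denoiser, internal, x_flat):
    """End-to-end inference: denoise, then pick the best-fitting internal model."""
    return internal(denoiser(x_flat))
```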
Abstract: In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, and smart watches, most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which can lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat it. Nevertheless, in order to avoid being caught by these malware detection mechanisms, malware authors are inclined to launch adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address them. Then, we focus specifically on the data poisoning and evasion attack models, which may mutate various application features, such as API calls, permissions and the class label, to produce adversarial examples. We then propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
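As an illustration of the evasion-attack model and an adversarial-retraining style defence (not the paper's implementation), the sketch below greedily adds benign-looking binary features (e.g. permissions or API calls) to lower a scikit-learn detector's malware score, then retrains the detector on the mutated samples; the feature semantics and the positive-class index are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def evasion_mutation(clf, x, addable, budget=10):
    """Greedy evasion sketch: flip at most `budget` zero-valued binary features
    (e.g. add benign-looking permissions/API calls) to lower the malware score.
    `addable` lists feature indices assumed safe to add without breaking the app."""
    x_adv = x.copy()
    for _ in range(budget):
        best_idx, best_score = None, clf.predict_proba([x_adv])[0, 1]  # assumes class 1 = malware
        for j in addable:
            if x_adv[j] == 0:
                trial = x_adv.copy()
                trial[j] = 1
                score = clf.predict_proba([trial])[0, 1]
                if score < best_score:
                    best_idx, best_score = j, score
        if best_idx is None:
            break
        x_adv[best_idx] = 1
    return x_adv

def adversarially_retrain(clf, X, y, addable):
    """Hardening sketch: retrain on mutated malware so the detector sees such examples."""
    mutated = np.array([evasion_mutation(clf, x, addable) for x in X[y == 1]])
    X_aug = np.vstack([X, mutated])
    y_aug = np.concatenate([y, np.ones(len(mutated))])
    return RandomForestClassifier(n_estimators=200).fit(X_aug, y_aug)
```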