We first put forward the idea of a positive extension matrix (PEM) on paper. Then, an algorithm, AE_11, was built with the aid of the PEM. Finally, we compared our experimental results, and the final result was fairly satisfying.
Deep neural networks (DNNs) are potentially susceptible to adversarial examples that are maliciously manipulated by adding imperceptible perturbations to legitimate inputs, leading to abnormal behavior of models. Plenty of methods have been proposed to defend against adversarial examples. However, the majority of them suffer from the following weaknesses: 1) lack of generalization and practicality, and 2) failure to deal with unknown attacks. To address these issues, we design the adversarial nature eraser (ANE) and feature map detector (FMD) to detect fragile and high-intensity adversarial examples, respectively. Then, we apply the ensemble learning method to compose our detector, dealing with adversarial examples of diverse magnitudes in a divide-and-conquer manner. Experimental results show that our approach achieves 99.30% and 99.62% Area under Curve (AUC) scores on average when tested with various Lp norm-based attacks on CIFAR-10 and ImageNet, respectively. Furthermore, our approach also shows its potential in detecting unknown attacks.
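To make the divide-and-conquer idea concrete, here is a minimal sketch of how two specialist detector scores could be fused into one ensemble decision; the callables `ane_score` and `fmd_score`, the weight `w`, and the threshold are hypothetical placeholders, not the paper's implementation.

```python
def ensemble_detect(x, ane_score, fmd_score, w=0.5, threshold=0.5):
    """Fuse two specialist detectors (each returns a probability in [0, 1]):
    ane_score targets fragile, low-intensity AEs; fmd_score targets
    high-intensity AEs. The weighted sum covers both regimes."""
    score = w * ane_score(x) + (1.0 - w) * fmd_score(x)
    return score >= threshold  # True -> flag x as adversarial
```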
Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on the ambiguities present in the PE format, as previously employed in evasion attack research. By directly applying the perturbation techniques to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Also, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is effective only against the AEs included in the classifiers' training data.
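As an illustration of the perturbation style the attack relies on, the following sketch appends random bytes to a PE file's overlay, the region past the last declared section, which the Windows loader ignores, so byte-level features change while the file remains executable. The function name and byte count are assumptions; the paper's seven perturbations are not reproduced here.

```python
import os

def overlay_append(pe_path, out_path, n_bytes=1024):
    """Sketch of an 'Overlay Append'-style perturbation: bytes appended
    past the end of a PE's declared sections do not affect execution,
    so the malware keeps its behavior while evading byte-based features."""
    with open(pe_path, "rb") as f:
        data = f.read()
    data += os.urandom(n_bytes)  # padding lands in the ignored overlay
    with open(out_path, "wb") as f:
        f.write(data)
```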
With the rapid developments of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on the defense techniques, which cover the broad frontier in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
With the rapid spread of the coronavirus disease 2019 (COVID-19) worldwide, the establishment of an accurate and fast process to diagnose the disease is important. The routine real-time reverse transcription-polymerase chain reaction (rRT-PCR) test that is currently used does not provide such high accuracy or speed in the screening process. Among the good choices for an accurate and fast test to screen COVID-19 are deep learning techniques. In this study, a new convolutional neural network (CNN) framework for COVID-19 detection using computed tomography (CT) images is proposed. The EfficientNet architecture is applied as the backbone structure of the proposed network, in which feature maps with different scales are extracted from the input CT scan images. In addition, atrous convolution at different rates is applied to these multi-scale feature maps to generate denser features, which facilitates obtaining COVID-19 findings in CT scan images. The proposed framework is also evaluated in this study using a public CT dataset containing 2482 CT scan images from patients of both classes (i.e., COVID-19 and non-COVID-19). To augment the dataset with additional training examples, adversarial example generation is performed. The proposed system validates its superiority over the state-of-the-art methods with values exceeding 99.10% in terms of several metrics, such as accuracy, precision, recall, and F1. The proposed system also exhibits good robustness when trained using a small portion of the data (20%), with an accuracy of 96.16%.
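As a sketch of the multi-rate atrous idea, parallel 3x3 convolutions with different dilation rates can be applied to a feature map and concatenated. This assumes PyTorch; the rates and channel counts are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiRateAtrous(nn.Module):
    """Apply atrous (dilated) 3x3 convolutions at several rates in
    parallel and concatenate the results, densifying the features."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates  # padding=r preserves spatial size for 3x3 kernels
        ])

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```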
Research on the wave field evolution law is highly significant to the fields of offshore engineering and marine resource development. Numerical simulations have been conducted for high-precision wave field evolution, thus providing short-term wave field prediction. However, the evolution unfolds over a long period of time, and forecast accuracy is difficult to improve. In recent years, the use of machine learning methods to study the evolution of wave fields has received increasing attention from researchers. This paper proposes a wave field evolution method based on deep convolutional neural networks. This method can effectively correlate the spatiotemporal characteristics of wave data via convolution operations and directly obtain offshore forecast results for the Bohai Sea and the Yellow Sea. An attention mechanism, multi-scale path design, and a hard example mining training strategy are introduced to suppress the interference caused by Weibull-distributed wave field data and improve the accuracy of the proposed wave field evolution. The 72- and 480-h evolution experiment results in the Bohai Sea and the Yellow Sea show that the proposed method has excellent forecast accuracy and timeliness.
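A minimal sketch of the hard example mining strategy, assuming PyTorch and a squared-error objective: only the largest per-point errors contribute to the loss, so rare high-wave events are not averaged away. The keep fraction is an assumed hyperparameter, not the paper's value.

```python
import torch

def hard_example_loss(pred, target, keep_frac=0.3):
    """Online hard example mining: average only the top keep_frac of
    per-element squared errors, focusing the gradient on hard cases."""
    err = (pred - target).pow(2).flatten()
    k = max(1, int(keep_frac * err.numel()))
    hard, _ = torch.topk(err, k)  # largest errors = hardest examples
    return hard.mean()
```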
Mobile malware occupies a considerable proportion of cyberattacks. With the update of mobile device operating systems and the development of software technology, new malware keeps appearing, and its emergence steadily lowers the identification accuracy of existing methods. There is an urgent need for more effective malware detection models. In this paper, we propose a new approach to mobile malware detection that is able to detect newly-emerged malware instances. First, we build and train an LSTM-based model on original benign and malware samples investigated by both static and dynamic analysis techniques. Then, we build a generative adversarial network to generate augmented examples, which can emulate the characteristics of newly-emerged malware. Finally, we use the augmented examples to retrain the 4th and 5th layers of the LSTM network and the last fully connected layer so that the model can identify newly-emerged malware. Experiments show that our malware detector achieves a classification accuracy of 99.94% on augmented samples and 86.5% on real samples of newly-emerged malware.
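A hedged sketch of the selective retraining step, assuming a PyTorch 5-layer LSTM followed by one fully connected layer (feature and hidden sizes are placeholders): all parameters are frozen except the 4th and 5th LSTM layers and the classifier, so only those receive gradients during retraining on augmented examples.

```python
import torch.nn as nn

class MalwareLSTM(nn.Module):
    """Illustrative stand-in for the detector: a 5-layer LSTM plus a
    fully connected classifier (sizes are assumptions)."""
    def __init__(self, feat=128, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat, hidden, num_layers=5, batch_first=True)
        self.fc = nn.Linear(hidden, 2)

def freeze_for_retraining(model: MalwareLSTM):
    # Freeze everything, then re-enable the 4th/5th LSTM layers
    # (PyTorch indexes layers from 0, so their parameter names end
    # with "_l3" and "_l4") and the final classifier.
    for p in model.parameters():
        p.requires_grad = False
    for name, p in model.lstm.named_parameters():
        if name.endswith("_l3") or name.endswith("_l4"):
            p.requires_grad = True
    for p in model.fc.parameters():
        p.requires_grad = True
```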
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique. The proposed technique combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector integrates the classification results of the different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between predictions made by the model on original examples and on denoised examples, AEs are detected effectively. This technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. Experimental results show outstanding detection performance against mainstream AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iteration Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
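A minimal sketch of the discrepancy-plus-voting idea, with `classifier` and `denoisers` as assumed callables: an input is flagged as adversarial when the label on the raw image disagrees with the labels on enough denoised copies.

```python
import numpy as np

def detect_by_discrepancy(x, classifier, denoisers, min_votes=2):
    """Flag x as adversarial if the classifier's label on the raw input
    disagrees with its labels on at least min_votes denoised copies.
    classifier(x) returns class scores; each denoiser maps image->image."""
    base_label = np.argmax(classifier(x))
    disagreements = sum(
        np.argmax(classifier(denoise(x))) != base_label
        for denoise in denoisers
    )
    return disagreements >= min_votes
```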
In recent years, many adversarial malware examples with different feature strategies, especially GAN and its variants, have been introduced to handle security threats, e.g., evading the detection of machine learning detectors. However, these solutions still suffer from complicated deployment or long running times. In this paper, we propose an n-gram MalGAN method to solve these problems. We borrow the idea of n-grams from the Natural Language Processing (NLP) area to expand the feature sources for adversarial malware examples in MalGAN. The n-gram MalGAN obtains the feature vector directly from the hexadecimal bytecodes of the executable file. It can be implemented easily and conveniently in a simple programming language (e.g., C++), with no need for any prior knowledge of the executable file or any professional feature extraction tools. These features are functionally independent and thus can be added to the non-functional area of the malicious program to maintain its original executability. In this way, the n-gram approach makes the adversarial attack easier and more convenient. Experimental results show that the evasion rate of the n-gram MalGAN is at least 88.58% when attacking different machine learning algorithms under an appropriate group rate, growing to 100% for the Random Forest algorithm.
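A sketch of byte-level n-gram extraction under stated assumptions (`n`, `top_k`, and the frequency representation are illustrative): the features come straight from the raw bytes of the file, with no PE parsing or external tools, matching the spirit of the method described above.

```python
from collections import Counter

def byte_ngram_features(path, n=2, top_k=256):
    """Count byte n-grams directly over an executable's raw bytes and
    keep the top_k most frequent as a simple feature representation."""
    with open(path, "rb") as f:
        data = f.read()
    grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return dict(grams.most_common(top_k))  # {n-gram bytes: frequency}
```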
A quantum variational circuit is a quantum machine learning model similar to a neural network. A crafted adversarial example can lead to incorrect results for the model, and using adversarial examples to train the model will greatly improve its robustness. The existing method is to use automatic differentiation or finite differences to obtain a gradient and use it to construct adversarial examples. This paper proposes an innovative method for constructing adversarial examples of quantum variational circuits, in which the gradient is obtained by measuring the expectation value of a qubit in a series of quantum circuits. This method can be used to construct adversarial examples for a quantum variational circuit classifier. The implementation results prove the effectiveness of the proposed method; compared with the existing method, ours requires fewer resources and is more efficient.
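The measurement-based gradient described here resembles the standard parameter-shift rule (an assumption on our part, since the abstract does not name it): for a gate generated by a Pauli operator, shifting one circuit parameter by ±π/2 and measuring the two expectation values gives the exact derivative.

```latex
\frac{\partial \langle H \rangle}{\partial \theta_j}
  = \frac{1}{2}\left(
      \langle H \rangle_{\theta_j + \pi/2}
    - \langle H \rangle_{\theta_j - \pi/2}
    \right)
```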
The performance of deep learning on many tasks has been impressive. However, recent studies have shown that deep learning systems are vulnerable to small, specifically crafted perturbations imperceptible to humans. Images with such perturbations are called adversarial examples. They have been proven to be an indisputable threat to applications based on deep neural networks (DNNs), yet DNNs have not been fully elucidated, which has prevented the development of efficient defenses against adversarial examples. This study proposes a two-stream architecture to protect convolutional neural networks (CNNs) from attacks by adversarial examples. Our model applies the idea of “two streams” used in the security field; it successfully defends against different kinds of attack methods because its “high-resolution” and “low-resolution” networks differ in feature extraction. This study experimentally demonstrates that our two-stream architecture is difficult to defeat with state-of-the-art attacks and is robust to adversarial examples built by currently known attacking algorithms.
Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are endangered by adversarial attacks. These attacks can quickly spoil deep learning models, e.g., different convolutional neural networks (CNNs) used in various computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally proved that our approach is better than previous defensive techniques. Our proposed CRU-Net model maps adversarial image examples into clean images by eliminating the adversarial perturbation. The defensive approach is based on residual and U-Net learning. Extensive experiments on the MNIST and CIFAR10 datasets show that the proposed CRU-Net defense model prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and restored clean image examples produced by the proposed CRU-Net defense model.
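A minimal sketch of the residual-learning idea, assuming PyTorch; the plain three-layer CNN below is a placeholder, not the paper's U-Net architecture. The network predicts the adversarial perturbation and subtracts it from the input, rather than regenerating the whole clean image.

```python
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Predict the perturbation (the residual) and subtract it, so the
    network only has to model the noise, not the full image content."""
    def __init__(self, ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, ch, 3, padding=1),
        )

    def forward(self, x_adv):
        return x_adv - self.body(x_adv)  # estimated clean image
```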
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, smart watches, etc., most of which are based on the Android operating system. However, because these Android-based mobile devices are becoming increasingly popular, they are now the primary target of mobile malware, which can lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms to identify and combat them. Nevertheless, to avoid being caught by these malware detection mechanisms, malware authors are inclined to initiate adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated, and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address these attacks. We then focus on the data poisoning and evasion attack models, which may mutate various application features, such as API calls, permissions, and the class label, to produce adversarial examples. Next, we propose and design a malware detection approach that is resistant to adversarial examples. To observe and investigate how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
The overall research in Reinforcement Learning (RL) concentrates on discrete sets of actions, but for certain real-world problems it is important to have methods which are able to find good strategies using actions drawn from continuous sets. This paper describes a simple control task called direction finder and its known optimal solution for both discrete and continuous actions. It allows for comparison of RL solution methods based on their value functions. In order to solve the control task for continuous actions, a simple idea for generalising them by means of feature vectors is presented. The resulting algorithm is applied using different choices of feature calculations. For comparing their performance, a simple measure is …
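One way to read the feature-vector generalisation, as a hedged sketch (the feature map `phi`, the weight vector, and candidate sampling are assumptions, not the paper's algorithm): value estimates are shared across nearby continuous actions through a linear form Q(s, a) = w · φ(s, a).

```python
import numpy as np

def q_value(w, phi, state, action):
    """Linear value approximation over action features: nearby continuous
    actions produce similar phi(s, a) and thus share value estimates."""
    return float(np.dot(w, phi(state, action)))

def greedy_action(w, phi, state, candidates):
    # Approximate the argmax over a continuous action set by scoring
    # a sampled list of candidate actions.
    return max(candidates, key=lambda a: q_value(w, phi, state, a))
```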
In recent years, deep learning has become a hotspot and core method in the field of machine learning. In the field of machine vision, deep learning shows excellent performance in feature extraction and feature representation, making it widely used in areas such as self-driving cars and face recognition. Although deep learning can solve large-scale complex problems very well, the latest research shows that deep learning network models are very vulnerable to adversarial attack: adding a weak perturbation to the original input leads to wrong output of the neural network, yet to the human eye the difference between the original and the disturbed images is hardly noticeable. In this paper, we summarize the research on adversarial examples in the field of image processing. We first introduce the background and representative models of deep learning, then introduce the main methods for generating adversarial examples and how to defend against adversarial attacks, and finally put forward some thoughts and future prospects for adversarial examples.
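As a concrete instance of the generation methods such surveys cover, here is a minimal PyTorch sketch of the classic Fast Gradient Sign Method (FGSM) of Goodfellow et al., x' = x + ε · sign(∇ₓ L(f(x), y)); the ε value is illustrative and the [0, 1] clamp assumes normalized images.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One signed-gradient step in the direction that increases the loss,
    producing a perturbation that is imperceptible for small eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```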