Journal Articles
415 articles found
1. The algorithm AE_(11) of learning from examples
Authors: ZHANG Hai-yi, BI Jian-dong. 《Journal of Harbin Institute of Technology (New Series)》, EI CAS, 2006, No. 2: 226-232 (7 pages)
We first put forward the idea of a positive extension matrix (PEM) in this paper. Then, an algorithm, AE_11, was built with the aid of the PEM. Finally, we compared the experimental results, and the final result was fairly satisfactory.
Keywords: learning from examples; concept acquisition; inductive learning; knowledge acquisition
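The abstract gives no construction details, but the extension-matrix family this algorithm belongs to can be illustrated briefly. Below is a minimal sketch, assuming a standard positive-extension-matrix formulation over discrete attribute vectors; `extension_matrix` and `greedy_cover` are hypothetical stand-ins, not the AE_11 selection heuristics.

```python
# Minimal sketch of a positive extension matrix (PEM) for one positive
# example against a set of negative examples. Attribute vectors are
# discrete; the AE_11 heuristics are not reproduced here.

def extension_matrix(positive, negatives):
    """M[i][j] is True when negative example i differs from the
    positive example on attribute j (a 'live' element)."""
    return [[neg[j] != positive[j] for j in range(len(positive))]
            for neg in negatives]

def greedy_cover(matrix):
    """Greedily pick attributes so every negative example differs on
    at least one chosen attribute (a cover of the matrix rows)."""
    uncovered = set(range(len(matrix)))
    chosen = []
    while uncovered:
        # attribute that distinguishes the most uncovered negatives
        best = max(range(len(matrix[0])),
                   key=lambda j: sum(matrix[i][j] for i in uncovered))
        if sum(matrix[i][best] for i in uncovered) == 0:
            break  # some negative matches the positive on every attribute
        chosen.append(best)
        uncovered = {i for i in uncovered if not matrix[i][best]}
    return chosen

positive = ["red", "round", "small"]
negatives = [["red", "square", "big"], ["blue", "round", "small"]]
m = extension_matrix(positive, negatives)
print(greedy_cover(m))  # attributes whose values rule out all negatives
```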
2. Omni-Detection of Adversarial Examples with Diverse Magnitudes
Authors: Ke Jianpeng, Wang Wenqi, Yang Kang, Wang Lina, Ye Aoshuang, Wang Run. 《China Communications》, SCIE CSCD, 2024, No. 12: 139-151 (13 pages)
Deep neural networks (DNNs) are potentially susceptible to adversarial examples that are maliciously manipulated by adding imperceptible perturbations to legitimate inputs, leading to abnormal behavior of models. Plenty of methods have been proposed to defend against adversarial examples; however, most of them suffer from two weaknesses: 1) lack of generalization and practicality, and 2) failure to deal with unknown attacks. To address these issues, we design the adversarial nature eraser (ANE) and the feature map detector (FMD) to detect fragile and high-intensity adversarial examples, respectively. We then apply ensemble learning to compose our detector, handling adversarial examples of diverse magnitudes in a divide-and-conquer manner. Experimental results show that our approach achieves 99.30% and 99.62% Area Under Curve (AUC) scores on average when tested with various Lp-norm-based attacks on CIFAR-10 and ImageNet, respectively. Furthermore, our approach also shows its potential in detecting unknown attacks.
Keywords: adversarial example detection; ensemble learning; feature maps; fragile and high-intensity adversarial examples
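As a rough illustration of the divide-and-conquer composition the abstract describes, here is a minimal sketch in which two specialized scorers handle fragile and high-intensity examples and an OR-style vote flags the input. `ane_score` and `fmd_score` are placeholder heuristics, not the paper's ANE or FMD.

```python
import numpy as np

def ane_score(x):   # stand-in for the adversarial nature eraser
    # fragile AEs: tiny perturbations clustered near pixel extremes
    return float(np.mean(np.abs(x - np.clip(x, 0.05, 0.95))))

def fmd_score(x):   # stand-in for the feature map detector
    # high-intensity AEs: unusually large activation spread
    return float(np.std(x))

def is_adversarial(x, t_fragile=0.01, t_intense=0.30):
    # flag the input if either specialized detector fires
    return ane_score(x) > t_fragile or fmd_score(x) > t_intense

x = np.random.rand(3, 32, 32).astype(np.float32)
print(is_adversarial(x))
```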
3. An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. 《Computer Modeling in Engineering & Sciences》, SCIE EI, 2024, No. 6: 3535-3563 (29 pages)
Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities in the PE format, as previously employed in evasion attack research. By applying the perturbation techniques directly to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. AEs generated by the attack method also retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs bypassed detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against the AEs included in the training classifiers.
Keywords: malware classification; machine learning; adversarial examples; evasion attack; cybersecurity
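Of the seven perturbations listed, Overlay Append is the simplest to sketch: bytes appended past the end of a PE image are ignored by the Windows loader, so the file remains executable while its byte-level features change. The snippet below is a hedged illustration; the payload and file names are hypothetical.

```python
import random

def overlay_append(pe_path, out_path, n_bytes=1024):
    """Append random bytes to a PE file's overlay. The original bytes
    are untouched, so executability and behavior are preserved."""
    with open(pe_path, "rb") as f:
        data = f.read()
    payload = bytes(random.getrandbits(8) for _ in range(n_bytes))
    with open(out_path, "wb") as f:
        f.write(data + payload)  # original image intact; overlay grows

# overlay_append("sample.exe", "sample_adv.exe")  # illustrative paths
```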
4. Adversarial Attacks and Defenses in Deep Learning (Cited: 21)
Authors: Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu. 《Engineering》, SCIE EI, 2020, No. 3: 346-360 (15 pages)
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on defense techniques, which cover the broad frontier of the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
Keywords: machine learning; deep neural network; adversarial example; adversarial attack; adversarial defense
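Surveys of this area typically start from the fast gradient sign method (FGSM), where the input is pushed along the sign of the loss gradient: x_adv = x + eps * sign(grad_x J). Here is a self-contained sketch on a hand-differentiated toy logistic model (not any model from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # a toy "trained" model
x, y = rng.uniform(size=8), 1.0         # one input with label 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# for cross-entropy loss L, dL/dz = p - y and dz/dx = w,
# so the input gradient is (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # confidence drops
```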
5. Deep Learning Approach for COVID-19 Detection in Computed Tomography Images (Cited: 2)
Authors: Mohamad Mahmoud Al Rahhal, Yakoub Bazi, Rami M. Jomaa, Mansour Zuair, Naif Al Ajlan. 《Computers, Materials & Continua》, SCIE EI, 2021, No. 5: 2093-2110 (18 pages)
With the rapid spread of the coronavirus disease 2019 (COVID-19) worldwide, establishing an accurate and fast process to diagnose the disease is important. The routine real-time reverse transcription-polymerase chain reaction (rRT-PCR) test currently in use does not provide such accuracy or speed in the screening process. Deep learning techniques are among the good choices for an accurate and fast COVID-19 screening test. In this study, a new convolutional neural network (CNN) framework for COVID-19 detection using computed tomography (CT) images is proposed. The EfficientNet architecture is applied as the backbone of the proposed network, from which feature maps at different scales are extracted from the input CT scan images. In addition, atrous convolution at different rates is applied to these multi-scale feature maps to generate denser features, which facilitates obtaining COVID-19 findings in CT scan images. The proposed framework is evaluated using a public CT dataset containing 2482 CT scan images from patients of both classes (i.e., COVID-19 and non-COVID-19). To augment the dataset with additional training examples, adversarial example generation is performed. The proposed system validates its superiority over state-of-the-art methods with values exceeding 99.10% on several metrics, such as accuracy, precision, recall, and F1. The proposed system also exhibits good robustness when trained on a small portion of the data (20%), with an accuracy of 96.16%.
Keywords: COVID-19; deep learning; computed tomography; multi-scale features; atrous convolution; adversarial examples
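The multi-rate atrous convolution idea can be sketched directly: dilation enlarges the receptive field while padding equal to the rate keeps the spatial size, so responses at several rates can be concatenated into denser features. This mirrors the idea only; the paper's EfficientNet-based configuration is not reproduced.

```python
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    """Parallel 3x3 convolutions at several dilation rates."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x):
        # concatenate the multi-rate responses into denser features
        return torch.cat([b(x) for b in self.branches], dim=1)

feat = torch.randn(1, 64, 32, 32)       # a multi-scale feature map
print(AtrousBlock(64, 32)(feat).shape)  # torch.Size([1, 96, 32, 32])
```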
6. Learning the Spatiotemporal Evolution Law of Wave Field Based on Convolutional Neural Network (Cited: 1)
Authors: LIU Xing, GAO Zhiyi, HOU Fang, SUN Jinggao. 《Journal of Ocean University of China》, SCIE CAS CSCD, 2022, No. 5: 1109-1117 (9 pages)
Research on the wave field evolution law is highly significant to offshore engineering and marine resource development. Numerical simulations have been conducted for high-precision wave field evolution, providing short-term wave field prediction. However, such evolution occurs over a long period of time, and its accuracy is difficult to improve. In recent years, the use of machine learning methods to study wave field evolution has received increasing attention from researchers. This paper proposes a wave field evolution method based on deep convolutional neural networks. The method effectively correlates the spatiotemporal characteristics of wave data via convolution operations and directly produces offshore forecast results for the Bohai Sea and the Yellow Sea. An attention mechanism, a multi-scale path design, and a hard example mining training strategy are introduced to suppress the interference caused by the Weibull-distributed wave field data and improve the accuracy of the proposed wave field evolution. The 72- and 480-hour evolution experiments in the Bohai Sea and the Yellow Sea show that the proposed method has excellent forecast accuracy and timeliness.
Keywords: wave evolution; machine learning; convolutional neural network; hard example mining
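The hard example mining strategy mentioned above is commonly implemented by ranking per-sample losses and backpropagating only through the hardest fraction. A minimal sketch follows, with a stand-in linear regressor rather than the paper's wave-field CNN:

```python
import torch
import torch.nn.functional as F

def hard_example_loss(pred, target, keep_ratio=0.3):
    # per-sample losses, then keep only the hardest fraction
    per_sample = F.mse_loss(pred, target, reduction="none").mean(dim=1)
    k = max(1, int(keep_ratio * per_sample.numel()))
    hardest, _ = torch.topk(per_sample, k)  # largest losses = hard examples
    return hardest.mean()

model = torch.nn.Linear(16, 4)            # stand-in for the wave-field CNN
x, y = torch.randn(32, 16), torch.randn(32, 4)
loss = hard_example_loss(model(x), y)
loss.backward()
```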
7. An LSTM-Based Malware Detection Using Transfer Learning (Cited: 1)
Authors: Zhangjie Fu, Yongjie Ding, Musaazi Godfrey. 《Journal of Cyber Security》, 2021, No. 1: 11-28 (18 pages)
Mobile malware accounts for a considerable proportion of cyberattacks. With updates to mobile device operating systems and the development of software technology, more and more new malware keeps appearing, and its emergence makes the identification accuracy of existing methods lower and lower; more effective malware detection models are urgently needed. In this paper, we propose a new approach to mobile malware detection that is able to detect newly-emerged malware instances. First, we build and train an LSTM-based model on original benign and malware samples investigated by both static and dynamic analysis techniques. Then, we build a generative adversarial network to generate augmented examples, which can emulate the characteristics of newly-emerged malware. Finally, we use the augmented examples to retrain the 4th and 5th layers of the LSTM network and the last fully connected layer so that the model can recognize newly-emerged malware. Experiments show that our malware detector achieved a classification accuracy of 99.94% when tested on augmented samples and 86.5% on real data with samples of newly-emerged malware.
Keywords: malware detection; long short-term memory networks; generative adversarial networks; transfer learning; augmented examples
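The selective retraining step, freezing everything except the 4th and 5th LSTM layers and the final classifier, can be sketched with PyTorch's per-parameter `requires_grad` flags. The layer sizes below are placeholders, not the paper's configuration:

```python
import torch.nn as nn

lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=5, batch_first=True)
classifier = nn.Linear(128, 2)

for p in lstm.parameters():
    p.requires_grad = False              # freeze the whole recurrent stack
for name, p in lstm.named_parameters():
    if name.endswith("_l3") or name.endswith("_l4"):
        p.requires_grad = True           # 4th and 5th LSTM layers (0-indexed)
for p in classifier.parameters():
    p.requires_grad = True               # last fully connected layer

# the optimizer then sees only the unfrozen parameters, e.g.:
# torch.optim.Adam(p for p in list(lstm.parameters())
#                  + list(classifier.parameters()) if p.requires_grad)
```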
8. An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said. 《Computers, Materials & Continua》, SCIE EI, 2023, No. 9: 3859-3876 (18 pages)
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations; the remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector integrates the classification results of different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. The approach demonstrates excellent detection performance against mainstream AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iteration Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords: deep neural networks; adversarial example; image denoising; adversarial example detection; machine learning; adversarial attack
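The core detection signal is the prediction discrepancy between an input and its denoised copy. Here is a minimal sketch with a single median filter and a placeholder classifier; the paper combines several denoisers, CNNs, and a voting step.

```python
import numpy as np
from scipy.ndimage import median_filter

def classify(image):
    # placeholder classifier: returns a class index; a real detector
    # would query the protected CNN here
    return int(np.argmax(image.sum(axis=(1, 2))))

def looks_adversarial(image):
    # adversarial perturbations tend not to survive denoising, so a
    # label flip after filtering is treated as a detection signal
    denoised = median_filter(image, size=(1, 3, 3))  # channel-wise 3x3
    return classify(image) != classify(denoised)

img = np.random.rand(3, 32, 32)
print(looks_adversarial(img))
```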
9. N-gram MalGAN: Evading machine learning detection via feature n-gram
Authors: Enmin Zhu, Jianjie Zhang, Jijie Yan, Kongyang Chen, Chongzhi Gao. 《Digital Communications and Networks》, SCIE CSCD, 2022, No. 4: 485-491 (7 pages)
In recent years, many adversarial malware examples with different feature strategies, especially GAN and its variants, have been introduced to handle security threats, e.g., evading the detection of machine learning detectors. However, these solutions still suffer from complicated deployment or long running times. In this paper, we propose an n-gram MalGAN method to solve these problems. We borrow the idea of n-grams from the Natural Language Processing (NLP) area to expand the feature sources for adversarial malware examples in MalGAN. The n-gram MalGAN obtains the feature vector directly from the hexadecimal bytecodes of the executable file. It can be implemented easily and conveniently in a simple programming language (e.g., C++), with no need for any prior knowledge of the executable file or any professional feature extraction tools. These features are functionally independent and thus can be added to the non-functional area of the malicious program to maintain its original executability. In this way, the n-gram approach makes the adversarial attack easier and more convenient. Experimental results show that the evasion rate of the n-gram MalGAN is at least 88.58% when attacking different machine learning algorithms under an appropriate group rate, growing to 100% for the Random Forest algorithm.
Keywords: machine learning; n-gram; MalGAN; adversarial examples
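Counting n-grams over the raw hexadecimal bytecodes needs nothing beyond file reading, which is the convenience the abstract emphasizes. A minimal Python sketch (the paper notes even plain C++ suffices); the file path and vocabulary are hypothetical:

```python
from collections import Counter

def byte_ngrams(path, n=2):
    """Count byte n-grams over a file's hexadecimal representation."""
    with open(path, "rb") as f:
        hexstr = f.read().hex()
    # slide over the hex string two characters (one byte) at a time
    grams = [hexstr[i:i + 2 * n]
             for i in range(0, len(hexstr) - 2 * n + 2, 2)]
    return Counter(grams)

# features = byte_ngrams("malware_sample.bin", n=2)
# vector = [features.get(g, 0) for g in vocabulary]  # vocabulary: chosen n-grams
```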
10. A new method of constructing adversarial examples for quantum variational circuits
Authors: 颜金歌, 闫丽丽, 张仕斌. 《Chinese Physics B》, SCIE EI CAS CSCD, 2023, No. 7: 268-272 (5 pages)
A quantum variational circuit is a quantum machine learning model similar to a neural network. A crafted adversarial example can lead the model to incorrect results, and training the model on adversarial examples greatly improves its robustness. The existing method uses automatic differentiation or finite differences to obtain a gradient and uses it to construct adversarial examples. This paper proposes an innovative method for constructing adversarial examples of quantum variational circuits, in which the gradient is obtained by measuring the expectation values of a qubit in a series of quantum circuits. The method can be used to construct adversarial examples for a quantum variational circuit classifier. The implementation results prove the effectiveness of the proposed method, which requires fewer resources and is more efficient than the existing method.
Keywords: quantum variational circuit; adversarial examples; quantum machine learning; quantum circuit
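The gradient-by-measurement idea is in the spirit of the standard parameter-shift rule, where a derivative is recovered from expectation values of shifted circuits rather than from automatic differentiation. Below is a one-qubit sketch, simulated exactly in NumPy; it illustrates the rule, not the paper's exact series-circuit construction.

```python
import numpy as np

def expectation(theta):
    # <0| RY(theta)^dagger Z RY(theta) |0> = cos(theta), simulated exactly
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    z = np.array([[1, 0], [0, -1]])
    return float(state @ z @ state)

def parameter_shift_grad(theta):
    # two extra circuit evaluations recover the exact derivative
    return 0.5 * (expectation(theta + np.pi / 2)
                  - expectation(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta), -np.sin(theta))  # the two values agree
```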
11. Two-Stream Architecture as a Defense against Adversarial Example
Authors: Hao Ge, Xiao-Guang Tu, Mei Xie, Zheng Ma. 《Journal of Electronic Science and Technology》, CAS CSCD, 2022, No. 1: 81-91 (11 pages)
The performance of deep learning on many tasks has been impressive. However, recent studies have shown that deep learning systems are vulnerable to small, specifically crafted perturbations that are imperceptible to humans. Images with such perturbations are called adversarial examples. They have proven to be an indisputable threat to applications based on deep neural networks (DNNs), yet DNNs remain incompletely understood, which has prevented the development of efficient defenses against adversarial examples. This study proposes a two-stream architecture to protect convolutional neural networks (CNNs) from adversarial-example attacks. Our model applies the "two-stream" idea used in the security field. Because the "high-resolution" and "low-resolution" networks differ in feature extraction, the model successfully defends against different kinds of attack methods. This study experimentally demonstrates that our two-stream architecture is difficult to defeat with state-of-the-art attacks and is robust to adversarial examples built by currently known attacking algorithms.
Keywords: adversarial example; deep learning; neural network
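The defense intuition, two pathways that extract features at different resolutions, can be sketched as follows; both streams are stand-in convolutions rather than the paper's networks:

```python
import torch
import torch.nn.functional as F

class TwoStream(torch.nn.Module):
    """High-resolution and low-resolution streams whose outputs are fused;
    a perturbation tuned for one detail level tends to miss the other."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.hi = torch.nn.Conv2d(3, n_classes, 3, padding=1)
        self.lo = torch.nn.Conv2d(3, n_classes, 3, padding=1)

    def forward(self, x):
        hi = self.hi(x).mean(dim=(2, 3))               # full-resolution stream
        small = F.interpolate(x, scale_factor=0.25, mode="bilinear")
        lo = self.lo(small).mean(dim=(2, 3))           # low-resolution stream
        return hi + lo                                 # fused logits

print(TwoStream()(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 10])
```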
12. Defending Adversarial Examples by a Clipped Residual U-Net Model
Authors: Kazim Ali, Adnan N. Qureshi, Muhammad Shahid Bhatti, Abid Sohail, Mohammad Hijji. 《Intelligent Automation & Soft Computing》, SCIE, 2023, No. 2: 2237-2256 (20 pages)
Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are endangered by adversarial attacks, which can quickly spoil deep learning models, e.g., the different convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet Defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We have experimentally shown that our approach outperforms these previous defensive techniques. The proposed CRU-Net model maps adversarial image examples to clean images by eliminating the adversarial perturbation. The defensive approach is based on residual and U-Net learning. Extensive experiments on the MNIST and CIFAR10 datasets show that the proposed CRU-Net defense model prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and the restored clean image examples produced by the CRU-Net defense model.
Keywords: adversarial examples; adversarial attacks; defense method; residual learning; U-Net; cGAN; CRU-Net model
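The clipped residual mapping at the heart of such a model can be sketched compactly: a network predicts an anti-perturbation that is added back to the adversarial input, and the sum is clipped to the valid pixel range. A single convolution stands in for the paper's U-Net:

```python
import torch

class ClippedResidualDenoiser(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # stand-in body; the paper uses a residual U-Net here
        self.body = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x_adv):
        residual = self.body(x_adv)             # estimated anti-perturbation
        return torch.clamp(x_adv + residual, 0.0, 1.0)  # clip to pixel range

x_adv = torch.rand(1, 3, 32, 32)
x_restored = ClippedResidualDenoiser()(x_adv)   # fed to the classifier
```

Training such a denoiser would minimize a reconstruction loss between `x_restored` and the corresponding clean image.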
13. DroidEnemy: Battling adversarial example attacks for Android malware detection
Authors: Neha Bala, Aemun Ahmar, Wenjia Li, Fernanda Tovar, Arpit Battu, Prachi Bambarkar. 《Digital Communications and Networks》, SCIE CSCD, 2022, No. 6: 1040-1047 (8 pages)
In recent years, we have witnessed a surge in mobile devices such as smartphones, tablets, and smart watches, most of which are based on the Android operating system. Because these Android-based mobile devices are increasingly popular, they are now the primary target of mobile malware, which can lead to both privacy leakage and property loss. To address the rapidly deteriorating security issues caused by mobile malware, various research efforts have been made to develop novel and effective detection mechanisms. Nevertheless, to avoid being caught by these detection mechanisms, malware authors initiate adversarial example attacks by tampering with mobile applications. In this paper, several types of adversarial example attacks are investigated and a feasible approach is proposed to fight against them. First, we look at adversarial example attacks on the Android system and prior solutions that have been proposed to address them. We then focus on the data poisoning and evasion attack models, which may mutate various application features, such as API calls, permissions, and the class label, to produce adversarial examples. We also propose and design a malware detection approach that is resistant to adversarial examples. To observe how the malware detection system is influenced by adversarial example attacks, we conduct experiments on real Android application datasets composed of both malware and benign applications. Experimental results clearly indicate that the performance of Android malware detection is severely degraded when facing adversarial example attacks.
Keywords: security; malware detection; adversarial example attack; data poisoning attack; evasion attack; machine learning; Android
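The evasion attack model described above can be illustrated with a greedy feature-addition sketch: benign-looking binary features (extra API calls or permissions) are added to a malware feature vector until a stand-in linear detector flips its label. All weights and features here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=20), -0.5   # stand-in detector: score > 0 => malware
x = (rng.uniform(size=20) > 0.7).astype(float)   # binary app features

def evade(x, w, b, max_flips=10):
    x = x.copy()
    for _ in range(max_flips):
        if w @ x + b <= 0:
            return x                # now classified benign
        # only additions (0 -> 1), picking the feature that lowers
        # the malware score the most
        candidates = [j for j in range(len(x)) if x[j] == 0 and w[j] < 0]
        if not candidates:
            break
        j = min(candidates, key=lambda j: w[j])
        x[j] = 1.0
    return x

x_adv = evade(x, w, b)
print(w @ x + b, w @ x_adv + b)     # score before and after mutation
```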
14. Control Task for Reinforcement Learning with Known Optimal Solution for Discrete and Continuous Actions
Authors: Michael C. ROTTGER, Andreas W. LIEHR. 《Journal of Intelligent Learning Systems and Applications》, 2009, No. 1: 28-41 (14 pages)
Overall research in Reinforcement Learning (RL) concentrates on discrete sets of actions, but for certain real-world problems it is important to have methods that can find good strategies using actions drawn from continuous sets. This paper describes a simple control task called the direction finder and its known optimal solution for both discrete and continuous actions. It allows RL solution methods to be compared on the basis of their value functions. To solve the control task for continuous actions, a simple idea for generalizing them by means of feature vectors is presented. The resulting algorithm is applied using different choices of feature calculations. For comparing their performance, a simple measure is ...
Keywords: comparison; continuous actions; example problem; reinforcement learning; performance
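The abstract's idea of generalizing continuous actions by means of feature vectors suggests linear value approximation, Q(s, a) = w . phi(s, a), updated by semi-gradient Q-learning with the maximum taken over sampled candidate actions. A generic sketch under that assumption follows; the direction-finder task's specifics are not given here.

```python
import numpy as np

def phi(state, action):
    # hypothetical features: bias, state, continuous action, interaction
    return np.array([1.0, state, action, state * action])

w = np.zeros(4)
alpha, gamma = 0.1, 0.9

def q(state, action):
    return w @ phi(state, action)

def td_update(s, a, reward, s_next, candidate_actions):
    global w
    # max over a sampled grid stands in for maximizing over the
    # continuous action set
    target = reward + gamma * max(q(s_next, an) for an in candidate_actions)
    w += alpha * (target - q(s, a)) * phi(s, a)   # semi-gradient Q-learning

td_update(s=0.2, a=0.5, reward=1.0, s_next=0.3,
          candidate_actions=np.linspace(-1, 1, 21))
```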
15. A Survey on Adversarial Example
Authors: Jiawei Zhang, Jinwei Wang. 《Journal of Information Hiding and Privacy Protection》, 2020, No. 1: 47-57 (11 pages)
In recent years, deep learning has become a hotspot and a core method in the field of machine learning. In machine vision, deep learning performs excellently in feature extraction and feature representation, making it widely used in areas such as self-driving cars and face recognition. Although deep learning solves large-scale complex problems very well, the latest research shows that deep learning network models are very vulnerable to adversarial attacks: adding a weak perturbation to the original input leads to wrong output from the neural network, yet to the human eye the difference between the original and the perturbed images is hardly noticeable. In this paper, we summarize the research on adversarial examples in the field of image processing. We first introduce the background and representative models of deep learning, then present the main methods for generating adversarial examples and for defending against adversarial attacks, and finally offer some thoughts and future prospects for adversarial examples.
Keywords: neural network; deep learning; adversarial example; survey
16. Design of an Adversarial Attack Scheme Against the YOLOv8 Object Detector
Authors: 李秀滢, 赵海淇, 陈雪松, 张健毅, 赵成. 《信息安全研究》, PKU Core, 2025, No. 3: 221-230 (10 pages)
Cameras based on AI object detection technology are now widely deployed, yet in the real world AI-based object detection models are susceptible to adversarial example attacks. Existing adversarial attack schemes were designed against earlier object detection models and do not attack the latest YOLOv8 detector effectively. To solve this problem, a completely new adversarial patch attack scheme is designed for the YOLOv8 object detector. On top of minimizing the detector's confidence output, the scheme introduces an EMA attention mechanism to strengthen feature extraction during patch generation, thereby enhancing the attack. Experiments show that the scheme achieves excellent attack effectiveness and transferability; printing the resulting adversarial patch on clothing for verification tests also yields excellent attack results, indicating strong practicality.
Keywords: deep learning; adversarial examples; YOLOv8; object detection; adversarial patch
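The confidence-minimization loop at the core of such patch attacks can be sketched generically: optimize patch pixels so the detector's confidence on the patched image drops. `detector_confidence` is a placeholder for parsing YOLOv8 outputs, and the EMA attention module is omitted.

```python
import torch

def detector_confidence(image):
    # placeholder objectness score; a real attack would parse the
    # detector's outputs for the target class here
    return torch.sigmoid(image.mean())

image = torch.rand(3, 256, 256)                    # scene to attack
patch = torch.rand(3, 48, 48, requires_grad=True)  # learnable patch pixels
opt = torch.optim.Adam([patch], lr=0.01)

for _ in range(100):
    patched = image.clone()
    patched[:, 100:148, 100:148] = patch.clamp(0, 1)  # paste the patch
    loss = detector_confidence(patched)               # minimize confidence
    opt.zero_grad()
    loss.backward()
    opt.step()
```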
17. Conceptualized Expression of Teachers' Teaching Experience: Connotation, Logic, and Path, from the Perspective of Experiential Learning Cycle Theory (Cited: 1)
Authors: 覃千钟, 魏宏聚. 《教育理论与实践》, PKU Core, 2025, No. 7: 34-41 (8 pages)
The value of teachers' teaching experience is realized through conceptualization. Conceptualizing teaching experience means analyzing and refining it in teaching practice so that it becomes structured, systematic, and operational. Drawing on Kolb's experiential learning cycle theory, the conceptualization of teaching experience follows a practical logic of presenting the experience, characterizing it, generalizing it, and verifying it. Teachers should use teaching slices as a tool: select key knowledge events from lesson cases as experience slices, characterize the attributes and categories of the experience in each slice, generalize its operational structure, and test its practical effect, thereby conceptualizing teaching experience into a teaching "theory-in-use" that teachers endorse.
Keywords: teachers; conceptualization of teaching experience; experiential learning cycle theory; lesson cases; teaching slices
18. A Vulnerability-Aware Method for Enhancing the Robustness of Adversarial Training
Authors: 贾婧玥, 金澎, 王兵, 陈兴元. 《计算机工程与设计》, PKU Core, 2025, No. 1: 230-236 (7 pages)
To prevent vulnerable samples from degrading the robustness and accuracy of adversarially trained models, this paper proposes a method that reweights the training data from the perspective of the decision boundary. Adversarial examples near the decision boundary are obtained by iterative search. Since lower entropy indicates greater sample vulnerability, the paper proposes using entropy to assess sample vulnerability while avoiding perturbation interference and misclassification. According to the entropy of the prediction distribution, the loss of each adversarial training sample is adjusted by an appropriate penalty factor, increasing the training strength on vulnerable samples and thus improving model robustness. Experimental results show that the proposed algorithm significantly improves adversarial robustness while maintaining model accuracy.
Keywords: adversarial examples; decision boundary; adversarial training; robustness; accuracy; vulnerability; deep learning
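The entropy-based reweighting can be sketched as a per-sample weighted loss: compute the entropy of each prediction distribution, weight low-entropy samples (more vulnerable, per the abstract) more heavily, and average. The exponential weighting form below is an assumption, not the paper's exact penalty factor.

```python
import torch
import torch.nn.functional as F

def vulnerability_weighted_loss(logits, labels, beta=1.0):
    probs = F.softmax(logits, dim=1)
    # entropy of each sample's prediction distribution
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    # assumed penalty factor: lower entropy => larger weight
    weights = torch.exp(-beta * entropy)
    ce = F.cross_entropy(logits, labels, reduction="none")
    return (weights * ce).mean()

logits = torch.randn(8, 10, requires_grad=True)   # adversarial batch
labels = torch.randint(0, 10, (8,))
vulnerability_weighted_loss(logits, labels).backward()
```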
19. A Natural Adversarial Patch Generation Method Based on Guided Diffusion Models (Cited: 1)
Authors: 何琨, 佘计思, 张子君, 陈晶, 汪欣欣, 杜瑞颖. 《电子学报》, EI CAS CSCD, PKU Core, 2024, No. 2: 564-573 (10 pages)
In recent years, adversarial patch attacks in the physical world have drawn wide attention for their impact on the security of deep learning models. Existing work concentrates on generating patches that attack well in the physical world but ignores the difference between patch patterns and natural images, so the generated patches are often unnatural and easily noticed by observers. To solve this problem, this paper proposes a natural adversarial patch generation method based on a guided diffusion model. Specifically, a predictor of the patch's attack success rate is built by parsing the output of the target detector, and the gradient of this predictor is used as a condition to guide the reverse diffusion process of a pretrained diffusion model, generating patches that are more natural while maintaining a high attack success rate. Extensive experiments in both the digital and physical worlds evaluate the attack effectiveness of the patches against various object detection models as well as their naturalness. The results show that by combining the attack-success-rate predictor with the diffusion model, the method generates more natural adversarial patches than existing schemes while preserving attack performance.
Keywords: object detection; adversarial patch; diffusion model; adversarial examples; adversarial attack; deep learning
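One reverse-diffusion step with predictor guidance can be sketched in the classifier-guidance style: the gradient of an attack-success predictor shifts the denoising mean. Both model functions below are placeholders, not the paper's detector-derived predictor or its pretrained diffusion model.

```python
import torch

def diffusion_mean_and_var(x_t, t):
    # placeholder for the pretrained DDPM's posterior mean and variance
    return 0.9 * x_t, torch.full_like(x_t, 0.01)

def success_logprob(x_t):
    # placeholder for the attack-success-rate predictor's log-probability
    return -((x_t - 0.5) ** 2).sum()

def guided_step(x_t, t, scale=5.0):
    x_t = x_t.detach().requires_grad_(True)
    # gradient of the predictor conditions the reverse process
    grad = torch.autograd.grad(success_logprob(x_t), x_t)[0]
    mean, var = diffusion_mean_and_var(x_t, t)
    guided_mean = mean + scale * var * grad   # shift mean toward attack success
    return guided_mean + var.sqrt() * torch.randn_like(x_t)

x = torch.randn(3, 64, 64)                    # start from noise
for t in reversed(range(10)):
    x = guided_step(x, t)                     # x becomes the patch sample
```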
20. Creating New Reading Spaces and Building a Future Learning Center: University Library Space Renovation as an Example (Cited: 1)
Authors: 袁艳, 华春花, 包信欣. 《新世纪图书馆》, CSSCI, 2024, No. 8: 48-53 (6 pages)
The transformation of higher education influences and guides the innovative transformation of university libraries' spatial forms and the repositioning of their roles and functions. The space renovation of a university library future learning center is systematic, open, integrative, diverse, and scenario-based, and the renovated space supports autonomous learning, teaching, innovation and creation, digital scholarship, and cultural reading. University library space renovation is not a simple upgrade of the existing space; it requires thinking and exploration in top-level design, technical support, scenario construction, and staff development.
Keywords: future learning center; space construction; university libraries