Climate downscaling is used to transform large-scale meteorological data into small-scale data with enhanced detail, which finds wide applications in climate modeling, numerical weather forecasting, and renewable energy. Although deep-learning-based downscaling methods effectively capture the complex nonlinear mapping between meteorological data of varying scales, supervised deep-learning-based downscaling methods suffer from insufficient high-resolution data in practice, and unsupervised methods struggle to accurately infer small-scale specifics from limited large-scale inputs due to small-scale uncertainty. This article presents DualDS, a dual-learning framework utilizing a Generative Adversarial Network-based neural network and subgrid-scale auxiliary information for climate downscaling. Such a learning method is unified in a two-stream framework through up- and downsamplers, where the downsampler is used to simulate the information loss process during the upscaling, and the upsampler is used to reconstruct lost details and correct errors incurred during the upscaling. This dual learning strategy can eliminate the dependence on high-resolution ground truth data in the training process and refine the downscaling results by constraining the mapping process. Experimental findings demonstrate that DualDS is comparable to several state-of-the-art deep learning downscaling approaches, both qualitatively and quantitatively. Specifically, for a single surface-temperature data downscaling task, our method is comparable with other unsupervised algorithms on the same dataset, and we can achieve a 0.469 dB higher peak signal-to-noise ratio, 0.017 higher structural similarity, 0.08 lower RMSE, and the best correlation coefficient. In summary, this paper presents a novel approach to addressing small-scale uncertainty issues in unsupervised downscaling processes.
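Below is a minimal PyTorch sketch of the dual-learning consistency idea described in this abstract: a downsampler that simulates the information loss of coarse-graining and an upsampler that reconstructs detail, trained with a cycle loss computed only on the available low-resolution field. The module sizes, the x4 scale factor, and the plain L1 loss are illustrative assumptions; the actual DualDS generator, adversarial discriminator, and subgrid-scale auxiliary inputs are not reproduced here.

```python
import torch
import torch.nn as nn

class Downsampler(nn.Module):
    """Simulates the information loss of coarse-graining (x4 in space)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, stride=4, padding=1))
    def forward(self, x):
        return self.net(x)

class Upsampler(nn.Module):
    """Reconstructs fine-scale detail from the coarse field (x4 in space)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

down, up = Downsampler(), Upsampler()
opt = torch.optim.Adam(list(down.parameters()) + list(up.parameters()), lr=1e-4)

lr_field = torch.randn(8, 1, 16, 16)          # low-resolution temperature patches (toy data)
for _ in range(10):                           # toy training loop
    hr_guess = up(lr_field)                   # hypothesised high-resolution field
    lr_cycle = down(hr_guess)                 # re-degrade it with the downsampler
    loss = nn.functional.l1_loss(lr_cycle, lr_field)   # cycle consistency on LR only
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that no high-resolution target appears anywhere in the loss, which is the point of the dual formulation.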
Accurate prediction of the remaining useful life (RUL) is crucial for the design and management of lithium-ion batteries. Although various machine learning models offer promising predictions, one critical but often overlooked challenge is their demand for considerable run-to-failure data for training. Collection of such training data leads to prohibitive testing efforts, as the run-to-failure tests can last for years. Here, we propose a semi-supervised representation learning method to enhance prediction accuracy by learning from data without RUL labels. Our approach builds on a sophisticated deep neural network that comprises an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data regardless of the existence of RUL labels. The approach is validated using three datasets collected from 34 batteries operating under various conditions, encompassing over 19,900 charge and discharge cycles. Our method achieves a root mean squared error (RMSE) within 25 cycles even when only 1/50 of the training dataset is labelled, representing a reduction of 48% compared to the conventional approach. We also demonstrate the method's robustness with varying numbers of labelled data and different weights assigned to the three decoder heads. The projection of extracted features into a low-dimensional space reveals that our method effectively learns degradation features from unlabelled data. Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
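A hedged PyTorch sketch of the encoder-plus-three-decoder-heads idea follows. The specific head tasks (last-step reconstruction, window-mean regression, RUL regression), the GRU encoder, and the loss weights are assumptions for illustration only; the paper's exact architecture and datasets are not reproduced.

```python
import torch
import torch.nn as nn

class MultiHeadRULNet(nn.Module):
    def __init__(self, n_feat=8, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_feat, hidden, batch_first=True)
        self.recon_head = nn.Linear(hidden, n_feat)   # reconstructs the last step (assumed task)
        self.trend_head = nn.Linear(hidden, n_feat)   # regresses the window mean (assumed task)
        self.rul_head = nn.Linear(hidden, 1)          # RUL regression, labelled windows only
    def forward(self, x):
        _, h = self.encoder(x)                        # h: (1, batch, hidden)
        z = h.squeeze(0)
        return self.recon_head(z), self.trend_head(z), self.rul_head(z).squeeze(-1)

model = MultiHeadRULNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
w_recon, w_trend, w_rul = 1.0, 1.0, 10.0              # decoder-head weights (assumed)

x = torch.randn(32, 50, 8)               # 32 windows of 50 cycles x 8 features (toy data)
y_rul = torch.rand(32) * 2000             # RUL labels in cycles (toy data)
labelled = torch.rand(32) < 0.02          # only ~1/50 of windows carry a label

recon, trend, rul = model(x)
loss = w_recon * nn.functional.mse_loss(recon, x[:, -1]) \
     + w_trend * nn.functional.mse_loss(trend, x.mean(dim=1))
if labelled.any():                        # supervised term only where RUL labels exist
    loss = loss + w_rul * nn.functional.mse_loss(rul[labelled], y_rul[labelled])
opt.zero_grad(); loss.backward(); opt.step()
```

The self-supervised heads let every window contribute to the shared encoder, while the RUL head trains only on the labelled fraction.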
The unsupervised vehicle re-identification task aims at identifying specific vehicles in surveillance videos without utilizing annotation information. Due to the higher similarity in appearance between vehicles compared to pedestrians, pseudo-labels generated through clustering are ineffective in mitigating the impact of noise, and the feature distance between inter-class and intra-class samples has not been adequately improved. To address the aforementioned issues, we design a dual contrastive learning method based on knowledge distillation. During each iteration, we utilize a teacher model to randomly partition the entire dataset into two sub-domains based on clustering pseudo-label categories. By conducting contrastive learning between the two student models, we extract more discernible vehicle identity cues to mitigate the problem of imbalanced data distribution. Subsequently, we propose a context-aware pseudo-label refinement strategy that leverages contextual features by progressively associating granularity information from different bottleneck blocks. To produce more trustworthy pseudo-labels and lessen noise interference during the clustering process, context-aware scores are obtained by calculating the similarity between global features and contextual ones, which are subsequently added to the pseudo-label encoding process. Extensive experimental results on publicly available datasets show that the proposed method achieves excellent performance in overcoming label noise and optimizing data distribution.
Underwater image enhancement aims to restore a clean appearance and thus improve the quality of underwater degraded images. Current methods feed the whole image directly into the model for enhancement. However, they ignore that the R, G and B channels of underwater degraded images present varied degrees of degradation due to the selective absorption of light. To address this issue, we propose an unsupervised multi-expert learning model that considers the enhancement of each color channel. Specifically, an unsupervised architecture based on a generative adversarial network is employed to alleviate the need for paired underwater images. Based on this, we design a generator, including a multi-expert encoder, a feature fusion module and a feature fusion-guided decoder, to generate the clear underwater image. Accordingly, a multi-expert discriminator is proposed to verify the authenticity of the R, G and B channels, respectively. In addition, content perceptual loss and edge loss are introduced into the loss function to further improve the content and details of the enhanced images. Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in visual quality, and various metrics (PSNR, SSIM, UIQM and UCIQE) evaluated on our enhanced images show clear improvements.
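The sketch below illustrates one plausible minimal reading of the generator described above: a per-channel multi-expert encoder, a feature fusion module, and a fusion-guided decoder. Layer sizes, and the omission of the multi-expert discriminator and the perceptual/edge losses, are simplifications of my own.

```python
import torch
import torch.nn as nn

class ChannelExpert(nn.Module):
    """One encoder 'expert' for a single colour channel."""
    def __init__(self, feat=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class MultiExpertGenerator(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        self.experts = nn.ModuleList([ChannelExpert(feat) for _ in range(3)])
        self.fuse = nn.Conv2d(3 * feat, feat, 1)           # feature fusion module
        self.decode = nn.Conv2d(feat, 3, 3, padding=1)     # fusion-guided decoder (single layer here)
    def forward(self, img):                                # img: (B, 3, H, W) in RGB
        feats = [exp(img[:, c:c + 1]) for c, exp in enumerate(self.experts)]
        fused = torch.relu(self.fuse(torch.cat(feats, dim=1)))
        return torch.sigmoid(self.decode(fused))

gen = MultiExpertGenerator()
out = gen(torch.rand(2, 3, 64, 64))       # enhanced image, same shape as the input
print(out.shape)                          # torch.Size([2, 3, 64, 64])
```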
Hybrid precoding is considered a promising low-cost technique for millimeter wave (mm-wave) massive Multi-Input Multi-Output (MIMO) systems. In this work, referring to time-varying propagation circumstances, we propose an online hybrid beamforming scheme based on semi-supervised Incremental Learning (IL). Firstly, given the constraint of constant modulus on the analog beamformer and combiner, we propose a new broad-network-based structure for the design model of hybrid beamforming. Compared with the existing network structure, the proposed structure can achieve better transmission performance and lower complexity. Moreover, to further enhance the efficiency of IL, by combining a semi-supervised graph with IL, we propose a hybrid beamforming scheme based on chunk-by-chunk semi-supervised learning, where only a few transmissions are required to calculate the label and all other unlabelled transmissions are also put into a training data chunk. Unlike the existing single-by-single approach, where transmissions during the model update are not taken into consideration, all transmissions, even those occurring during the model update, contribute to the model update in the proposed method. Because the amount of unlabelled transmissions during the model update is very large and they also carry some information, the prediction performance can be enhanced to some extent by these unlabelled channel data. Simulation results demonstrate that the spectral efficiency of the proposed method outperforms that of the existing single-by-single approach. Besides, we prove that the general complexity of the proposed method is lower than that of the existing approach and give the condition under which its absolute complexity outperforms that of the existing approach.
Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label to best improve performance while limiting the number of new labels. "Model Change" active learning quantifies the resulting change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
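A small NumPy sketch of graph-based SSL with a truncated Laplacian spectrum follows: labels are regressed in the basis of the smallest eigenvectors of the graph Laplacian, so computation and storage stay bounded by the number of retained eigenpairs. The dense RBF graph, the ridge term, and the toy two-cluster data are assumptions; the model-change acquisition function and Laplace approximation from the paper are not shown.

```python
import numpy as np

def truncated_laplacian_ssl(X, y_labelled, labelled_idx, n_eig=20, tau=0.1, sigma=1.0):
    """Graph SSL using only the smallest `n_eig` eigenpairs of the graph Laplacian."""
    # Dense RBF similarity graph (a sparse k-NN graph would be used at scale).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2)); np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                        # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    vals, vecs = vals[:n_eig], vecs[:, :n_eig]       # spectral truncation
    # One-hot labels on the labelled nodes, ridge regression in the spectral basis.
    n_class = y_labelled.max() + 1
    Y = np.eye(n_class)[y_labelled]                  # (n_labelled, n_class)
    Phi = vecs[labelled_idx]                         # eigenvector features at labelled nodes
    A = Phi.T @ Phi + np.diag(vals) + tau * np.eye(n_eig)
    C = np.linalg.solve(A, Phi.T @ Y)
    return (vecs @ C).argmax(1)                      # predicted class for every node

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
lab = np.array([0, 1, 50, 51])                       # four labelled points
pred = truncated_laplacian_ssl(X, y[lab], lab)
print((pred == y).mean())                            # accuracy on all 100 nodes
```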
With the rapid development of Internet of Things (IoT) technology, IoT systems have been widely applied in healthcare, transportation, home, and other fields. However, with the continuous expansion of the scale and increasing complexity of IoT systems, their stability and security issues have become increasingly prominent. Thus, it is crucial to detect anomalies in the IoT time series collected from various sensors. Recently, deep learning models have been leveraged for IoT anomaly detection. However, owing to the challenges associated with data labeling, most IoT anomaly detection methods resort to unsupervised learning techniques. Nevertheless, the absence of accurate abnormal information in unsupervised learning methods limits their performance. To address these problems, we propose AS-GCN-MTM, an adaptive structural Graph Convolutional Network (GCN)-based framework using a mean-teacher mechanism for anomaly identification. It performs better than unsupervised methods while using only a small amount of labeled data. The mean-teacher mechanism is an effective semi-supervised learning method that utilizes unlabeled data for training to improve the generalization ability and performance of the model. However, the dependencies between data are often unknown in time series data. To solve this problem, we designed a graph-structure adaptive learning layer based on neural networks, which can automatically learn the graph structure from time series data. It not only better captures the relationships between nodes but also enhances the model's performance by augmenting key data. Experiments have demonstrated that our method improves the baseline model with the highest F1 value by 10.4%, 36.1%, and 5.6%, respectively, on three real datasets with a 10% data labeling rate.
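Below is a hedged PyTorch sketch of the mean-teacher mechanism only: the teacher is an exponential moving average (EMA) of the student, and a consistency loss ties the student's predictions on perturbed unlabeled windows to the teacher's. A plain MLP stands in for the adaptive-structure GCN, and the noise level and consistency weight are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights follow an exponential moving average of the student."""
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(alpha).add_(ps, alpha=1 - alpha)

x_lab, y_lab = torch.randn(8, 16), torch.randint(0, 2, (8,))   # few labeled windows
x_unlab = torch.randn(64, 16)            # the bulk of the IoT windows are unlabeled

for step in range(100):
    sup = nn.functional.cross_entropy(student(x_lab), y_lab)
    noisy = x_unlab + 0.05 * torch.randn_like(x_unlab)          # perturbed student input
    cons = nn.functional.mse_loss(
        student(noisy).softmax(-1), teacher(x_unlab).softmax(-1).detach())
    loss = sup + 1.0 * cons              # consistency weight is an assumption
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(teacher, student)
```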
Quantifying the number of individuals in images or videos to estimate crowd density is a challenging yet crucial task with significant implications for fields such as urban planning and public safety. Crowd counting has attracted considerable attention in the field of computer vision, leading to the development of numerous advanced models and methodologies. These approaches vary in terms of supervision techniques, network architectures, and model complexity. Currently, most crowd counting methods rely on fully supervised learning, which has proven to be effective. However, this approach presents challenges in real-world scenarios, where labeled data and ground-truth annotations are often scarce. As a result, there is an increasing need to explore unsupervised and semi-supervised methods to effectively address crowd counting tasks in practical applications. This paper offers a comprehensive review of crowd counting models, with a particular focus on semi-supervised and unsupervised approaches based on their supervision paradigms. We summarize and critically analyze the key methods in these two categories, highlighting their strengths and limitations. Furthermore, we provide a comparative analysis of prominent crowd counting methods using widely adopted benchmark datasets. We believe that this survey will offer valuable insights and guide future advancements in crowd counting technology.
The aim of this paper is to broaden the application of the Stochastic Configuration Network (SCN) in the semi-supervised domain by utilizing common unlabeled data in daily life. This can enhance the classification accuracy of decentralized SCN algorithms while effectively protecting user privacy. To this end, we propose a decentralized semi-supervised learning algorithm for SCN, called DMT-SCN, which introduces teacher and student models by combining the idea of consistency regularization to improve the response speed of model iterations. In order to reduce the possible negative impact of unsupervised data on the model, we purposely change the way noise is added to the unlabeled data. Simulation results show that the algorithm can effectively utilize unlabeled data to improve the classification accuracy of SCN training and is robust under different ground simulation environments.
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) the segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. To be more precise, SADT trains the Student Network by using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capabilities in low-contrast and local areas. In this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that our proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT was able to achieve a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
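The following sketch shows a bidirectional copy-paste operation of the kind referred to above: a rectangle of the labeled image is pasted into the unlabeled one and vice versa, with ground-truth labels and pseudo-labels mixed by the same binary mask. The box size is arbitrary, and the progressive high-entropy filtering and dual-teacher networks are not reproduced.

```python
import torch

def bidirectional_copy_paste(img_l, img_u, lab_l, pseudo_u, box_frac=0.5):
    """Paste a random rectangle of the labeled image into the unlabeled one and
    vice versa; labels and pseudo-labels are mixed with the same binary mask."""
    _, h, w = img_l.shape
    bh, bw = int(h * box_frac), int(w * box_frac)
    top = torch.randint(0, h - bh + 1, (1,)).item()
    left = torch.randint(0, w - bw + 1, (1,)).item()
    mask = torch.zeros(h, w)
    mask[top:top + bh, left:left + bw] = 1.0             # 1 inside the pasted box

    mix_in = img_u * (1 - mask) + img_l * mask           # labeled patch -> unlabeled image
    mix_out = img_l * (1 - mask) + img_u * mask          # unlabeled patch -> labeled image
    tgt_in = pseudo_u * (1 - mask) + lab_l * mask
    tgt_out = lab_l * (1 - mask) + pseudo_u * mask
    return (mix_in, tgt_in), (mix_out, tgt_out)

img_l, img_u = torch.rand(1, 128, 128), torch.rand(1, 128, 128)   # toy slices
lab_l = torch.randint(0, 4, (128, 128)).float()           # ground-truth mask (toy)
pseudo_u = torch.randint(0, 4, (128, 128)).float()        # teacher pseudo-label (toy)
(inward, _), (outward, _) = bidirectional_copy_paste(img_l, img_u, lab_l, pseudo_u)
```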
Large amounts of labeled data are usually needed for training deep neural networks in medical image studies, particularly in medical image classification. However, in the field of semi-supervised medical image analysis, labeled data is very scarce due to patient privacy concerns. For researchers, obtaining high-quality labeled images is exceedingly challenging because it involves manual annotation and clinical understanding. In addition, skin datasets are highly suitable for medical image classification studies due to the inter-class relationships and inter-class similarities of skin lesions. In this paper, we propose a model called Coalition Sample Relation Consistency (CSRC), a consistency-based method that leverages Canonical Correlation Analysis (CCA) to capture the intrinsic relationships between samples. Considering that traditional consistency-based models only focus on the consistency of predictions, we additionally explore the similarity between features by using CCA. We enforce feature relation consistency based on traditional models, encouraging the model to learn more meaningful information from unlabeled data. Finally, considering that cross-entropy loss is not as suitable as the supervised loss when studying imbalanced datasets (i.e., ISIC 2017 and ISIC 2018), we improve the supervised loss to achieve better classification accuracy. Our study shows that this model performs better than many semi-supervised methods.
Unsupervised vehicle re-identification (Re-ID) methods have garnered widespread attention due to their potential in real-world traffic monitoring. However, existing unsupervised domain adaptation techniques often rely on pseudo-labels generated from the source domain, which struggle to effectively address the diversity and dynamic nature of real-world scenarios. Given the limited variety of common vehicle types, enhancing the model's generalization capability across these types is crucial. To this end, an innovative approach called meta-type generalization (MTG) is proposed. By dividing the training data into meta-train and meta-test sets based on vehicle type information, a novel gradient interaction computation strategy is designed to enhance the model's ability to learn type-invariant features. Integrated into the ResNet50 backbone, the MTG model achieves improvements of 4.50% and 12.04% on the Veri-776 and VRAI datasets, respectively, compared with traditional unsupervised algorithms, and surpasses current state-of-the-art methods. This achievement holds promise for application in intelligent traffic systems, enabling more efficient urban traffic solutions.
Deep Learning (DL) is such a powerful tool that we have seen tremendous success in areas such as Computer Vision, Speech Recognition, and Natural Language Processing. Since Automated Modulation Classification (AMC) is an important part of Cognitive Radio Networks, we try to explore its potential in solving the signal modulation recognition problem. It cannot be overlooked that DL models are complex, thus making them prone to over-fitting. A DL model requires much training data to combat over-fitting, but adding high-quality labels to training data manually is not always cheap and accessible, especially in real-time systems, which may encounter unprecedented data outside the dataset. Semi-supervised learning is a way to exploit unlabeled data effectively to reduce over-fitting in DL. In this paper, we extend Generative Adversarial Networks (GANs) to semi-supervised learning and show that they can be used to create a more data-efficient classifier.
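A minimal sketch of a semi-supervised GAN classifier follows, using the standard K+1-class formulation: the discriminator classifies labeled samples into K modulation classes, pushes unlabeled real samples away from the extra "fake" class, and the generator tries to pass its samples off as real. The MLP architecture, the number of classes, and the 256-dimensional feature inputs are placeholders, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 11                                             # modulation classes (assumed)
D = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, K + 1))
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 256))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

x_lab, y_lab = torch.randn(32, 256), torch.randint(0, K, (32,))   # few labeled signals
x_unlab = torch.randn(128, 256)                    # plenty of unlabeled signals (toy data)
FAKE = K                                           # index of the extra "fake" class

# --- discriminator / classifier step ---
z = torch.randn(128, 64)
x_fake = G(z).detach()
logits_lab, logits_unlab, logits_fake = D(x_lab), D(x_unlab), D(x_fake)
loss_sup = F.cross_entropy(logits_lab, y_lab)                   # labeled real data
p_real_unlab = 1 - logits_unlab.softmax(-1)[:, FAKE]            # P(real) for unlabeled data
loss_unsup = -torch.log(p_real_unlab + 1e-8).mean() \
             + F.cross_entropy(logits_fake, torch.full((128,), FAKE, dtype=torch.long))
opt_d.zero_grad(); (loss_sup + loss_unsup).backward(); opt_d.step()

# --- generator step: fool the discriminator into calling fakes "real" ---
logits_fake = D(G(z))
loss_g = -torch.log(1 - logits_fake.softmax(-1)[:, FAKE] + 1e-8).mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```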
The performance of traditional vibration-based fault diagnosis methods greatly depends on hand-crafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided an alternative promising solution to feature extraction in traditional fault diagnosis due to its superior learning ability from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieves better performance with higher accuracy and stability compared to the traditional approaches.
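Below is a hedged sketch of the two MSRL ingredients named above: the coarse-graining procedure that produces multiple scale signals, and the sparse-filtering objective (soft-absolute features, L2-normalized per feature and then per sample, summed in L1). It is written in PyTorch so the objective can be minimized by automatic differentiation; the segment length, number of filters, and scales are arbitrary toy choices.

```python
import torch

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` (multiscale procedure)."""
    n = signal.numel() // scale
    return signal[: n * scale].reshape(n, scale).mean(dim=1)

def sparse_filtering_loss(W, X, eps=1e-8):
    """Soft-absolute features, L2-normalized per feature then per sample, L1 sum."""
    F = torch.sqrt((X @ W) ** 2 + eps)             # (n_samples, n_features)
    F = F / (F.norm(dim=0, keepdim=True) + eps)    # normalize each feature (column)
    F = F / (F.norm(dim=1, keepdim=True) + eps)    # normalize each sample (row)
    return F.abs().sum()

raw = torch.randn(4096)                            # a raw vibration signal (toy stand-in)
for scale in (1, 2, 4):                            # learn features at several scales
    seg = coarse_grain(raw, scale).reshape(-1, 32)          # segments of length 32
    W = torch.randn(32, 8, requires_grad=True)              # 8 learned filters
    opt = torch.optim.Adam([W], lr=1e-2)
    for _ in range(200):
        loss = sparse_filtering_loss(W, seg)
        opt.zero_grad(); loss.backward(); opt.step()
    feats = torch.sqrt((seg @ W) ** 2 + 1e-8)      # scale-level representation
    # in MSRL these per-scale features would be concatenated and fed to a classifier
```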
The majority of big data analytics applied to transportation datasets suffer from being too domain-specific, that is, they draw conclusions for a dataset based on analytics on the same dataset. This makes models trained on one domain (e.g. taxi data) apply badly to a different domain (e.g. Uber data). To achieve accurate analyses on a new domain, substantial amounts of data must be available, which limits practical applications. To remedy this, we propose to use semi-supervised and active learning of big data to accomplish the domain adaptation task: selectively choosing a small number of datapoints from a new domain while achieving performance comparable to using all the datapoints. We choose the New York City (NYC) transportation data of taxi and Uber as our dataset, simulating different domains with 90% as the source data domain for training and the remaining 10% as the target data domain for evaluation. We propose semi-supervised and active learning strategies and apply them to the source domain for selecting datapoints. Experimental results show that our adaptation achieves performance comparable to using all datapoints while using only a fraction of them, substantially reducing the amount of data required. Our approach has two major advantages: it can make accurate analytics and predictions when big datasets are not available, and even if big datasets are available, our approach chooses the most informative datapoints out of the dataset, making the process much more efficient without having to process huge amounts of data.
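As a concrete illustration of the selective-labeling idea, the sketch below uses least-confidence active learning with scikit-learn: a classifier trained on the source domain scores the target-domain points, and only the most uncertain ones are acquired and added to the training set. The logistic-regression model, feature dimensions, and random stand-in data are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(5000, 10)), rng.integers(0, 2, 5000)   # e.g. taxi trips (toy)
X_tgt = rng.normal(0.5, 1.0, size=(1000, 10))                          # e.g. Uber trips (toy)

clf = LogisticRegression(max_iter=1000).fit(X_src, y_src)

budget = 50
proba = clf.predict_proba(X_tgt)
uncertainty = 1.0 - proba.max(axis=1)            # least-confident sampling
query_idx = np.argsort(-uncertainty)[:budget]    # the most informative target points
# ...label only these `budget` points, add them to the training set, and refit:
y_query = rng.integers(0, 2, budget)             # stand-in for the acquired labels
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_src, X_tgt[query_idx]]), np.r_[y_src, y_query])
```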
It is crucial to maintain the safe and stable operation of distribution transformers, which constitute a key part of power systems. In the event of transformer failure, the fault type must be diagnosed in a timely and accurate manner. To this end, a transformer fault diagnosis method based on infrared image processing and semi-supervised learning is proposed herein. First, we perform feature extraction on the collected infrared-image data to extract temperature, texture, and shape features as the model reference vectors. Then, a generative adversarial network (GAN) is constructed to generate synthetic samples for the minority subset of labelled samples. The proposed method can learn information from unlabeled sample data, unlike conventional supervised learning methods. Subsequently, a semi-supervised graph model is trained on the entire dataset, i.e., both labeled and unlabeled data. Finally, we test the proposed model on an actual dataset collected from a Chinese electricity provider. The experimental results show that the use of feature extraction, sample generation, and the semi-supervised learning model can improve the accuracy of transformer fault classification. This verifies the effectiveness of the proposed method.
Direct online measurement of product quality in industrial processes is difficult to realize, which leads to a large number of unlabeled samples in modeling data. Therefore, semi-supervised learning (SSL) methods need to be employed to establish the soft sensor model of product quality. Considering the slow time-varying characteristic of industrial processes, the model parameters should be updated smoothly. Following this characteristic, this paper proposes an online adaptive semi-supervised learning algorithm based on the random vector functional link network (RVFLN), denoted OAS-RVFLN. By introducing an L2-fusion term that can be seen as a weight deviation constraint, the proposed algorithm unifies offline and online learning and achieves smoothness of model parameter updates. Empirical evaluations on both benchmark testing functions and datasets reveal that the proposed OAS-RVFLN can outperform conventional methods in learning speed and accuracy. Finally, OAS-RVFLN is applied to the coal dense medium separation process in the coal industry to estimate the ash content of the coal product, which further verifies its effectiveness and potential for industrial application.
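A NumPy sketch of the flavor of update described above follows: a random vector functional link network whose output weights are refit on each arriving batch under an L2 penalty toward the previous weights, so the parameters drift smoothly. The closed form W = (H^T H + lambda*I)^(-1) (H^T y + lambda*W_prev) is my reading of a weight-deviation-constrained ridge update and may differ from the exact OAS-RVFLN formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_hidden(X, Win, b):
    """Random enhancement nodes plus the direct input links of an RVFL network."""
    H = np.tanh(X @ Win + b)
    return np.hstack([X, H])

def update_weights(H, y, W_prev, lam=1.0):
    """Ridge-style update that penalizes deviation from the previous weights:
       W = argmin ||H W - y||^2 + lam ||W - W_prev||^2  (smooth parameter drift)."""
    A = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y + lam * W_prev)

n_in, n_hidden = 5, 30
Win, b = rng.normal(size=(n_in, n_hidden)), rng.normal(size=n_hidden)
W = np.zeros(n_in + n_hidden)

for t in range(20):                                 # arriving data batches (toy stream)
    X = rng.normal(size=(50, n_in))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.normal(size=50)
    H = rvfl_hidden(X, Win, b)
    W = update_weights(H, y, W, lam=1.0)            # smooth online update
    print(t, np.mean((H @ W - y) ** 2))             # batch training error
```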
In this paper we present a CNN-based approach for real-time 3D hand pose estimation from a depth sequence. Prior discriminative approaches have achieved remarkable success but face two main challenges: firstly, the methods are fully supervised and hence require large numbers of annotated training data to extract the dynamic information from a hand representation; secondly, they rely on hand detectors that are either based on strong assumptions or too weak, and often fail in situations such as complex environments and multiple hands. In contrast to these methods, this paper presents an approach that can be considered semi-supervised, performing predictive coding of image sequences of hand poses in order to capture latent features underlying a given image without supervision. The hand is modelled using a novel latent tree dependency model (LDTM), which transforms internal joint locations into an explicit representation. The modeled hand topology is then integrated with the pose estimator using a data-dependent method to jointly learn latent variables of the posterior pose appearance and the pose configuration, respectively. Finally, an unsupervised error term, which is a part of the recurrent architecture, ensures smooth estimations of the final pose. Experiments on three challenging public datasets, ICVL, MSRA, and NYU, demonstrate the significant performance of the proposed method, which is comparable to or better than state-of-the-art approaches.
Intelligent seismic facies identification based on deep learning can alleviate the time-consuming and labor-intensive problem of manual interpretation, and has been widely applied. Supervised learning can realize facies identification with high efficiency and accuracy; however, it depends on the usage of a large amount of well-labeled data. To solve this issue, we propose herein an incremental semi-supervised method for intelligent facies identification. Our method considers the continuity of the lateral variation of strata and uses cosine similarity to quantify the similarity of the seismic data in the feature domain. The maximum-difference sample in the neighborhood of the currently used training data is then found to reasonably expand the training sets. This process continuously increases the amount of training data and learns its distribution. We integrate old knowledge while absorbing new knowledge to realize incremental semi-supervised learning and achieve the purpose of evolving the network models. In this work, accuracy and the confusion matrix are employed to jointly control the predicted results of the model from both overall and partial aspects. The obtained values are then applied to a three-dimensional (3D) real dataset and used to quantitatively evaluate the results. Using unlabeled data, our proposed method acquires more accurate and stable testing results compared to conventional supervised learning algorithms that only use well-labeled data. A considerable improvement for small-sample categories is also observed. Using less than 1% of the training data, the proposed method can achieve an average accuracy of over 95% on the 3D dataset, whereas the conventional supervised learning algorithm achieved only approximately 85%.
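The sketch below illustrates the training-set expansion step: within the neighborhood of the current training traces, cosine similarity in the feature domain identifies the maximum-difference sample to add next. The feature dimension and the averaging over the current training set are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def pick_max_difference(train_feats, neighbour_feats):
    """Return the index of the neighbourhood sample least similar (on average)
    to the current training samples: the next trace to add to the training set."""
    mean_sim = [np.mean([cosine_similarity(n, t) for t in train_feats])
                for n in neighbour_feats]
    return int(np.argmin(mean_sim))

rng = np.random.default_rng(1)
train_feats = rng.normal(size=(10, 64))        # features of current training traces (toy)
neighbour_feats = rng.normal(size=(200, 64))   # features of laterally adjacent traces (toy)
idx = pick_max_difference(train_feats, neighbour_feats)
print("expand training set with neighbourhood sample", idx)
```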
Anomaly detection in high-dimensional data is a critical research issue with serious implications for real-world problems. Many issues in this field remain unsolved, so several modern anomaly detection methods struggle to maintain adequate accuracy due to the highly descriptive nature of big data. Such a phenomenon is referred to as the "curse of dimensionality", which affects traditional techniques in terms of both accuracy and performance. Thus, this research proposes a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers to reduce the difference between the input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiments, two issues were investigated and solved to enhance the results. The first is the dataset class imbalance, which was solved using the SMOTE technique. The second issue is poor performance, which can be addressed with an optimization algorithm. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam and Adamax; the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that our proposed model can detect anomalies by efficiently reducing the high dimensionality of the dataset, with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) rate of 0.9649, and a minimal loss function during the hybrid model training.
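A hedged PyTorch sketch of the core of such a pipeline follows: a small symmetric deep autoencoder trained with the Adamax optimizer, with anomalies flagged where the per-sample reconstruction error exceeds a percentile threshold. The layer sizes, input dimension, threshold, and random stand-in data are assumptions; the SMOTE class-balancing step from the paper is omitted here.

```python
import torch
import torch.nn as nn

class DeepAE(nn.Module):
    """A small multi-layer autoencoder; the bottleneck forces dimensionality reduction."""
    def __init__(self, n_in=87, hidden=(64, 32, 8)):
        super().__init__()
        h1, h2, h3 = hidden
        self.encoder = nn.Sequential(nn.Linear(n_in, h1), nn.ReLU(),
                                     nn.Linear(h1, h2), nn.ReLU(),
                                     nn.Linear(h2, h3), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(h3, h2), nn.ReLU(),
                                     nn.Linear(h2, h1), nn.ReLU(),
                                     nn.Linear(h1, n_in))
    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.randn(2000, 87)                        # sensor snapshots (toy stand-in data)
model = DeepAE()
opt = torch.optim.Adamax(model.parameters(), lr=2e-3)   # Adamax, as named in the abstract
for _ in range(50):
    recon = model(X)
    loss = nn.functional.mse_loss(recon, X)      # minimise the input-output difference
    opt.zero_grad(); loss.backward(); opt.step()

err = ((model(X) - X) ** 2).mean(dim=1)          # per-sample reconstruction error
threshold = err.quantile(0.99)                   # flag the top 1% as anomalies (assumed cutoff)
anomalies = err > threshold
```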
基金supported by the following funding bodies:the National Key Research and Development Program of China(Grant No.2020YFA0608000)National Science Foundation of China(Grant Nos.42075142,42375148,42125503+2 种基金42130608)FY-APP-2022.0609,Sichuan Province Key Tech nology Research and Development project(Grant Nos.2024ZHCG0168,2024ZHCG0176,2023YFG0305,2023YFG-0124,and 23ZDYF0091)the CUIT Science and Technology Innovation Capacity Enhancement Program project(Grant No.KYQN202305)。
文摘Climate downscaling is used to transform large-scale meteorological data into small-scale data with enhanced detail,which finds wide applications in climate modeling,numerical weather forecasting,and renewable energy.Although deeplearning-based downscaling methods effectively capture the complex nonlinear mapping between meteorological data of varying scales,the supervised deep-learning-based downscaling methods suffer from insufficient high-resolution data in practice,and unsupervised methods struggle with accurately inferring small-scale specifics from limited large-scale inputs due to small-scale uncertainty.This article presents DualDS,a dual-learning framework utilizing a Generative Adversarial Network–based neural network and subgrid-scale auxiliary information for climate downscaling.Such a learning method is unified in a two-stream framework through up-and downsamplers,where the downsampler is used to simulate the information loss process during the upscaling,and the upsampler is used to reconstruct lost details and correct errors incurred during the upscaling.This dual learning strategy can eliminate the dependence on high-resolution ground truth data in the training process and refine the downscaling results by constraining the mapping process.Experimental findings demonstrate that DualDS is comparable to several state-of-the-art deep learning downscaling approaches,both qualitatively and quantitatively.Specifically,for a single surface-temperature data downscaling task,our method is comparable with other unsupervised algorithms with the same dataset,and we can achieve a 0.469 dB higher peak signal-to-noise ratio,0.017 higher structural similarity,0.08 lower RMSE,and the best correlation coefficient.In summary,this paper presents a novel approach to addressing small-scale uncertainty issues in unsupervised downscaling processes.
基金supported by the National Natural Science Foundation of China(No.52207229)the Key Research and Development Program of Ningxia Hui Autonomous Region of China(No.2024BEE02003)+1 种基金the financial support from the AEGiS Research Grant 2024,University of Wollongong(No.R6254)the financial support from the China Scholarship Council(No.202207550010).
文摘Accurate prediction of the remaining useful life(RUL)is crucial for the design and management of lithium-ion batteries.Although various machine learning models offer promising predictions,one critical but often overlooked challenge is their demand for considerable run-to-failure data for training.Collection of such training data leads to prohibitive testing efforts as the run-to-failure tests can last for years.Here,we propose a semi-supervised representation learning method to enhance prediction accuracy by learning from data without RUL labels.Our approach builds on a sophisticated deep neural network that comprises an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data regardless of the existence of RUL labels.The approach is validated using three datasets collected from 34 batteries operating under various conditions,encompassing over 19,900 charge and discharge cycles.Our method achieves a root mean squared error(RMSE)within 25 cycles,even when only 1/50 of the training dataset is labelled,representing a reduction of 48%compared to the conventional approach.We also demonstrate the method's robustness with varying numbers of labelled data and different weights assigned to the three decoder heads.The projection of extracted features in low space reveals that our method effectively learns degradation features from unlabelled data.Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
基金supported by the National Natural Science Foundation of China under Grant Nos.62461037,62076117 and 62166026the Jiangxi Provincial Natural Science Foundation under Grant Nos.20224BAB212011,20232BAB202051,20232BAB212008 and 20242BAB25078the Jiangxi Provincial Key Laboratory of Virtual Reality under Grant No.2024SSY03151.
文摘The unsupervised vehicle re-identification task aims at identifying specific vehicles in surveillance videos without utilizing annotation information.Due to the higher similarity in appearance between vehicles compared to pedestrians,pseudo-labels generated through clustering are ineffective in mitigating the impact of noise,and the feature distance between inter-class and intra-class has not been adequately improved.To address the aforementioned issues,we design a dual contrastive learning method based on knowledge distillation.During each iteration,we utilize a teacher model to randomly partition the entire dataset into two sub-domains based on clustering pseudo-label categories.By conducting contrastive learning between the two student models,we extract more discernible vehicle identity cues to improve the problem of imbalanced data distribution.Subsequently,we propose a context-aware pseudo label refinement strategy that leverages contextual features by progressively associating granularity information from different bottleneck blocks.To produce more trustworthy pseudo-labels and lessen noise interference during the clustering process,the context-aware scores are obtained by calculating the similarity between global features and contextual ones,which are subsequently added to the pseudo-label encoding process.The proposed method has achieved excellent performance in overcoming label noise and optimizing data distribution through extensive experimental results on publicly available datasets.
基金supported in part by the National Key Research and Development Program of China(2020YFB1313002)the National Natural Science Foundation of China(62276023,U22B2055,62222302,U2013202)+1 种基金the Fundamental Research Funds for the Central Universities(FRF-TP-22-003C1)the Postgraduate Education Reform Project of Henan Province(2021SJGLX260Y)。
文摘Underwater image enhancement aims to restore a clean appearance and thus improves the quality of underwater degraded images.Current methods feed the whole image directly into the model for enhancement.However,they ignored that the R,G and B channels of underwater degraded images present varied degrees of degradation,due to the selective absorption for the light.To address this issue,we propose an unsupervised multi-expert learning model by considering the enhancement of each color channel.Specifically,an unsupervised architecture based on generative adversarial network is employed to alleviate the need for paired underwater images.Based on this,we design a generator,including a multi-expert encoder,a feature fusion module and a feature fusion-guided decoder,to generate the clear underwater image.Accordingly,a multi-expert discriminator is proposed to verify the authenticity of the R,G and B channels,respectively.In addition,content perceptual loss and edge loss are introduced into the loss function to further improve the content and details of the enhanced images.Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in vision quality.Various metrics(PSNR,SSIM,UIQM and UCIQE) evaluated on our enhanced images have been improved obviously.
基金supported by the National Science Foundation of China under Grant No.62101467.
文摘Hybrid precoding is considered as a promising low-cost technique for millimeter wave(mm-wave)massive Multi-Input Multi-Output(MIMO)systems.In this work,referring to the time-varying propagation circumstances,with semi-supervised Incremental Learning(IL),we propose an online hybrid beamforming scheme.Firstly,given the constraint of constant modulus on analog beamformer and combiner,we propose a new broadnetwork-based structure for the design model of hybrid beamforming.Compared with the existing network structure,the proposed network structure can achieve better transmission performance and lower complexity.Moreover,to enhance the efficiency of IL further,by combining the semi-supervised graph with IL,we propose a hybrid beamforming scheme based on chunk-by-chunk semi-supervised learning,where only few transmissions are required to calculate the label and all other unlabelled transmissions would also be put into a training data chunk.Unlike the existing single-by-single approach where transmissions during the model update are not taken into the consideration of model update,all transmissions,even the ones during the model update,would make contributions to model update in the proposed method.During the model update,the amount of unlabelled transmissions is very large and they also carry some information,the prediction performance can be enhanced to some extent by these unlabelled channel data.Simulation results demonstrate the spectral efficiency of the proposed method outperforms that of the existing single-by-single approach.Besides,we prove the general complexity of the proposed method is lower than that of the existing approach and give the condition under which its absolute complexity outperforms that of the existing approach.
基金supported by the DOD National Defense Science and Engineering Graduate(NDSEG)Research Fellowshipsupported by the NGA under Contract No.HM04762110003.
文摘Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier.A challenge is to identify which points to label to best improve performance while limiting the number of new labels."Model Change"active learning quantifies the resulting change incurred in the classifier by introducing the additional label(s).We pair this idea with graph-based semi-supervised learning(SSL)methods,that use the spectrum of the graph Laplacian matrix,which can be truncated to avoid prohibitively large computational and storage costs.We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution.We show a variety of multiclass examples that illustrate improved performance over prior state-of-art.
基金This research is partially supported by the National Natural Science Foundation of China under Grant No.62376043Science and Technology Program of Sichuan Province under Grant Nos.2020JDRC0067,2023JDRC0087,and 24NSFTD0025.
文摘With the rapid development of Internet of Things(IoT)technology,IoT systems have been widely applied in health-care,transportation,home,and other fields.However,with the continuous expansion of the scale and increasing complexity of IoT systems,the stability and security issues of IoT systems have become increasingly prominent.Thus,it is crucial to detect anomalies in the collected IoT time series from various sensors.Recently,deep learning models have been leveraged for IoT anomaly detection.However,owing to the challenges associated with data labeling,most IoT anomaly detection methods resort to unsupervised learning techniques.Nevertheless,the absence of accurate abnormal information in unsupervised learning methods limits their performance.To address these problems,we propose AS-GCN-MTM,an adaptive structural Graph Convolutional Networks(GCN)-based framework using a mean-teacher mechanism(AS-GCN-MTM)for anomaly identification.It performs better than unsupervised methods using only a small amount of labeled data.Mean Teachers is an effective semi-supervised learning method that utilizes unlabeled data for training to improve the generalization ability and performance of the model.However,the dependencies between data are often unknown in time series data.To solve this problem,we designed a graph structure adaptive learning layer based on neural networks,which can automatically learn the graph structure from time series data.It not only better captures the relationships between nodes but also enhances the model’s performance by augmenting key data.Experiments have demonstrated that our method improves the baseline model with the highest F1 value by 10.4%,36.1%,and 5.6%,respectively,on three real datasets with a 10%data labeling rate.
基金supported by Research Project Support Program for Excellence Institute(2022,ESL)in Incheon National University.
文摘Quantifying the number of individuals in images or videos to estimate crowd density is a challenging yet crucial task with significant implications for fields such as urban planning and public safety.Crowd counting has attracted considerable attention in the field of computer vision,leading to the development of numerous advanced models and methodologies.These approaches vary in terms of supervision techniques,network architectures,and model complexity.Currently,most crowd counting methods rely on fully supervised learning,which has proven to be effective.However,this approach presents challenges in real-world scenarios,where labeled data and ground-truth annotations are often scarce.As a result,there is an increasing need to explore unsupervised and semi-supervised methods to effectively address crowd counting tasks in practical applications.This paper offers a comprehensive review of crowd counting models,with a particular focus on semi-supervised and unsupervised approaches based on their supervision paradigms.We summarize and critically analyze the key methods in these two categories,highlighting their strengths and limitations.Furthermore,we provide a comparative analysis of prominent crowd counting methods using widely adopted benchmark datasets.We believe that this survey will offer valuable insights and guide future advancements in crowd counting technology.
文摘The aim of this paper is to broaden the application of Stochastic Configuration Network (SCN) in the semi-supervised domain by utilizing common unlabeled data in daily life. It can enhance the classification accuracy of decentralized SCN algorithms while effectively protecting user privacy. To this end, we propose a decentralized semi-supervised learning algorithm for SCN, called DMT-SCN, which introduces teacher and student models by combining the idea of consistency regularization to improve the response speed of model iterations. In order to reduce the possible negative impact of unsupervised data on the model, we purposely change the way of adding noise to the unlabeled data. Simulation results show that the algorithm can effectively utilize unlabeled data to improve the classification accuracy of SCN training and is robust under different ground simulation environments.
基金supported by the Natural Science Foundation of China(No.41804112,author:Chengyun Song).
文摘Existing semi-supervisedmedical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch.However,current copy-paste methods have three limitations:(1)training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information;(2)low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data;(3)the segmentation performance in low-contrast and local regions is less than optimal.We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy(SADT),which enhances feature diversity and learns high-quality features to overcome these problems.To be more precise,SADT trains the Student Network by using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data,which prevents the loss of rare labeled data.We introduce a bi-directional copy-pastemask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision.For the mixed images,Deep-Shallow Spatial Contrastive Learning(DSSCL)is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capabilities in low-contrast and local areas.In this procedure,the features retrieved by the Student Network are subjected to a random feature perturbation technique.On two openly available datasets,extensive trials show that our proposed SADT performs much better than the state-ofthe-art semi-supervised medical segmentation techniques.Using only 10%of the labeled data for training,SADT was able to acquire a Dice score of 90.10%on the ACDC(Automatic Cardiac Diagnosis Challenge)dataset.
基金sponsored by the National Natural Science Foundation of China Grant No.62271302the Shanghai Municipal Natural Science Foundation Grant 20ZR1423500.
文摘Large amounts of labeled data are usually needed for training deep neural networks in medical image studies,particularly in medical image classification.However,in the field of semi-supervised medical image analysis,labeled data is very scarce due to patient privacy concerns.For researchers,obtaining high-quality labeled images is exceedingly challenging because it involves manual annotation and clinical understanding.In addition,skin datasets are highly suitable for medical image classification studies due to the inter-class relationships and the inter-class similarities of skin lesions.In this paper,we propose a model called Coalition Sample Relation Consistency(CSRC),a consistency-based method that leverages Canonical Correlation Analysis(CCA)to capture the intrinsic relationships between samples.Considering that traditional consistency-based models only focus on the consistency of prediction,we additionally explore the similarity between features by using CCA.We enforce feature relation consistency based on traditional models,encouraging the model to learn more meaningful information from unlabeled data.Finally,considering that cross-entropy loss is not as suitable as the supervised loss when studying with imbalanced datasets(i.e.,ISIC 2017 and ISIC 2018),we improve the supervised loss to achieve better classification accuracy.Our study shows that this model performs better than many semi-supervised methods.
基金Supported by the National Natural Science Foundation of China(No.61976098)the Natural Science Foundation for Outstanding Young Scholars of Fujian Province(No.2022J06023).
文摘Unsupervised vehicle re-identification(Re-ID)methods have garnered widespread attention due to their potential in real-world traffic monitoring.However,existing unsupervised domain adaptation techniques often rely on pseudo-labels generated from the source domain,which struggle to effectively address the diversity and dynamic nature of real-world scenarios.Given the limited variety of common vehicle types,enhancing the model’s generalization capability across these types is crucial.To this end,an innovative approach called meta-type generalization(MTG)is proposed.By dividing the training data into meta-train and meta-test sets based on vehicle type information,a novel gradient interaction computation strategy is designed to enhance the model’s ability to learn typeinvariant features.Integrated into the ResNet50 backbone,the MTG model achieves improvements of 4.50%and 12.04%on the Veri-776 and VRAI datasets,respectively,compared with traditional unsupervised algorithms,and surpasses current state-of-the-art methods.This achievement holds promise for application in intelligent traffic systems,enabling more efficient urban traffic solutions.
基金This work is supported by the National Natural Science Foundation of China(Nos.61771154,61603239,61772454,6171101570).
文摘Deep Learning(DL)is such a powerful tool that we have seen tremendous success in areas such as Computer Vision,Speech Recognition,and Natural Language Processing.Since Automated Modulation Classification(AMC)is an important part in Cognitive Radio Networks,we try to explore its potential in solving signal modulation recognition problem.It cannot be overlooked that DL model is a complex model,thus making them prone to over-fitting.DL model requires many training data to combat with over-fitting,but adding high quality labels to training data manually is not always cheap and accessible,especially in real-time system,which may counter unprecedented data in dataset.Semi-supervised Learning is a way to exploit unlabeled data effectively to reduce over-fitting in DL.In this paper,we extend Generative Adversarial Networks(GANs)to the semi-supervised learning will show it is a method can be used to create a more dataefficient classifier.
基金Supported by Hebei Provincial Natural Science Foundation of China(Grant No.F2016203421)
文摘The performance of traditional vibration based fault diagnosis methods greatly depends on those hand- crafted features extracted using signal processing algo- rithms, which require significant amounts of domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised represen- tation learning provides an alternative promising solution to feature extraction in traditional fault diagnosis due to its superior learning ability from unlabeled data. Given that vibration signals usually contain multiple temporal struc- tures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim to capture rich and complementary fault pattern information at dif- ferent scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, respectively, and then the learned features at each scale to be concatenated one by one to obtain multi- scale representations. Finally, the multiscale representa- tions are fed into a supervised classifier to achieve diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantages of the availability of unlabeled data to learn discriminative features and achieved better performance with higher accuracy and stability compared to the traditional approaches.
文摘The majority of big data analytics applied to transportation datasets suffer from being too domain-specific,that is,they draw conclusions for a dataset based on analytics on the same dataset.This makes models trained from one domain(e.g.taxi data)applies badly to a different domain(e.g.Uber data).To achieve accurate analyses on a new domain,substantial amounts of data must be available,which limits practical applications.To remedy this,we propose to use semi-supervised and active learning of big data to accomplish the domain adaptation task:Selectively choosing a small amount of datapoints from a new domain while achieving comparable performances to using all the datapoints.We choose the New York City(NYC)transportation data of taxi and Uber as our dataset,simulating different domains with 90%as the source data domain for training and the remaining 10%as the target data domain for evaluation.We propose semi-supervised and active learning strategies and apply it to the source domain for selecting datapoints.Experimental results show that our adaptation achieves a comparable performance of using all datapoints while using only a fraction of them,substantially reducing the amount of data required.Our approach has two major advantages:It can make accurate analytics and predictions when big datasets are not available,and even if big datasets are available,our approach chooses the most informative datapoints out of the dataset,making the process much more efficient without having to process huge amounts of data.
基金supported by China Southern Power Grid Co.Ltd.science and technology project(Research on the theory,technology and application of stereoscopic disaster defense for power distribution network in large city,GZHKJXM20180060)National Natural Science Foundation of China(No.51477100).
Abstract: It is crucial to maintain the safe and stable operation of distribution transformers, which constitute a key part of power systems. In the event of transformer failure, the fault type must be diagnosed in a timely and accurate manner. To this end, a transformer fault diagnosis method based on infrared image processing and semi-supervised learning is proposed herein. First, we perform feature extraction on the collected infrared-image data to extract temperature, texture, and shape features as the model reference vectors. Then, a generative adversarial network (GAN) is constructed to generate synthetic samples for the minority subset of labeled samples. The proposed method can learn information from unlabeled sample data, unlike conventional supervised learning methods. Subsequently, a semi-supervised graph model is trained on the entire dataset, i.e., both labeled and unlabeled data. Finally, we test the proposed model on an actual dataset collected from a Chinese electricity provider. The experimental results show that the use of feature extraction, sample generation, and a semi-supervised learning model can improve the accuracy of transformer fault classification. This verifies the effectiveness of the proposed method.
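The semi-supervised graph step can be illustrated with scikit-learn's LabelSpreading, which propagates the few fault labels over a similarity graph built from the extracted features. Feature extraction and GAN-based sample generation are assumed to have happened upstream; the array shapes, kernel parameters, and number of fault classes are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# X: extracted infrared features (temperature, texture, shape) for all samples.
# y: fault labels, with -1 marking the unlabeled samples.
X = np.random.rand(500, 24)             # stand-in feature matrix
y = np.full(500, -1)
y[:40] = np.random.randint(0, 4, 40)    # small labeled minority, 4 fault types

graph_model = LabelSpreading(kernel="rbf", gamma=20, alpha=0.2)
graph_model.fit(X, y)                   # learns from labeled AND unlabeled data
pred = graph_model.transduction_        # labels inferred for every sample
```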
基金Projects(61603393,61973306)supported in part by the National Natural Science Foundation of ChinaProject(BK20160275)supported by the Natural Science Foundation of Jiangsu Province,China+1 种基金Projects(2015M581885,2018T110571)supported by the Postdoctoral Science Foundation of ChinaProject(PAL-N201706)supported by the Open Project Foundation of State Key Laboratory of Synthetical Automation for Process Industries of Northeastern University,China
Abstract: Direct online measurement of product quality in industrial processes is difficult to realize, which leads to a large number of unlabeled samples in modeling data. Therefore, semi-supervised learning (SSL) methods are needed to establish soft sensor models of product quality. Considering the slow time-varying characteristic of industrial processes, the model parameters should be updated smoothly. Based on this characteristic, this paper proposes an online adaptive semi-supervised learning algorithm based on the random vector functional link network (RVFLN), denoted OAS-RVFLN. By introducing an L2-fusion term that can be seen as a weight-deviation constraint, the proposed algorithm unifies offline and online learning and achieves smoothness in model parameter updates. Empirical evaluations on both benchmark testing functions and datasets reveal that the proposed OAS-RVFLN outperforms conventional methods in learning speed and accuracy. Finally, OAS-RVFLN is applied to the coal dense medium separation process in the coal industry to estimate the ash content of the coal product, which further verifies its effectiveness and potential for industrial application.
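A compact numpy sketch of the core idea: an RVFLN whose output weights are re-solved online with an extra L2 penalty on the deviation from the previous weights, so successive updates stay smooth. The network size, regularisation value, and batch loop are illustrative; the actual OAS-RVFLN update is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfln_hidden(X, W, b):
    """RVFLN hidden mapping: random enhancement nodes plus a direct input link."""
    H = np.tanh(X @ W + b)
    return np.hstack([X, H])

def update_output_weights(H, y, beta_old=None, lam=1.0):
    """Ridge-style solve with an L2-fusion (weight-deviation) term:
       minimize ||H @ beta - y||^2 + lam * ||beta - beta_old||^2."""
    d = H.shape[1]
    if beta_old is None:
        beta_old = np.zeros((d, 1))
    A = H.T @ H + lam * np.eye(d)
    b = H.T @ y + lam * beta_old
    return np.linalg.solve(A, b)

# Illustrative online loop over incoming labeled batches.
n_in, n_hidden = 8, 30
W = rng.standard_normal((n_in, n_hidden))
bias = rng.standard_normal(n_hidden)
beta = None
for _ in range(5):
    X_batch = rng.standard_normal((64, n_in))   # stand-in process measurements
    y_batch = rng.standard_normal((64, 1))      # stand-in quality labels
    H = rvfln_hidden(X_batch, W, bias)
    beta = update_output_weights(H, y_batch, beta)  # smooth weight update
```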
基金supported in part by the Fundamental Research Funds for the Central Universities(WK2350000002)。
Abstract: In this paper, we present a CNN-based approach for real-time 3D hand pose estimation from depth sequences. Prior discriminative approaches have achieved remarkable success but face two main challenges. Firstly, the methods are fully supervised and hence require large amounts of annotated training data to extract dynamic information from a hand representation. Secondly, they rely on hand detectors that are either built on strong assumptions or too weak, and therefore often fail in situations such as complex environments and multiple hands. In contrast to these methods, this paper presents an approach that can be considered semi-supervised: it performs predictive coding of image sequences of hand poses in order to capture the latent features underlying a given image without supervision. The hand is modelled using a novel latent tree dependency model (LDTM), which transforms internal joint locations into an explicit representation. The modeled hand topology is then integrated with the pose estimator using a data-dependent method to jointly learn latent variables of the posterior pose appearance and the pose configuration, respectively. Finally, an unsupervised error term, which is part of the recurrent architecture, ensures smooth estimation of the final pose. Experiments on three challenging public datasets, ICVL, MSRA, and NYU, demonstrate the significant performance of the proposed method, which is comparable to or better than state-of-the-art approaches.
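The unsupervised error term mentioned above can be illustrated, under our own simplifying assumptions, as a temporal smoothness penalty on consecutive pose estimates; this generic formulation is not the paper's exact recurrent loss, and the tensor shapes are hypothetical.

```python
import torch

def smoothness_loss(pose_seq):
    """Penalise abrupt changes between consecutive pose estimates.
    pose_seq: tensor of shape (T, J, 3) - T frames, J joints, 3-D coordinates."""
    diffs = pose_seq[1:] - pose_seq[:-1]
    return (diffs ** 2).mean()

poses = torch.randn(16, 21, 3, requires_grad=True)  # stand-in predicted sequence
loss = smoothness_loss(poses)                        # added to the supervised loss
loss.backward()
```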
基金financially supported by the National Key R&D Program of China(No.2018YFA0702504)the National Natural Science Foundation of China(No.42174152 and No.41974140)+1 种基金the Science Foundation of China University of Petroleum,Beijing(No.2462020YXZZ008 and No.2462020QZDX003)the Strategic Cooperation Technology Projects of CNPC and CUPB(No.ZLZX2020-03).
Abstract: Intelligent seismic facies identification based on deep learning can alleviate the time-consuming and labor-intensive problem of manual interpretation and has been widely applied. Supervised learning can realize facies identification with high efficiency and accuracy; however, it depends on a large amount of well-labeled data. To address this issue, we propose an incremental semi-supervised method for intelligent facies identification. Our method considers the continuity of the lateral variation of strata and uses cosine similarity to quantify the similarity of the seismic data in the feature domain. The maximum-difference sample in the neighborhood of the currently used training data is then found to reasonably expand the training set. This process continuously increases the amount of training data and learns its distribution. We integrate old knowledge while absorbing new knowledge to realize incremental semi-supervised learning and thereby evolve the network models. In this work, accuracy and the confusion matrix are employed to jointly control the predicted results of the model from both overall and partial perspectives. The method is then applied to a three-dimensional (3D) real dataset and the results are evaluated quantitatively. Using unlabeled data, our proposed method acquires more accurate and stable testing results than conventional supervised learning algorithms that only use well-labeled data. A considerable improvement for small-sample categories is also observed. Using less than 1% of the training data, the proposed method achieves an average accuracy of over 95% on the 3D dataset, whereas the conventional supervised learning algorithm achieved only approximately 85%.
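A small sketch of the selection rule described above: within a neighborhood of the current training traces, pick the sample whose feature vector has the lowest cosine similarity to the training features (the "maximum-difference" sample). The feature extraction, neighborhood definition, and array shapes are simplified assumptions made for illustration.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def pick_max_difference(train_feats, neighbor_feats):
    """Return the index of the neighborhood sample least similar (on average,
    by cosine similarity) to the features of the current training set."""
    sim = cosine_similarity(neighbor_feats, train_feats)  # (n_neighbors, n_train)
    return int(np.argmin(sim.mean(axis=1)))

train_feats = np.random.rand(20, 64)      # features of currently used training traces
neighbor_feats = np.random.rand(200, 64)  # features of laterally adjacent traces
new_idx = pick_max_difference(train_feats, neighbor_feats)
# The selected trace is added to the training set and the network is then
# re-trained incrementally, integrating old knowledge with the new sample.
```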
基金This research/paper was fully supported by Universiti Teknologi PETRONAS,under the Yayasan Universiti Teknologi PETRONAS(YUTP)Fundamental Research Grant Scheme(YUTP-015LC0-123).
Abstract: Anomaly detection in high-dimensional data is a critical research issue with serious implications for real-world problems. Many issues in this field remain unsolved, so several modern anomaly detection methods struggle to maintain adequate accuracy due to the highly descriptive nature of big data. Such a phenomenon is referred to as the "curse of dimensionality", which affects traditional techniques in terms of both accuracy and performance. Thus, this research proposes a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers to reduce the difference between the input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiments, two issues were investigated and solved to enhance the results. The first is the dataset class imbalance, which was solved using the SMOTE technique. The second is poor performance, which can be addressed with an optimization algorithm. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam, and Adamax; the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that our proposed model can detect anomalies by efficiently reducing the high dimensionality of the dataset, with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) rate of 0.9649, and a minimal loss function during the hybrid model training.
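A hedged sketch of that pipeline: SMOTE to rebalance the classes, a five-layer fully connected autoencoder, and the Adamax optimizer, with reconstruction error used as the anomaly score. The layer widths, learning rate, and thresholding step are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from imblearn.over_sampling import SMOTE   # rebalances the minority class

class DeepAE(nn.Module):
    """Five-layer fully connected autoencoder for anomaly scoring."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),       # bottleneck layer
            nn.Linear(8, 64), nn.ReLU(),
            nn.Linear(64, in_dim),
        )
    def forward(self, x):
        return self.net(x)

def train(X, y, epochs=50):
    X_bal, _ = SMOTE().fit_resample(X, y)          # address class imbalance
    X_bal = torch.tensor(X_bal, dtype=torch.float32)
    model = DeepAE(X_bal.shape[1])
    opt = torch.optim.Adamax(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X_bal), X_bal)        # minimise reconstruction error
        loss.backward()
        opt.step()
    return model
# At test time, samples whose reconstruction error exceeds a chosen threshold
# are flagged as anomalies.
```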