Deep learning plays a vital role in real-life applications, for example object identification, human face recognition, speech recognition, biometric identification, and short- and long-term forecasting of data. The main objective of our work is to predict the market performance of the Dhaka Stock Exchange (DSE) at the daily closing price using different deep learning techniques. In this study, we use the LSTM (Long Short-Term Memory) network to forecast DSE data for the convenience of shareholders. We train LSTM networks on historical data and forecast the future time series, which is then compared against held-out test data. We compute the Root Mean Square Error (RMSE) between the forecast and the test data and reduce this error by updating the LSTM network. After this refinement, the LSTM network delivers excellent performance and outperforms existing work on stock market price prediction.
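As an illustration of the workflow this abstract describes, the following is a minimal sketch (not the authors' implementation) of fitting an LSTM to a univariate closing-price series and scoring it with RMSE; the lookback window, network size, epoch count, and the synthetic price series are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, lookback, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

def make_windows(series, lookback=30):
    """Slide a `lookback`-day window over the closing prices; the next day's close is the target."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(series[lookback:], dtype=torch.float32))

closing = np.cumsum(np.random.randn(1000)) + 100.0   # stand-in for DSE daily closing prices
split = int(0.8 * len(closing))
X_tr, y_tr = make_windows(closing[:split])
X_te, y_te = make_windows(closing[split:])

model = PriceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                                   # illustrative number of epochs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_tr), y_tr)
    loss.backward()
    opt.step()

with torch.no_grad():
    rmse = torch.sqrt(nn.functional.mse_loss(model(X_te), y_te)).item()
print(f"test RMSE: {rmse:.4f}")
```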
The remaining useful life prediction of rolling bearings is vital for safety and reliability assurance. In engineering scenarios, only a small amount of bearing performance degradation data can be obtained through accelerated life testing. In the absence of lifetime data, the hidden long-term correlation within performance degradation data is challenging to mine effectively, which is the main factor restricting the prediction precision and engineering application of residual life prediction methods. To address this problem, a novel method based on a multi-layer perceptron neural network and a bidirectional long short-term memory network is proposed. Firstly, a nonlinear health indicator (HI) calculation method based on kernel principal component analysis (KPCA) and the exponentially weighted moving average (EWMA) is designed. Then, using the raw vibration data and the HI, a multi-layer perceptron (MLP) neural network is trained to further calculate the HI of the online bearing in real time. Furthermore, a bidirectional long short-term memory model (BiLSTM) optimized by particle swarm optimization (PSO) is used to mine the time-series features of the HI and predict the remaining service life. Performance verification experiments and comparative experiments are carried out on the XJTU-SY open bearing dataset. The results indicate that this method has an excellent ability to predict the future HI and the remaining life.
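The HI construction step described above can be sketched as follows, assuming an RBF-kernel KPCA compressed to one component and a simple recursive EWMA; the smoothing factor, kernel choice, and synthetic feature matrix are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def health_indicator(features, alpha=0.2):
    """Nonlinear HI: project degradation features with KPCA, then smooth with an EWMA.

    `features` is an (n_samples, n_features) matrix of vibration-derived features
    (RMS, kurtosis, etc.); `alpha` is an assumed EWMA smoothing factor.
    """
    kpca = KernelPCA(n_components=1, kernel="rbf")
    raw_hi = kpca.fit_transform(features).ravel()

    smoothed = np.empty_like(raw_hi)
    smoothed[0] = raw_hi[0]
    for t in range(1, len(raw_hi)):
        smoothed[t] = alpha * raw_hi[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

# Example with synthetic degradation-like features.
rng = np.random.default_rng(0)
fake_features = np.cumsum(rng.normal(size=(200, 6)), axis=0)
hi = health_indicator(fake_features)
print(hi[:5])
```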
A fast-charging policy is widely employed to alleviate the inconvenience caused by the extended charging time of electric vehicles. However, fast charging exacerbates battery degradation and shortens battery lifespan. In addition, there is still a lack of tailored health estimation for fast-charging batteries; most existing methods are applicable only at lower charging rates. This paper proposes a novel method for estimating the health of lithium-ion batteries, tailored for multi-stage constant current-constant voltage fast-charging policies. Initially, short charging segments are extracted by monitoring current switches, followed by deriving voltage sequences using interpolation techniques. Subsequently, a graph generation layer is used to transform the voltage sequence into graphical data. Furthermore, the integration of a graph convolution network with a long short-term memory network enables the extraction of information related to inter-node message transmission, capturing the key local and temporal features of the battery degradation process. Finally, the method is validated using aging data from 185 cells and 81 distinct fast-charging policies. A 4-minute charging duration achieves a balance between high accuracy in estimating battery state of health and low data requirements, with a mean absolute error of 0.34% and a root mean square error of 0.66%.
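A minimal sketch of the segment-extraction step, under the assumption that a current switch is a step change between constant-current stages and that the voltage over the following 4-minute window is resampled to a fixed-length sequence by linear interpolation; the switch threshold, sequence length, and synthetic profile are illustrative.

```python
import numpy as np

def extract_voltage_sequence(t, current, voltage, seq_len=64):
    """Cut a short charging segment at the first current switch and resample its voltage.

    `t` (seconds), `current` (A) and `voltage` (V) are assumed to be synchronized 1-D arrays
    from a multi-stage CC-CV fast charge; `seq_len` is an illustrative sequence length.
    """
    # A current "switch" is a step change between constant-current stages.
    switch_idx = np.argmax(np.abs(np.diff(current)) > 0.5) + 1   # 0.5 A threshold is an assumption
    t0 = t[switch_idx]
    mask = (t >= t0) & (t <= t0 + 240.0)                         # 4-minute segment (240 s)
    # Interpolate the voltage onto a uniform grid so every cell yields a same-length input.
    grid = np.linspace(t[mask][0], t[mask][-1], seq_len)
    return np.interp(grid, t[mask], voltage[mask])

# Synthetic two-stage CC profile for demonstration.
t = np.arange(0.0, 1200.0, 1.0)
current = np.where(t < 300, 6.0, 3.0)
voltage = 3.4 + 0.0005 * t + 0.01 * np.sin(t / 50)
seq = extract_voltage_sequence(t, current, voltage)
print(seq.shape)  # (64,)
```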
Stress changes due to changes in fluid pressure and temperature in a faulted formation may lead to the opening or shearing of the fault. This can result from subsurface (geo)engineering activities such as fluid injection and geologic disposal of nuclear waste. Such activities are expected to increase in the future, making it necessary to assess their short- and long-term safety. Here, a new machine learning (ML) approach to model pore pressure and fault displacements in response to high-pressure fluid injection cycles is developed. The focus is on fault behavior near the injection borehole. To capture the temporal dependencies in the data, long short-term memory (LSTM) networks are utilized. To prevent error accumulation within the forecast window, four critical measures for training a robust LSTM model to predict fault response are highlighted: (i) setting an appropriate value of the LSTM lag, (ii) calibrating the LSTM cell dimension, (iii) reducing the learning rate during weight optimization, and (iv) not adopting an independent injection cycle as the validation set. Several numerical experiments were conducted, which demonstrated that the ML model can capture the peaks in pressure and associated fault displacement that accompany an increase in fluid injection. The model also captured the decay in pressure and displacement during the injection shut-in period. Further, the ability of an ML model to highlight key changes in fault hydromechanical activation processes was investigated, which shows that ML can be used to monitor the risk of fault activation and leakage during high-pressure fluid injection.
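The four training measures listed above might look roughly as follows in practice; the lag, cell dimension, scheduler settings, and the stand-in injection data are placeholders, not the values used in the study.

```python
import numpy as np
import torch
import torch.nn as nn

LAG = 24        # (i) LSTM lag: number of past samples per prediction (illustrative value)
CELL_DIM = 48   # (ii) LSTM cell dimension (illustrative value)

class FaultLSTM(nn.Module):
    def __init__(self, n_channels=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, CELL_DIM, batch_first=True)
        self.head = nn.Linear(CELL_DIM, n_channels)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

def lagged_windows(signals, lag=LAG):
    """signals: (n_steps, n_channels) pressure and fault-displacement histories."""
    X = np.stack([signals[i:i + lag] for i in range(len(signals) - lag)])
    return torch.tensor(X, dtype=torch.float32), torch.tensor(signals[lag:], dtype=torch.float32)

history = np.random.default_rng(1).normal(size=(2000, 3)).cumsum(axis=0)  # stand-in injection data
X, y = lagged_windows(history)
# (iv) the validation samples come from the same injection cycles as training, not an independent cycle
n_val = int(0.2 * len(X))
X_tr, y_tr, X_val, y_val = X[:-n_val], y[:-n_val], X[-n_val:], y[-n_val:]

model = FaultLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# (iii) reduce the learning rate when the validation loss plateaus during weight optimization
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=10)

for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_tr), y_tr)
    loss.backward()
    opt.step()
    with torch.no_grad():
        sched.step(nn.functional.mse_loss(model(X_val), y_val))
```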
This paper introduces the time-frequency analyzed long short-term memory (TF-LSTM) neural network method for jamming signal recognition at the Global Navigation Satellite System (GNSS) receiver. The method introduces the long short-term memory (LSTM) neural network into the recognition algorithm and combines it with time-frequency (TF) analysis for signal preprocessing. Five kinds of navigation jamming signals, including white Gaussian noise (WGN), pulse jamming, sweep jamming, audio jamming, and spread spectrum jamming, are used as input for training and recognition. Since the signal parameters and quantities are unknown in an actual scenario, this work builds a data set containing jamming signals of multiple kinds and parameter settings to train the TF-LSTM. The performance of this method is evaluated by simulations and experiments. The method has higher recognition accuracy and better robustness than existing methods such as the LSTM and the convolutional neural network (CNN).
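A minimal sketch of the TF preprocessing stage, assuming it amounts to a short-time Fourier (spectrogram) representation of the received complex samples; the sampling rate, STFT parameters, and the synthetic sweep-jamming example are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 10e6  # assumed complex sampling rate of the GNSS front end (Hz)

def tf_features(iq_samples, nperseg=256):
    """Time-frequency image fed to the LSTM: log-magnitude spectrogram of the received samples."""
    f, t, Sxx = spectrogram(iq_samples, fs=fs, nperseg=nperseg, noverlap=nperseg // 2,
                            return_onesided=False)
    return 10 * np.log10(np.abs(Sxx) + 1e-12)   # (freq_bins, time_frames)

# Example: a linear sweep jammer embedded in white Gaussian noise.
n = 100_000
time = np.arange(n) / fs
sweep = np.exp(1j * 2 * np.pi * (1e5 * time + 0.5 * 4e8 * time**2))
wgn = (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)
img = tf_features(sweep + 0.5 * wgn)
print(img.shape)
```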
This research focuses on fault detection and health monitoring of high-power thyristor converters. Given the critical role of the thyristor converter in nuclear fusion systems, a method based on a long short-term memory (LSTM) neural network model is proposed to monitor the operational state of the converter and accurately detect faults as they occur. By sampling and processing a large volume of thyristor converter operation data, the LSTM model is trained to identify and detect abnormal states, and the health status of the power supply is monitored. Compared with traditional methods, the LSTM model shows higher accuracy and better abnormal-state detection ability. The experimental results show that this method can effectively improve the reliability and safety of the thyristor converter and provide a strong guarantee for the stable operation of the nuclear fusion reactor.
A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over an acoustic model based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a forced-aligned Hidden Markov Model (HMM) state sequence obtained from the GMM-based acoustic model, and therefore a long computation time for training both the GMM-based acoustic model and the deep learning-based acoustic model. To solve this problem, an acoustic model using the CTC algorithm is proposed. The CTC algorithm does not require the GMM-based acoustic model because it does not use the forced-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC used a small-scale training corpus. In this paper, the LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves Word Error Rates (WER) of 6.18% and 15.01% for clean speech and noisy speech, respectively, which is comparable to the performance of the acoustic model based on the hybrid method.
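A minimal sketch of the CTC objective that removes the need for a forced-aligned HMM state sequence: the LSTM emits per-frame label posteriors and the CTC loss marginalizes over all alignments. The label-set size, feature dimension, and tensor shapes are placeholders, not the corpus settings used in the paper.

```python
import torch
import torch.nn as nn

N_CLASSES = 40          # assumed label set: 39 phones/graphemes + 1 CTC blank (index 0)
FEAT_DIM = 80           # assumed acoustic feature dimension (e.g. log-Mel filterbanks)

class CTCAcousticModel(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(FEAT_DIM, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, N_CLASSES)

    def forward(self, feats):                          # feats: (batch, frames, FEAT_DIM)
        out, _ = self.lstm(feats)
        return self.proj(out).log_softmax(dim=-1)      # per-frame log posteriors

model = CTCAcousticModel()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(4, 200, FEAT_DIM)                  # 4 utterances, 200 frames each
targets = torch.randint(1, N_CLASSES, (4, 30))         # label sequences (no blanks)
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

# CTCLoss expects (frames, batch, classes); no forced HMM state alignment is needed.
log_probs = model(feats).transpose(0, 1)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(float(loss))
```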
To explore new operational wave forecasting methods, a forecasting model for wave heights at three stations in the Bohai Sea has been developed. The model is based on a long short-term memory (LSTM) neural network with sea surface wind and wave heights as training samples. The prediction performance of the model is evaluated, and the error analysis shows that, when using the same set of numerically predicted sea surface wind as input, the prediction error produced by the proposed LSTM model at Sta. N01 is 20%, 18%, and 23% lower than that of the conventional numerical wave models in terms of the total root mean square error (RMSE), scatter index (SI), and mean absolute error (MAE), respectively. In particular, for significant wave heights in the range of 3–5 m, the prediction accuracy of the LSTM model improves the most remarkably, with RMSE, SI, and MAE all decreasing by 24%. It is also evident that the number of hidden neurons, the number of buoys used, and the time length of the training samples all affect the prediction accuracy. However, the prediction does not necessarily improve with an increase in the number of hidden neurons or the number of buoys used. The experiment trained on data with the longest time length performs the best overall compared with experiments using shorter training periods. Overall, the long short-term memory neural network proves to be a very promising method for future development and application in wave forecasting.
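The three error measures used for the comparison can be written directly; here the scatter index is assumed to be the RMSE normalized by the mean observed wave height, which is its common definition, and the sample values are illustrative.

```python
import numpy as np

def wave_errors(observed, predicted):
    """RMSE, scatter index (SI) and MAE for significant wave height forecasts."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    si = rmse / np.mean(observed)            # assumed SI definition: RMSE / mean observation
    mae = np.mean(np.abs(predicted - observed))
    return rmse, si, mae

obs = np.array([1.2, 2.5, 3.8, 4.4, 3.1])    # buoy-measured wave heights (m), illustrative
pred = np.array([1.1, 2.7, 3.5, 4.9, 3.0])   # LSTM forecasts (m), illustrative
print(wave_errors(obs, pred))
```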
A Holter device usually monitors electrocardiogram (ECG) signals for more than 24 hours to capture short-lived cardiac abnormalities. In view of the large amount of Holter data and the fact that the normal part accounts for the majority, it is reasonable to design an algorithm that automatically eliminates as many normal data segments as possible without missing any abnormal segment, and then passes the remaining segments to doctors or computer programs for further diagnosis. In this paper, we propose a preliminary abnormal-segment screening method for Holter data. Based on long short-term memory (LSTM) networks, a prediction model is established and trained with the normal data of a monitored subject. Then, on the basis of kernel density estimation, we learn the distribution law of the prediction errors obtained by applying the trained LSTM model to regular data. On this basis, the preliminary abnormal ECG segment screening is carried out without R-wave detection. Experiments on the MIT-BIH arrhythmia database show that, under the condition that no abnormal point is missed, 53.89% of normal segments can be effectively eliminated. This can greatly reduce the workload of subsequent processing.
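A minimal sketch of the screening rule, assuming a Gaussian kernel density estimate over the prediction errors from normal data and a density-based criterion for discarding segments; the threshold quantile, tolerance, and stand-in error values are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Prediction errors of the trained LSTM on *normal* ECG segments (stand-in values).
rng = np.random.default_rng(0)
normal_errors = rng.normal(0.0, 0.05, size=5000)

kde = gaussian_kde(normal_errors)                     # distribution law of normal prediction errors
threshold = np.quantile(kde(normal_errors), 0.01)     # density below this is treated as unusual

def can_discard(segment_errors, tolerance=0.05):
    """Discard a segment as normal if almost all of its errors fall in high-density regions."""
    densities = kde(np.asarray(segment_errors))
    return float(np.mean(densities < threshold)) <= tolerance

print(can_discard(rng.normal(0.0, 0.05, 200)))        # normal-looking segment -> True
print(can_discard(rng.normal(0.0, 0.30, 200)))        # large prediction errors -> False, kept for review
```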
In this paper, the recurrent neural network structure of a bidirectional long short-term memory network (Bi-LSTM) with special memory cells that store information is used to characterize the deep features of the variation pattern between logging and seismic data. A mapping relationship model between high-frequency logging data and low-frequency seismic data is established via nonlinear mapping. The seismic waveform is approximated using the logging curve in the low-frequency band to obtain a nonlinear mapping model at this scale, which then stepwise approaches the logging curve in the high-frequency band. Finally, a seismic-inversion method of nonlinear-mapping, multilevel well-seismic matching based on the Bi-LSTM network is developed. The characteristic of this method is that, by applying the multilevel well-seismic matching process, the seismic data are stepwise matched to a scale range consistent with the logging curve. Further, the matching operator at each level can be obtained stably, effectively overcoming the problems that occur in the well-seismic matching process, such as the inconsistency in the scale of the two types of data, the accuracy of extracting the seismic wavelet from well-side seismic traces, and the multiplicity of solutions. Model tests and practical applications demonstrate that this method improves the vertical resolution of inversion results while preserving the boundary and lateral characteristics of the sand body, thereby improving the accuracy of thin-layer sand body prediction and achieving an improved practical application effect.
Load forecasting is of great significance to the development of new power systems. With the advancement of smart grids, the integration of distributed renewable energy sources and power electronics devices has made power load data increasingly complex and volatile, placing higher demands on load prediction and analysis. To improve the accuracy of short-term power load prediction, a CNN-BiLSTM-TPA short-term power prediction model based on an Improved Whale Optimization Algorithm (IWOA) with mixed strategies is proposed. Firstly, the model combines a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory network (BiLSTM) to fully extract the spatio-temporal characteristics of the load data. Then, the Temporal Pattern Attention (TPA) mechanism is introduced into the CNN-BiLSTM model to automatically assign weights to the hidden states of the BiLSTM, allowing the model to differentiate the importance of load sequences at different time intervals. At the same time, to address the difficulty of selecting the temporal model's parameters and the whale optimization algorithm's poor global search ability and tendency to fall into local optima, the whale algorithm is improved (IWOA) using a hybrid strategy of Tent chaos mapping and the Levy flight strategy, so as to better search the model parameters. In the experiments, real load data from a region in Zhejiang are analyzed as an example, and the prediction accuracy (R2) of the proposed method reaches 98.83%. Compared with prediction models such as BP, WOA-CNN-BiLSTM, SSA-CNN-BiLSTM, and CNN-BiGRU-Attention, the experimental results show that the proposed model has higher prediction accuracy.
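The two IWOA ingredients named above, Tent chaos mapping for population initialization and Levy flight for perturbing the search, can be sketched as follows; the Levy exponent, parameter bounds, and initial seeds are illustrative assumptions.

```python
import numpy as np
from math import gamma

def tent_sequence(n, x0=0.37):
    """Tent chaotic map used to spread the initial whale population over [0, 1]."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = 2 * x[i - 1] if x[i - 1] < 0.5 else 2 * (1 - x[i - 1])
    return x

def levy_step(dim, beta=1.5):
    """Levy flight step (Mantegna's algorithm) that perturbs a whale's position."""
    sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma_u, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Initialize 30 candidate (hidden_units, learning_rate) pairs within assumed bounds.
lows, highs = np.array([16, 1e-4]), np.array([256, 1e-2])
chaos = np.column_stack([tent_sequence(30, 0.37), tent_sequence(30, 0.71)])
population = lows + chaos * (highs - lows)
population[0] += 0.01 * levy_step(2)                  # example Levy perturbation of one whale
print(population[:3])
```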
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical-flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction, and the ConvNets for all groups share parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produces very competitive results. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
A correct and timely fault diagnosis is important for improving the safety and reliability of chemical processes. With the advancement of big data technology, data-driven fault diagnosis methods are being used extensively and still have considerable potential. In recent years, methods based on deep neural networks have made significant breakthroughs, and fault diagnosis methods for industrial processes based on deep learning have attracted considerable research attention. Therefore, we propose a fusion deep-learning algorithm based on a fully convolutional neural network (FCN) to extract features and build models that correctly diagnose all types of faults. We use long short-term memory (LSTM) units to extend the proposed FCN so that the deep learning model can better extract the time-domain features of chemical process data. We also introduce an attention mechanism into the model, aimed at highlighting the importance of features, which is significant for the fault diagnosis of chemical processes with many features. When applied to the benchmark Tennessee Eastman process, the proposed model exhibits impressive performance, demonstrating the effectiveness of the attention-based LSTM FCN in chemical process fault diagnosis.
There are two technical challenges in predicting slope deformation. The first is the random displacement, which cannot be decomposed and predicted by numerically resolving the observed accumulated displacement and time series of a landslide. The second is the dynamic evolution of a landslide, which cannot feasibly be simulated by traditional prediction models. In this paper, a dynamic displacement prediction model is introduced for composite landslides based on a combination of empirical mode decomposition with soft screening stop criteria (SSSC-EMD) and a deep bidirectional long short-term memory (DBi-LSTM) neural network. In the proposed model, time series analysis and SSSC-EMD are used to decompose the observed accumulated displacements of a slope into three components, viz. trend displacement, periodic displacement, and random displacement. Then, by analyzing the evolution pattern of a landslide and the key factors triggering landslides, appropriate influencing factors are selected for each displacement component, and the DBi-LSTM neural network is used to carry out multi-data-driven dynamic prediction for each component. The accumulated displacement prediction is obtained by summing the components. For accuracy verification and engineering practicability of the model, field observations from two known landslides in China, the Xintan landslide and the Bazimen landslide, were collected for comparison and evaluation. The case study verified that the proposed model can better characterize the "stepwise" deformation characteristics of a slope. Compared with the long short-term memory (LSTM) neural network, the support vector machine (SVM), and the autoregressive integrated moving average (ARIMA) model, the DBi-LSTM neural network has higher accuracy in predicting the periodic displacement of slope deformation, with the mean absolute percentage error reduced by 3.063%, 14.913%, and 13.960%, respectively, and the root mean square error reduced by 1.951 mm, 8.954 mm, and 7.790 mm, respectively. In conclusion, this model not only has high prediction accuracy but is also more stable, providing new insight for practical landslide prevention and control engineering.
An accurate landslide displacement prediction is an important part of a landslide warning system. Aiming at the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models, this paper proposes a dynamic prediction model of landslide displacement based on singular spectrum analysis (SSA) and a stacked long short-term memory (SLSTM) network. SSA is used to decompose the landslide accumulated-displacement time series into trend-term and periodic-term displacement subsequences. A cubic polynomial function is used to predict the trend-term displacement subsequence, and the SLSTM neural network is used to predict the periodic-term displacement subsequence. The Bayesian optimization algorithm is used to determine that the SLSTM network input sequence length is 12 and the number of hidden layer nodes is 18. The SLSTM network is updated by adding predicted values to the training set to achieve dynamic displacement prediction. Finally, the accumulated landslide displacement is obtained by superimposing the predicted values of the displacement subsequences. The proposed model was verified on the Xintan landslide in Hubei Province, China. The results show that, when predicting the periodic-term displacement, the SLSTM network has higher prediction accuracy than the support vector machine (SVM) and the autoregressive integrated moving average (ARIMA) model: the mean relative error (MRE) is reduced by 4.099% and 3.548%, respectively, and the root mean square error (RMSE) is reduced by 5.830 mm and 3.854 mm, respectively. It is concluded that the SLSTM network model can better simulate the dynamic characteristics of landslides.
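A minimal sketch of the trend-term step, fitting a cubic polynomial to the SSA trend component and extrapolating it a few steps ahead; the synthetic trend values stand in for the decomposed Xintan series.

```python
import numpy as np

def predict_trend(trend, horizon=1, degree=3):
    """Fit a cubic polynomial to the SSA trend-term displacement and extrapolate it."""
    t = np.arange(len(trend))
    coeffs = np.polyfit(t, trend, degree)
    future_t = np.arange(len(trend), len(trend) + horizon)
    return np.polyval(coeffs, future_t)

# Synthetic trend-term displacement (mm) standing in for the SSA-decomposed accumulated series.
t = np.arange(60)
trend = 0.002 * t**3 - 0.05 * t**2 + 2.0 * t + 150.0
print(predict_trend(trend, horizon=3))
```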
The global financial and economic market is now made up of several powerful and complex structures. Over the last few decades, a number of techniques and theories have been introduced that have revolutionized the understanding of these systems and the forecasting of financial markets based on time series analysis. However, none has yet been shown to work successfully and consistently. In this project, a special form of neural network modeling, the LSTM, is used to forecast the foreign exchange rates of currencies. This type of model has become popular in many forecasting applications because it can capture complex non-linear relationships between input variables and the quantity to be predicted. Compared with the stock market, exchange rates are better suited to this approach due to the availability of macroeconomic data that can be used to train the network to learn the impact of particular variables on the rate to be predicted. The data were collected using Quandl, an economic and financial platform that offers quantitative indicators for a wide variety of countries. The model is evaluated with three different metrics against an exponential moving average and an autoregressive integrated moving average, in order to compare and validate its ability to reliably predict future values and to determine which model predicts most accurately.
Wind power volatility not only limits large-scale grid connection but also poses many challenges to safe grid operation. Accurate wind power prediction can mitigate the adverse effects of wind power volatility on grid connection. Since the characteristics of both antecedent and subsequent wind power data jointly determine the accuracy of a prediction model, a short-term wind power prediction method based on a combined neural network is proposed. First, a Bi-directional Long Short-Term Memory (BiLSTM) network prediction model is constructed, and the bi-directional nature of the BiLSTM network is used to deeply mine the wind power data and find correlations within the data. Secondly, to avoid the limitation of a single prediction model when the wind power changes abruptly, a Wavelet Transform-Improved Adaptive Genetic Algorithm-Back Propagation (WT-IAGA-BP) neural network is combined with the BiLSTM network for short-term wind power prediction. Finally, comparison with the LSTM, BiLSTM, WT-LSTM, WT-BiLSTM, WT-IAGA-BP, and WT-IAGA-BP&LSTM prediction models verifies that the short-term wind power prediction model based on the combination of the WT-IAGA-BP neural network and the BiLSTM network has higher prediction accuracy.
The problems in equipment fault detection include data dimension explosion, computational complexity, and low detection accuracy. To solve these problems, a device anomaly detection algorithm based on an enhanced long short-term memory (LSTM) network is proposed. The algorithm first reduces the dimensionality of the device sensor data by principal component analysis (PCA), extracting the strongly correlated variables among the multidimensional sensor data with minimal information loss, and then uses an enhanced stacked LSTM to predict the extracted temporal data, thus improving the accuracy of anomaly detection. To improve the efficiency of the anomaly detection, a genetic algorithm (GA) is used to adjust the magnitude of the enhancements applied to the LSTM model. Validation on actual data from pumps shows that the algorithm significantly improves the recall rate and detection speed of device anomaly detection, with a recall rate of 97.07%, indicating that the algorithm is effective and efficient for device anomaly detection in a real production environment.
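A minimal sketch of the dimensionality-reduction stage, assuming PCA is asked to retain enough components to explain 95% of the sensor variance before the data reach the stacked LSTM; the variance target and synthetic sensor matrix are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for multidimensional pump sensor data: 40 correlated channels, 10 000 time steps.
latent = rng.normal(size=(10_000, 5))
sensors = latent @ rng.normal(size=(5, 40)) + 0.05 * rng.normal(size=(10_000, 40))

pca = PCA(n_components=0.95)            # keep enough components for 95% explained variance
reduced = pca.fit_transform(sensors)    # (10 000, k) with k << 40, fed to the stacked LSTM
print(reduced.shape, pca.explained_variance_ratio_.sum())
```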
Very short-term prediction of ship motion is critically important in many scenarios, such as carrier aircraft landings and marine engineering operations. This paper introduces the newly developed functional deep learning model, the deep operator network (DeepONet), to predict very short-term ship motion in waves. It takes wave height as input and predicts ship motion as output, employing a cause-to-effect prediction approach. The modeling data for this study are derived from publicly available experimental data from the Iowa Institute of Hydraulic Research. Initially, the hyperparameters of the neural network were tuned to identify the optimal parameter combination. Subsequently, the DeepONet model relating wave height to multi-degree-of-freedom motion was established, and the impact of increasing time steps on prediction accuracy was analyzed. Lastly, a comparative analysis was performed between the DeepONet model and the classical time series model, long short-term memory (LSTM). The DeepONet model exhibited a tenfold improvement in accuracy for roll and heave motions, and its advantage strengthened as the forecast duration increased. As a functional prediction model, DeepONet offers a novel and promising tool for very short-term ship motion prediction.
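A minimal sketch of a DeepONet of the kind described: a branch net encodes the sampled wave-height history, a trunk net encodes the query time, and their dot product gives the predicted motion; layer sizes, sensor count, and activation choices are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Branch net: wave height sampled at m sensors. Trunk net: prediction time t."""
    def __init__(self, m_sensors=100, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(),
                                   nn.Linear(128, p), nn.Tanh())
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, wave_samples, t):
        # wave_samples: (batch, m_sensors) wave-height history; t: (batch, 1) query time
        b = self.branch(wave_samples)
        tr = self.trunk(t)
        return (b * tr).sum(dim=-1, keepdim=True) + self.bias   # predicted roll/heave value

model = DeepONet()
waves = torch.randn(8, 100)          # 8 wave-height histories, 100 samples each (illustrative)
t_query = torch.rand(8, 1)           # normalized prediction instants
motion = model(waves, t_query)
print(motion.shape)                  # torch.Size([8, 1])
```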