Funding: National Key Research and Development Program of China (Grant No. 2022YFE0102700); National Natural Science Foundation of China (Grant No. 52102420); research project "Safe Da Batt" (03EMF0409A) funded by the German Federal Ministry of Digital and Transport (BMDV); China Postdoctoral Science Foundation (Grant No. 2023T160085); Sichuan Science and Technology Program (Grant No. 2024NSFSC0938).
Abstract: A fast-charging policy is widely employed to alleviate the inconvenience caused by the extended charging time of electric vehicles. However, fast charging exacerbates battery degradation and shortens battery lifespan. In addition, there is still a lack of health estimation methods tailored to fast-charging batteries; most existing methods are applicable only at lower charging rates. This paper proposes a novel method for estimating the health of lithium-ion batteries that is tailored to multi-stage constant current-constant voltage fast-charging policies. First, short charging segments are extracted by monitoring current switches, and voltage sequences are then derived using interpolation. Next, a graph generation layer transforms each voltage sequence into graph data. A graph convolution network combined with a long short-term memory network then extracts inter-node message-passing information, capturing the key local and temporal features of the battery degradation process. Finally, the method is validated on aging data from 185 cells covering 81 distinct fast-charging policies. A charging segment of only 4 minutes balances high accuracy in estimating battery state of health against low data requirements, with a mean absolute error of 0.34% and a root mean square error of 0.66%.
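As a rough illustration of the graph-plus-recurrent idea described above, the sketch below builds a simple chain graph over an interpolated voltage segment, applies one graph-convolution layer, and feeds the node features to an LSTM with a regression head for state of health. The chain-graph construction, layer sizes, and segment length are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Single graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):             # h: (batch, nodes, in_dim), a_hat: (nodes, nodes)
        return torch.relu(self.linear(a_hat @ h))

class GCNLSTMSoH(nn.Module):
    """Hypothetical GCN + LSTM regressor mapping a short voltage segment to SOH."""
    def __init__(self, n_nodes=240, hidden=64):
        super().__init__()
        self.gcn = GCNLayer(1, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        # Chain graph over consecutive voltage samples, with self-loops,
        # symmetrically normalized: A_hat = D^(-1/2) (A + I) D^(-1/2).
        a = torch.eye(n_nodes)
        idx = torch.arange(n_nodes - 1)
        a[idx, idx + 1] = 1.0
        a[idx + 1, idx] = 1.0
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_hat", d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :])

    def forward(self, voltage):               # voltage: (batch, n_nodes)
        h = self.gcn(voltage.unsqueeze(-1), self.a_hat)
        out, _ = self.lstm(h)                 # temporal features across the node sequence
        return self.head(out[:, -1])          # SOH estimate

soh = GCNLSTMSoH()(torch.rand(8, 240))        # 8 segments of 240 interpolated voltage samples
```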
Abstract: Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying from visual information, primarily lip movements. In this study, we created a custom dataset for Indian English and organized the work into three main parts: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using mel-frequency cepstral coefficients, and classification was performed with a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was classified with a long short-term memory (LSTM) recurrent neural network. Finally, the two streams were integrated using a deep convolutional network. Indian English audio speech was recognized with training and testing accuracies of 93.67% and 91.53%, respectively, after 200 epochs. For visual speech recognition on the Indian English dataset, the training accuracy was 77.48% and the test accuracy was 76.19% after 60 epochs. After integration, the training and testing accuracies of audiovisual speech recognition on the Indian English dataset were 94.67% and 91.75%, respectively.
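The audio branch described above (MFCC features plus a one-dimensional CNN) can be sketched as follows. The sampling rate, filter sizes, and class count are illustrative assumptions; the dataset and training details are not reproduced here.

```python
import librosa
import torch
import torch.nn as nn

def mfcc_features(wav_path, n_mfcc=13):
    """Load a clip and return its MFCC matrix of shape (n_mfcc, frames)."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

class AudioCNN(nn.Module):
    """1D CNN over MFCC frames; the class count (e.g. word labels) is illustrative."""
    def __init__(self, n_mfcc=13, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mfcc, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over time -> fixed-size vector
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, n_mfcc, frames)
        return self.fc(self.net(x).squeeze(-1))

# Stand-in for a batch of MFCC matrices as returned by mfcc_features().
feats = torch.randn(4, 13, 98)
logits = AudioCNN()(feats)                    # (4, n_classes) word scores
```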
Abstract: Load forecasting is of great significance to the development of new power systems. With the advancement of smart grids, the integration of distributed renewable energy sources and power electronics devices has made power load data increasingly complex and volatile, placing higher demands on load prediction and analysis. To improve the accuracy of short-term power load prediction, a CNN-BiLSTM-TPA short-term load prediction model based on an Improved Whale Optimization Algorithm (IWOA) with mixed strategies is proposed. First, the model combines a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory network (BiLSTM) to fully extract the spatio-temporal characteristics of the load data. A Temporal Pattern Attention (TPA) mechanism is then introduced into the CNN-BiLSTM model to automatically assign weights to the hidden states of the BiLSTM, allowing the model to distinguish the importance of load sequences at different time intervals. To address the difficulty of selecting the temporal model's parameters and the poor global search ability of the whale optimization algorithm, which tends to fall into local optima, the algorithm is improved (IWOA) with a hybrid strategy combining Tent chaotic mapping and Lévy flight, enabling a better search over the model parameters. Using real load data from a region of Zhejiang as a case study, the prediction accuracy (R2) of the proposed method reaches 98.83%. Compared with prediction models such as BP, WOA-CNN-BiLSTM, SSA-CNN-BiLSTM, and CNN-BiGRU-Attention, the experimental results show that the proposed model achieves higher prediction accuracy.
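A minimal sketch of the CNN-BiLSTM backbone with an attention layer over the BiLSTM hidden states is shown below. It uses plain additive attention rather than the full temporal pattern attention and omits the IWOA parameter search; all shapes are illustrative.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttn(nn.Module):
    """CNN front-end + BiLSTM + simple attention for short-term load forecasting."""
    def __init__(self, n_features=1, hidden=64, horizon=1):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)      # attention score per time step
        self.out = nn.Linear(2 * hidden, horizon)

    def forward(self, x):                          # x: (batch, time, n_features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        states, _ = self.bilstm(h)                 # (batch, time, 2*hidden)
        weights = torch.softmax(self.score(states), dim=1)
        context = (weights * states).sum(dim=1)    # weighted sum over time
        return self.out(context)

pred = CNNBiLSTMAttn()(torch.randn(16, 96, 1))     # 96 past load points -> next value
```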
Funding: The researchers would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project.
Abstract: A tremendous number of vendor invoices is generated in the corporate sector. Automating manual data entry for payable documents requires highly accurate Optical Character Recognition (OCR). This paper proposes an end-to-end OCR system that performs both localization and recognition and serves as a single unit for automating payable-document processing such as cheques and cash disbursements. For text localization, maximally stable extremal regions are used to extract word or digit chunks from an invoice. Each chunk is then passed to a deep learning model that performs text recognition. The model combines convolutional neural networks with long short-term memory (LSTM): the convolutional layers extract features, which are fed to the LSTM. The model integrates feature extraction, sequence modeling, and transcription into a unified network. It handles sequences of unconstrained length, independent of character segmentation or horizontal scale normalization. Furthermore, it applies to both lexicon-free and lexicon-based text recognition, and it produces a comparatively small model that can be deployed in practical applications. The overall superior performance in the experimental evaluation demonstrates the usefulness of the proposed model, which is generic and can be used for other similar recognition scenarios.
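A CRNN-style recognizer of the kind described above can be sketched as follows: a convolutional feature extractor collapsed into a width-wise sequence, a bidirectional LSTM, and per-column character logits trained with a CTC loss. The layer sizes, input height, and alphabet size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Convolutional feature extractor + BiLSTM + per-column character logits (CTC)."""
    def __init__(self, n_chars=37, hidden=128):     # 36 symbols + CTC blank (illustrative)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(128 * 8, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_chars)

    def forward(self, img):                          # img: (batch, 1, 32, width)
        f = self.conv(img)                           # (batch, 128, 8, width/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)   # one feature vector per column
        out, _ = self.rnn(seq)
        return self.fc(out)                          # (batch, width/4, n_chars)

logits = CRNN()(torch.randn(2, 1, 32, 128))          # grayscale word/digit chunks, height 32
# CTC training step with dummy label sequences (values 1..36; 0 is the blank).
loss = nn.CTCLoss()(logits.log_softmax(2).permute(1, 0, 2),
                    torch.randint(1, 37, (2, 5)),
                    torch.tensor([32, 32]), torch.tensor([5, 5]))
```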
Funding: Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-19-006A3).
Abstract: As a common and high-risk disease, heart disease seriously threatens people's health. At the same time, in the era of the Internet of Things (IoT), smart medical devices have strong practical significance for medical workers and patients because of their ability to assist in disease diagnosis. Research on real-time diagnosis and classification algorithms for arrhythmia can therefore help improve diagnostic efficiency. In this paper, we design an automatic arrhythmia classification model based on a Convolutional Neural Network (CNN) and an Encoder-Decoder model. The model uses Long Short-Term Memory (LSTM) to account for the influence of time-series features on the classification results, and it is trained and tested on the MIT-BIH arrhythmia database. In addition, a Generative Adversarial Network (GAN) is adopted for data equalization to address the class imbalance problem. The simulation results show that, for inter-patient arrhythmia classification, the hybrid model combining CNN and the Encoder-Decoder model achieves the best classification accuracy, reaching 94.05%. It is particularly advantageous for classifying supraventricular ectopic beats (class S) and fusion beats (class F).
Funding: This study was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2018R1D1A1B07049932).
Abstract: Low dynamic range (LDR) images captured by consumer cameras have a limited luminance range. Because the conventional method for generating high dynamic range (HDR) images merges multiple-exposure LDR images of the same scene (assuming a stationary scene), we introduce a learning-based model for single-image HDR reconstruction. An input LDR image is sequentially segmented into local region maps based on the cumulative histogram of the input brightness distribution. Using the local region maps, SParam-Net estimates the parameters of an inverse tone mapping function to generate a pseudo-HDR image. The segmented region maps are processed as input sequences by a long short-term memory network. Finally, a fast super-resolution convolutional neural network is used for HDR image reconstruction. The proposed method was trained and tested on the HDR-Real, LDR-HDR-pair, and HDR-Eye datasets. The experimental results show that HDR images can be generated more reliably than with contemporary end-to-end approaches.
Abstract: Speech enhancement is the task of taking a noisy speech input and producing an enhanced speech output. In recent years, the need for speech enhancement has increased due to challenges arising in applications such as hearing aids, Automatic Speech Recognition (ASR), and mobile speech communication systems. Most speech enhancement research has been carried out for English, Chinese, and other European languages; only a few works address speech enhancement in Indian regional languages. In this paper, we propose a two-stage architecture for enhancing Tamil speech signals based on a convolutional recurrent network (CRN) that addresses real-time, single-channel speech enhancement. In the first stage, a mask-based long short-term memory (LSTM) network is used for noise suppression along with a loss function, and in the second stage, a convolutional encoder-decoder (CED) is used for speech restoration. The proposed model is evaluated across various speakers and noise environments, including babble noise, car noise, and white Gaussian noise. The proposed CRN model improves speech quality by 0.1 points compared with the LSTM baseline while requiring fewer parameters for training. The performance of the proposed model remains strong even at low signal-to-noise ratios (SNR).
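The first (noise-suppression) stage can be illustrated with a mask-estimating LSTM that predicts a time-frequency mask from the noisy magnitude spectrogram; the mask is applied to the noisy STFT before inverse transformation. The CED restoration stage, the actual loss function, and all hyperparameters are omitted or assumed.

```python
import torch
import torch.nn as nn

class MaskLSTM(nn.Module):
    """LSTM that predicts a [0, 1] time-frequency mask from a noisy magnitude spectrogram."""
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, mag):                       # mag: (batch, frames, n_freq)
        h, _ = self.lstm(mag)
        return self.fc(h)                         # mask with the same shape as mag

def enhance(noisy_wave, model, n_fft=512, hop=128):
    """Apply the predicted mask to the noisy STFT and reconstruct a waveform."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(noisy_wave, n_fft, hop_length=hop, window=window,
                      return_complex=True)        # (freq, frames)
    mag = spec.abs().T.unsqueeze(0)               # (1, frames, freq)
    mask = model(mag).squeeze(0).T                # (freq, frames)
    return torch.istft(spec * mask, n_fft, hop_length=hop, window=window)

model = MaskLSTM()
clean_est = enhance(torch.randn(16000), model)    # 1 s of 16 kHz audio (illustrative)
```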
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52301322), the Jiangsu Provincial Natural Science Foundation (Grant No. BK20220653), the National Science Fund for Distinguished Young Scholars (Grant No. 52025112), and the Key Projects of the National Natural Science Foundation of China (Grant No. 52331011).
Abstract: Accurately predicting motion responses is a crucial component of the design process for floating offshore structures. This study introduces a hybrid model that integrates a convolutional neural network (CNN), a bidirectional long short-term memory (BiLSTM) neural network, and an attention mechanism for forecasting the short-term motion responses of a semi-submersible. First, the motions are processed by the CNN for feature extraction. The extracted features are then used by the BiLSTM network to forecast future motions. To enhance the predictive capability of the network, an attention mechanism is integrated. In addition to the hybrid model, the BiLSTM is independently employed to forecast the motion responses of the semi-submersible, serving as a benchmark for comparison. Furthermore, both 1D and 2D convolutions are tested to examine the influence of the convolutional dimensionality on the predicted results. The results demonstrate that the hybrid 1D CNN-BiLSTM network with an attention mechanism outperforms all other models in accurately predicting motion responses.
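The sketch below shows one way such a hybrid forecaster could be assembled, with a switchable 1D or 2D convolutional front-end feeding a BiLSTM and a simple attention layer. The degrees of freedom, forecast horizon, and layer sizes are illustrative assumptions rather than the study's configuration.

```python
import torch
import torch.nn as nn

class MotionForecaster(nn.Module):
    """BiLSTM forecaster for multi-DOF motions with a 1D or 2D convolutional front-end."""
    def __init__(self, n_dof=6, hidden=64, horizon=20, conv_dim=1):
        super().__init__()
        if conv_dim == 1:      # convolve along time, DOFs treated as channels
            self.front = nn.Sequential(nn.Conv1d(n_dof, 32, 3, padding=1), nn.ReLU())
        else:                  # treat the (time, DOF) array as a single-channel image
            self.front = nn.Sequential(nn.Conv2d(1, 32, (3, n_dof), padding=(1, 0)), nn.ReLU())
        self.conv_dim = conv_dim
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, horizon * n_dof)
        self.horizon, self.n_dof = horizon, n_dof

    def forward(self, x):                       # x: (batch, time, n_dof)
        if self.conv_dim == 1:
            f = self.front(x.transpose(1, 2)).transpose(1, 2)
        else:
            f = self.front(x.unsqueeze(1)).squeeze(-1).transpose(1, 2)
        h, _ = self.bilstm(f)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time steps
        ctx = (w * h).sum(dim=1)
        return self.out(ctx).view(-1, self.horizon, self.n_dof)

pred = MotionForecaster(conv_dim=1)(torch.randn(8, 200, 6))   # next 20 motion steps
```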
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62203468; in part by the Technological Research and Development Program of China State Railway Group Co., Ltd. under Grant Q2023X011; in part by the Young Elite Scientist Sponsorship Program of the China Association for Science and Technology (CAST) under Grant 2022QNRC001; in part by the Youth Talent Program supported by the China Railway Society; and in part by the Research Program of China Academy of Railway Sciences Corporation Limited under Grant 2023YJ112.
Abstract: Purpose - To optimize train operations, dispatchers currently rely on experience to make quick adjustments when delays occur. However, delay predictions often involve imprecise shifts based on known delay times. Real-time, accurate train delay predictions, facilitated by data-driven neural network models, can significantly reduce dispatcher stress and improve adjustment plans. Leveraging current train operation data, these models enable swift and precise predictions, addressing the challenges posed by train delays in high-speed rail networks during unforeseen events. Design/methodology/approach - This paper proposes CBLA-net, a neural network architecture for predicting late arrival times. It combines a CNN, a Bi-LSTM, and an attention mechanism to extract features, handle time-series data, and enhance information utilization. Trained on operational data from the Beijing-Tianjin line, it predicts the late arrival time of a target train at the next station using multidimensional input data from the target train and preceding trains. Findings - This study evaluates the model's predictive performance using two data approaches: one considering full data and another focusing only on late arrivals. The results show precise and rapid predictions. Training with full data achieves a MAE of approximately 0.54 min and a RMSE of 0.65 min, surpassing the model trained solely on delay data (MAE about 1.02 min, RMSE about 1.52 min). Despite the superior overall performance with full data, the model excels at predicting delays exceeding 15 minutes when trained exclusively on late arrivals. For enhanced adaptability to real-world train operations, training with full data is recommended. Originality/value - This paper introduces a novel neural network model, CBLA-net, for predicting train delay times. It compares and analyzes the model's performance using both the full-data and delay-only data formats, and it evaluates the network's predictive capabilities under different scenarios, providing a comprehensive demonstration of the model's predictive performance.
Abstract: To address the low sensitivity and low accuracy of existing energy consumption models under dynamic workload fluctuations, this paper proposes an intelligent server energy consumption model (IECM) for mobile edge computing based on a convolutional long short-term memory (ConvLSTM) neural network, used to predict and optimize server energy consumption. Server runtime parameters are collected, and the entropy method is used to filter and retain the parameters that significantly affect server energy consumption. Based on the selected parameters, a ConvLSTM network is trained as the deep network of the server energy consumption model. Compared with existing energy consumption models, IECM adapts to dynamic changes in server workloads on CPU-intensive, I/O-intensive, memory-intensive, and mixed tasks, and achieves better accuracy in energy consumption prediction.
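PyTorch has no built-in ConvLSTM layer, so a minimal ConvLSTM cell is sketched below to show how the LSTM gates are computed with convolutions instead of matrix products. The grid shape, channel counts, and the final linear read-out are illustrative and not the IECM implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gates computed with 2D convolutions."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):                  # x: (batch, in_ch, H, W)
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

# Roll the cell over a sequence of server-metric "frames" (shapes are illustrative).
cell = ConvLSTMCell(in_ch=1, hid_ch=16)
seq = torch.randn(8, 10, 1, 8, 8)                 # (batch, time, channels, H, W)
h = torch.zeros(8, 16, 8, 8)
c = torch.zeros(8, 16, 8, 8)
for t in range(seq.size(1)):
    h, c = cell(seq[:, t], (h, c))
# Untrained read-out from the final hidden state to an energy estimate, for illustration.
energy = nn.Linear(16 * 8 * 8, 1)(h.flatten(1))
```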
Abstract: Electrodeposition is one of the leading causes of performance deterioration in electrochemical cells. Predicting electrodeposition growth demands a good understanding of the complex physics involved, which can then be cast into a probabilistic mathematical model. As an alternative, a convolutional long short-term memory (ConvLSTM) architecture-based image analysis approach is presented here. This technique can predict the electrodeposition growth of the electrolytes without prior detailed knowledge of the system. Images of the electrodeposition captured during experiments are used to train and test the model. To assess model accuracy, the expected and predicted images are compared at the pixel level, along with the percentage mean squared error, the absolute percentage error, and the pattern density of the electrodeposit. The randomness of the electrodeposition growth is characterized by the fractal dimension and the interfacial length of the electrodeposits. The trained model's predictions show significant promise, agreeing closely with all of the experimentally obtained parameters. It is expected that this deep learning-based approach to predicting random electrodeposition growth will be of immense help for designing and optimizing the relevant experimental schemes in the near future without performing multiple experiments.
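For the randomness analysis mentioned above, the fractal dimension of a binarized deposit image can be estimated by box counting, as in the hedged sketch below; the threshold, box sizes, and image size are placeholders.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a binary deposit mask."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # Count boxes of side s that contain at least one deposit pixel.
        boxes = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i + s, j:j + s].any():
                    boxes += 1
        counts.append(boxes)
    # Slope of log(count) versus log(1/size) gives the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Illustrative use on a thresholded electrodeposition frame (values are placeholders).
frame = np.random.rand(256, 256)
deposit_mask = frame > 0.7
print(box_counting_dimension(deposit_mask))
```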
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62273106, 62203122, 62320106008, 62373114, 62203123, and 62073086); in part by the Guangdong Basic and Applied Basic Research Foundation (Nos. 2023A1515011480 and 2023A1515011159); and in part by a China Postdoctoral Science Foundation funded project (No. 2022M720840).
Abstract: Multipath signal recognition is crucial to providing high-precision absolute-position services with the BeiDou Navigation Satellite System (BDS). However, most existing approaches rely on supervised machine learning (ML), and moving to unsupervised multipath signal recognition is difficult because of the limitations of signal labeling. Inspired by the powerful unsupervised feature extraction of autoencoders, we propose a new deep learning (DL) model for BDS signal recognition that places a long short-term memory (LSTM) module in series with a convolutional sparse autoencoder to form a new autoencoder structure. First, the LSTM module captures the temporal correlations in long-duration BeiDou satellite time-series signals by mining the temporal change patterns in the series. Second, we develop a convolutional sparse autoencoder that learns a compressed representation of the input data, enabling downscaled, unsupervised feature extraction from long-duration BeiDou satellite signals. Finally, we add an l_(1/2) regularizer to the objective function of the DL model to remove redundant neurons from the network while maintaining recognition accuracy. We tested the proposed approach on a real urban-canyon dataset, and the results show that our algorithm achieves better classification performance than two ML-based methods (e.g., 11% better than a support vector machine) and two existing DL-based methods (e.g., 7.26% better than convolutional neural networks).
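The l_(1/2) regularizer can be illustrated by adding a sum of square roots of absolute weight values to an autoencoder's reconstruction loss, as sketched below; the toy autoencoder, the epsilon term, and the penalty weight are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ConvSparseAE(nn.Module):
    """Toy 1D convolutional autoencoder for a signal feature sequence."""
    def __init__(self, n_ch=1, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_ch, hidden, 5, stride=2, padding=2), nn.ReLU())
        self.decoder = nn.ConvTranspose1d(hidden, n_ch, 5, stride=2, padding=2,
                                          output_padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def l_half_penalty(model, eps=1e-8):
    """l_(1/2) regularizer: sum of sqrt(|w|) over all weights (eps avoids a zero-gradient blow-up)."""
    return sum((p.abs() + eps).sqrt().sum() for p in model.parameters())

model = ConvSparseAE()
x = torch.randn(4, 1, 128)                        # illustrative signal segments
recon = model(x)
loss = nn.functional.mse_loss(recon, x) + 1e-4 * l_half_penalty(model)
loss.backward()
```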
Abstract: Hand gestures are a natural means of human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its wide range of applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction, with the ConvNets for all groups sharing parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset, and produced very competitive results compared with other models. Its robustness has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
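A compact sketch of the segment-and-sample scheme follows: one fused RGB/optical-flow frame per group passes through a ConvNet with shared weights, and the per-group features are fed to an LSTM for classification. The backbone, the two-channel flow representation, and the group and class counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Shared ConvNet over sampled frames + LSTM over the per-group features."""
    def __init__(self, n_groups=8, n_classes=27, feat=128):
        super().__init__()
        # 5 input channels: RGB frame (3) concatenated with a 2-channel optical-flow snapshot.
        self.convnet = nn.Sequential(
            nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat))
        self.lstm = nn.LSTM(feat, feat, batch_first=True)
        self.fc = nn.Linear(feat, n_classes)
        self.n_groups = n_groups

    def forward(self, clips):                     # clips: (batch, n_groups, 5, H, W)
        b, g = clips.shape[:2]
        feats = self.convnet(clips.flatten(0, 1)) # same weights for every frame group
        feats = feats.view(b, g, -1)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])                # classify from the last LSTM state

logits = GestureNet()(torch.randn(2, 8, 5, 112, 112))   # 8 sampled frames per clip
```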
Funding: Supported by a State Grid Zhejiang Electric Power Co., Ltd. Economic and Technical Research Institute project (Key Technologies and Empirical Research of Diversified Integrated Operation of User-Side Energy Storage in the Power Market Environment, No. 5211JY19000W) and by the National Natural Science Foundation of China (Research on Power Market Management to Promote Large-Scale New Energy Consumption, No. 71804045).
Abstract: In the electricity market, real-time price fluctuations are unstable, and changes in short-term load are determined by many factors. By studying the timing of charging and discharging and the economic benefits of energy storage participating in the power market, this paper treats energy storage scheduling as one of the factors affecting the short-term load time series, alongside time-of-use price, holidays, and temperature. A deep learning network is used to predict the short-term load: a convolutional neural network (CNN) extracts the features, and a long short-term memory (LSTM) network learns the temporal characteristics of the load values, which effectively improves prediction accuracy. Taking load data from a certain region as an example, the CNN-LSTM prediction model is compared with a single LSTM prediction model. The experimental results show that the CNN-LSTM deep learning network, with energy storage participating in dispatching, achieves high prediction accuracy for short-term power load forecasting.
Funding: The Major Projects of the National Social Science Fund of China (21&ZD127).
Abstract: A precise and timely forecast of short-term rail transit passenger flow provides data support for traffic management and operation, helping rail operators allocate resources efficiently and relieve pressure on passenger safety and operations in time. First, the passenger flow sequences are decomposed using variational mode decomposition (VMD) for noise reduction. Objective environmental features are then added to the characteristic factors that affect passenger flow, and the target station serves as an additional spatial feature, mined concurrently using the KNN algorithm. By setting up BP, CNN, and LSTM reference experiments, it is shown that the hybrid VMD-CLSTM model achieves higher prediction accuracy. All models' second-order prediction effects are superior to their first-order effects, showing that the residual network can significantly raise model prediction accuracy. The efficacy of the supplementary objective environmental features is also confirmed.
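One plausible reading of the KNN spatial-feature step is sketched below: stations with historical flow profiles closest to the target station's are retrieved and aggregated into an extra input feature. The profile representation, neighbor count, and aggregation are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical data: each row is one station's historical passenger-flow profile.
station_profiles = np.random.rand(50, 96)        # 50 stations x 96 time slots (illustrative)
target_idx = 7

# Find the stations most similar to the target station (including itself).
knn = NearestNeighbors(n_neighbors=4).fit(station_profiles)
_, idx = knn.kneighbors(station_profiles[target_idx : target_idx + 1])
similar_stations = [i for i in idx[0] if i != target_idx][:3]

# Aggregate their flows into an extra spatial feature alongside the target's own sequence.
spatial_feature = station_profiles[similar_stations].mean(axis=0)
```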