Funding: National Natural Science Foundation of China (No. 61806006); Jiangsu University Superior Discipline Construction Project.
Abstract: To address the problems faced by medical image semantic segmentation, namely multi-scale variation of the segmentation targets, noise interference, coarse segmentation results, and slow training, a multi-scale residual aggregation U-shaped attention network, MAAUNet (MultiRes aggregation attention UNet), is proposed based on MultiResUNet. First, aggregate connections are introduced in place of the original same-level feature aggregation: the skip connections are redesigned to aggregate features of different semantic scales in the decoder subnetwork, further reducing the semantic gap that may exist across skip connections. Second, a convolutional block attention module is added after the multi-scale convolution module to focus on and integrate features along both the channel and spatial attention directions, adaptively refining the intermediate feature maps. Finally, the original convolution block is improved: the convolution channels are expanded with a series of chained convolutions whose outputs complement one another and capture richer spatial features, while residual connections are retained, turning the block into a multi-channel convolution block that lets the model extract multi-scale spatial features. Experimental results show that MAAUNet is competitive on challenging datasets and exhibits good segmentation performance and stability when handling multi-scale inputs and noise interference.
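The abstract names two architectural pieces: a convolutional block attention module applied after the multi-scale convolution, and a multi-channel residual convolution block. Below is a minimal PyTorch sketch of both; the class names (`CBAM`, `MultiChannelResBlock`), the channel splits, and the hyper-parameters are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a CBAM-style attention block and a multi-channel residual
# convolution block, assembled in the spirit of the abstract's description.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Channel attention followed by spatial attention on a feature map."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: shared MLP over global average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 convolution over stacked channel-wise avg/max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                    # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))                     # (B, C)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)      # channel re-weighting
        avg_map = x.mean(dim=1, keepdim=True)                 # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)                 # (B, 1, H, W)
        return x * torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))


class MultiChannelResBlock(nn.Module):
    """Chained 3x3 convolutions whose outputs are concatenated, plus a 1x1 residual path,
    followed by CBAM attention (assumes out_ch is at least 16 for the reduction MLP)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        splits = [out_ch // 3, out_ch // 3, out_ch - 2 * (out_ch // 3)]
        self.convs = nn.ModuleList()
        prev = in_ch
        for ch in splits:
            self.convs.append(nn.Sequential(
                nn.Conv2d(prev, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True)))
            prev = ch
        self.residual = nn.Conv2d(in_ch, out_ch, 1)           # residual shortcut
        self.attend = CBAM(out_ch)                            # attention after the block

    def forward(self, x):
        feats, h = [], x
        for conv in self.convs:
            h = conv(h)
            feats.append(h)                                   # progressively larger receptive fields
        out = torch.cat(feats, dim=1) + self.residual(x)
        return self.attend(out)
```

As a quick check, `MultiChannelResBlock(1, 32)(torch.randn(2, 1, 64, 64))` returns a tensor of shape (2, 32, 64, 64); the concatenated series convolutions provide the multi-scale receptive fields while the 1x1 shortcut preserves the residual path.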
Funding: Shanxi Provincial Key Research and Development Program Project Fund (No. 201703D111011).
Abstract: Time series data are widely used in fields such as electricity forecasting, exchange rate forecasting, and solar power generation forecasting, so time series prediction is of great significance. Recently, the encoder-decoder model combined with long short-term memory (LSTM) has been widely used for multivariate time series prediction. However, the encoder can only encode the input into a fixed-length vector, so model performance degrades rapidly as the input or output sequence grows longer. To solve this problem, a combination model named AR_CLSTM is proposed, based on the encoder-decoder structure and linear autoregression. The model uses a time-step-based attention mechanism so that the decoder can adaptively select past hidden states and extract useful information, and then uses a convolution structure to learn the relationships among the different dimensions of a multivariate time series. In addition, AR_CLSTM incorporates a traditional linear autoregressive component to learn the linear part of the time series, further reducing the prediction error of the encoder-decoder structure and improving multivariate time series forecasting. Experiments show that the AR_CLSTM model performs well on different time series prediction tasks, with significant reductions in root mean square error, mean square error, and mean absolute error.
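Below is a minimal PyTorch sketch of the combination the abstract describes: an LSTM encoder with attention over its past hidden states, a 1-D convolution across the input variables, and a parallel linear autoregressive term added to the neural forecast. The class name `ARCLSTM`, the layer sizes, and the autoregressive window are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of an encoder-decoder forecaster with time-step attention plus a
# linear autoregressive head, in the spirit of AR_CLSTM.
import torch
import torch.nn as nn


class ARCLSTM(nn.Module):
    def __init__(self, n_vars: int, hidden: int = 64, ar_window: int = 8, horizon: int = 1):
        super().__init__()
        # 1-D convolution mixes the different input variables at each time step.
        self.conv = nn.Conv1d(n_vars, hidden, kernel_size=3, padding=1)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)                  # scores each past hidden state
        self.decoder = nn.Linear(2 * hidden, horizon * n_vars)
        self.ar = nn.Linear(ar_window, horizon)               # linear AR term, shared across variables
        self.ar_window, self.horizon, self.n_vars = ar_window, horizon, n_vars

    def forward(self, x):                                     # x: (B, T, n_vars)
        b = x.size(0)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)      # (B, T, hidden)
        states, (h_n, _) = self.encoder(h)                    # states: (B, T, hidden)
        query = h_n[-1].unsqueeze(1).expand_as(states)        # last hidden state as query
        scores = self.attn(torch.cat([states, query], dim=-1))           # (B, T, 1)
        context = (torch.softmax(scores, dim=1) * states).sum(dim=1)     # (B, hidden)
        neural = self.decoder(torch.cat([context, h_n[-1]], dim=-1))
        neural = neural.view(b, self.horizon, self.n_vars)
        # Linear autoregression on the most recent ar_window observations of each variable.
        lags = x[:, -self.ar_window:, :].transpose(1, 2)      # (B, n_vars, ar_window)
        linear = self.ar(lags).transpose(1, 2)                # (B, horizon, n_vars)
        return neural + linear                                # combined forecast


# Example: forecast one step ahead for 5 variables from a window of 24 time steps.
model = ARCLSTM(n_vars=5)
y_hat = model(torch.randn(16, 24, 5))                         # -> shape (16, 1, 5)
```

Adding the linear autoregressive output to the neural output lets the linear part of the series be captured directly, which is the role the abstract assigns to the autoregressive component.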