Abstract: Retinal blood vessel segmentation is crucial for diagnosing ocular and cardiovascular diseases. Although the introduction of U-Net by Olaf Ronneberger in 2015 significantly advanced this field, issues such as limited training data, imbalanced data distribution, and inadequate feature extraction persist, hindering both segmentation performance and model generalization. To address these critical issues, DEFFA-Unet is proposed, featuring an additional encoder that processes domain-invariant pre-processed inputs, thereby enabling richer feature encoding and enhanced model generalization. A feature filtering fusion module is developed to ensure precise feature filtering and robust hybrid feature fusion. In response to the task-specific need for higher precision, where false positives are very costly, traditional skip connections are replaced with an attention-guided feature reconstructing fusion module. Additionally, innovative data augmentation and balancing methods are proposed to counter data scarcity and distribution imbalance, further boosting the robustness and generalization of the model. Using a comprehensive suite of evaluation metrics, extensive validation on four benchmark datasets (DRIVE, CHASEDB1, STARE, and HRF) and an SLO dataset (IOSTAR) demonstrates the proposed method's superiority over both baseline and state-of-the-art models. In particular, the proposed method significantly outperforms the compared methods in cross-validation model generalization.
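The abstract does not give the exact formulation of the attention-guided feature reconstructing fusion module. As a rough illustration of how attention-guided skip connections typically work in U-Net variants, the following is a minimal NumPy sketch of an additive attention gate; all weight matrices, shapes, and the gating form are hypothetical, not the paper's actual module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gated_skip(enc, dec, w_e, w_d, w_psi):
    """Additive attention gate over a skip connection.

    enc, dec : (H, W, C) encoder / decoder feature maps
    w_e, w_d : (C, C_int) 1x1-conv weights projecting each branch
    w_psi    : (C_int, 1) weights producing one scalar gate per pixel
    Returns the encoder features re-weighted by the attention map.
    """
    q = np.maximum(enc @ w_e + dec @ w_d, 0.0)   # ReLU of summed projections
    alpha = sigmoid(q @ w_psi)                   # (H, W, 1) gate in (0, 1)
    return enc * alpha                           # suppress irrelevant pixels

rng = np.random.default_rng(0)
H, W, C, C_int = 4, 4, 8, 4
enc = rng.standard_normal((H, W, C))
dec = rng.standard_normal((H, W, C))
out = attention_gated_skip(enc, dec,
                           rng.standard_normal((C, C_int)),
                           rng.standard_normal((C, C_int)),
                           rng.standard_normal((C_int, 1)))
```

Because the gate lies in (0, 1), the encoder features can only be attenuated, never amplified, which is why such gates are used where false positives are costly.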
Abstract: Against the backdrop of the global green energy transition, shale gas has emerged as a critical low-carbon energy resource. However, its production is influenced by high-dimensional, nonlinear, and non-stationary factors, and traditional prediction methods suffer from limited accuracy and high computational complexity. To address these challenges, this paper proposes a deep learning model, RevIN-Autoformer-FECAM, to enhance the accuracy of shale gas production forecasting. The model integrates Reversible Instance Normalization (RevIN) to mitigate non-stationarity in time series, leverages the self-attention mechanism of Autoformer to capture long-term dependencies, and introduces a Frequency Enhanced Channel Attention Mechanism (FECAM) to optimize multi-frequency feature extraction. Experiments conducted on production data from three shale gas wells in the Weihai field demonstrate that RevIN-Autoformer-FECAM significantly outperforms baseline models (e.g., Informer, Transformer) in terms of Mean Squared Error (MSE) and Mean Absolute Error (MAE), and shows particularly stable performance in long-term predictions (24–60 days). The research provides an efficient solution for complex time-series forecasting and holds significant application value for optimizing shale gas development.
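The core of RevIN is simple: each series is normalized by its own instance statistics before entering the model, and the prediction is de-normalized with the same statistics afterwards, which removes much of the non-stationarity the network would otherwise have to learn. A minimal NumPy sketch of this normalize/de-normalize pair (omitting RevIN's learnable affine parameters; the well data below is synthetic):

```python
import numpy as np

def revin_normalize(x, eps=1e-5):
    """Normalize each series (last axis = time) by its own mean and std."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = np.sqrt(x.var(axis=-1, keepdims=True) + eps)
    return (x - mu) / sigma, (mu, sigma)

def revin_denormalize(y, stats):
    """Map model output back to the original scale of each instance."""
    mu, sigma = stats
    return y * sigma + mu

rng = np.random.default_rng(1)
x = 50.0 + 10.0 * rng.standard_normal((3, 64))   # e.g. 3 wells, 64 daily rates
x_norm, stats = revin_normalize(x)
# the transform is invertible: de-normalizing recovers the input exactly
x_back = revin_denormalize(x_norm, stats)
```

In the full method the forecasting backbone (here, Autoformer) sits between the two calls, operating on the normalized series.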
Funding: Supported by the Gansu Natural Science Foundation Programme (No. 24JRRA231), the National Natural Science Foundation of China (No. 62061023), and the Gansu Provincial Education, Science and Technology Innovation and Industry programme (No. 2021CYZC-04).
Abstract: Medical image fusion technology is crucial for improving the detection accuracy and treatment efficiency of diseases, but existing fusion methods suffer from blurred texture details, low contrast, and an inability to fully extract fused-image information. Therefore, a multimodal medical image fusion method based on mask optimization and a parallel attention mechanism is proposed to address these issues. Firstly, the method converts the entire image into a binary mask, constructs a contour feature map to maximize the contour feature information of the image, and builds a triple-path network for extracting and optimizing image texture-detail features. Secondly, a contrast enhancement module and a detail preservation module are proposed to enhance the overall brightness and texture details of the image. Afterwards, a parallel attention mechanism is constructed from channel features and spatial feature changes to fuse the images and enhance the salient information of the fused images. Finally, a decoupling network composed of residual networks is set up to optimize the information between the fused image and the source images so as to reduce information loss in the fused image. Compared with nine advanced methods proposed in recent years, our method improves seven objective evaluation indicators by 6%–31%, indicating that it can obtain fusion results with clearer texture details, higher contrast, and smaller pixel differences between the fused image and the source images. It is superior to the comparison algorithms in both subjective and objective indicators.
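The abstract does not specify how the channel and spatial branches of the parallel attention mechanism are computed or combined. As a hedged illustration of the general pattern (in the spirit of CBAM-style attention, with both branches applied in parallel to the summed features), here is a minimal NumPy sketch; the pooling choices and gating form are assumptions, not the paper's design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_attention_fuse(a, b):
    """Fuse two (H, W, C) feature maps with channel and spatial attention
    computed in parallel on the summed features."""
    x = a + b
    ch_gate = sigmoid(x.mean(axis=(0, 1)))            # (C,)  per-channel weight
    sp_gate = sigmoid(x.mean(axis=2, keepdims=True))  # (H, W, 1) per-pixel weight
    return x * ch_gate * sp_gate                      # apply both gates

rng = np.random.default_rng(2)
a = rng.standard_normal((8, 8, 16))   # e.g. features from a CT image
b = rng.standard_normal((8, 8, 16))   # e.g. features from an MR image
fused = parallel_attention_fuse(a, b)
```

Running the two branches in parallel (rather than sequentially) lets salient channels and salient spatial regions be emphasized independently before the gates are combined.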