Fund: Supported in part by the National Key Research and Development Program of China, No. 2022YFC2404103; in part by the Jiangsu Provincial Key Research and Development Program Social Development Project, No. BE2022720; in part by the Natural Science Foundation of China, No. 62001471; and in part by the Suzhou Science and Technology Plan Project, No. SYG202345.
Abstract: Low-dose computed tomography (LDCT) has gained increasing attention owing to its crucial role in reducing radiation exposure in patients. However, LDCT-reconstructed images often suffer from significant noise and artifacts, negatively impacting radiologists' ability to make accurate diagnoses. To address this issue, many studies have focused on denoising LDCT images using deep learning (DL) methods. However, these DL-based denoising methods are hindered by the highly variable feature distributions of LDCT data from different imaging sources, which degrades the performance of current denoising models. In this study, we propose a parallel processing model, the multi-encoder deep feature transformation network (MDFTN), designed to enhance LDCT imaging performance on multisource data. Unlike traditional network structures, which rely on continual learning to process multitask data, the proposed approach can simultaneously handle LDCT images from various imaging sources within a unified framework. The MDFTN consists of multiple encoders and decoders along with a deep feature transformation module (DFTM). During forward propagation in network training, each encoder extracts diverse features from its respective data source in parallel, and the DFTM compresses these features into a shared feature space. Subsequently, each decoder performs an inverse operation for multisource loss estimation. Through collaborative training, the MDFTN leverages the complementary advantages of the multisource data distributions to enhance its adaptability and generalization. Extensive experiments on two public datasets and one local dataset demonstrate that the proposed network can process multisource data simultaneously while effectively suppressing noise and preserving fine structures. The source code is available at https://github.com/123456789ey/MDFTN.
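The multi-encoder/multi-decoder layout with a shared feature-transformation stage described in the abstract can be sketched compactly. The PyTorch code below is a minimal, illustrative approximation only: the channel widths, block depths, class name MultiSourceDenoiser, and the realization of the DFTM (here a 1x1 convolution projecting each source's features into a shared space) are assumptions for illustration, not the authors' implementation from the MDFTN repository.

import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two 3x3 convolutions with ReLU, used by every encoder and decoder branch.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class MultiSourceDenoiser(nn.Module):
    def __init__(self, num_sources=2, feats=64, shared=64):
        super().__init__()
        # One encoder and one decoder per imaging source (processed in parallel).
        self.encoders = nn.ModuleList([conv_block(1, feats) for _ in range(num_sources)])
        self.decoders = nn.ModuleList([
            nn.Sequential(conv_block(shared, feats), nn.Conv2d(feats, 1, 1))
            for _ in range(num_sources)
        ])
        # Stand-in for the deep feature transformation module (DFTM):
        # compresses per-source features into a shared feature space.
        self.dftm = nn.Sequential(nn.Conv2d(feats, shared, 1), nn.ReLU(inplace=True))

    def forward(self, images_per_source):
        # images_per_source: list of tensors, one mini-batch per imaging source.
        outputs = []
        for enc, dec, x in zip(self.encoders, self.decoders, images_per_source):
            z = self.dftm(enc(x))       # source-specific features -> shared space
            outputs.append(x + dec(z))  # residual prediction of the denoised image
        return outputs

if __name__ == "__main__":
    model = MultiSourceDenoiser(num_sources=2)
    ldct_a = torch.randn(4, 1, 64, 64)  # mini-batch from imaging source A
    ldct_b = torch.randn(4, 1, 64, 64)  # mini-batch from imaging source B
    den_a, den_b = model([ldct_a, ldct_b])
    print(den_a.shape, den_b.shape)     # torch.Size([4, 1, 64, 64]) twice

In a collaborative-training loop of this kind, each decoder's output would be compared against the normal-dose reference from its own source and the per-source losses summed, mirroring the multisource loss estimation described above.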
Fund: Supported by the National Key Research and Development Program of China (No. 2019YFA0707201) and the Fund of the Institute of Scientific and Technical Information of China (No. ZD2021-17).
Abstract: Influenced by their training corpora, different machine translation systems vary greatly in performance. To achieve higher-quality translations, system combination methods combine the translation results of multiple systems through statistical combination or neural network combination. This paper proposes a new multi-system translation combination method based on the Transformer architecture, which uses a multi-encoder to encode the source sentence and the translation result of each system in order to realize both encoder combination and decoder combination. Experimental verification on the Chinese-English translation task shows that this method gains 1.2-2.35 bilingual evaluation understudy (BLEU) points over the best single-system results, 0.71-3.12 BLEU points over the statistical combination method, and 0.14-0.62 BLEU points over the state-of-the-art neural network combination method. These results demonstrate the effectiveness of the proposed Transformer-based system combination method.
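To make the multi-encoder combination idea concrete, the PyTorch sketch below encodes the source sentence and each system's hypothesis with separate Transformer encoders and lets a single decoder attend to their concatenated memories. The class name MultiEncoderCombiner, the hyperparameters, and the concatenation strategy for combining encoder outputs are illustrative assumptions and may differ from the paper's exact encoder- and decoder-combination mechanisms.

import torch
import torch.nn as nn

class MultiEncoderCombiner(nn.Module):
    # One Transformer encoder for the source sentence plus one per candidate
    # system; a single decoder attends to the concatenated encoder memories.
    def __init__(self, vocab=32000, d_model=512, nhead=8, layers=2, num_systems=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        def make_encoder():
            return nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), layers)
        self.encoders = nn.ModuleList([make_encoder() for _ in range(1 + num_systems)])
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), layers)
        self.project = nn.Linear(d_model, vocab)

    def forward(self, source, hypotheses, target_in):
        # source: (B, S); hypotheses: list of (B, H_i); target_in: (B, T)
        memories = [enc(self.embed(ids))
                    for enc, ids in zip(self.encoders, [source] + hypotheses)]
        memory = torch.cat(memories, dim=1)  # encoder combination along the sequence axis
        tgt = self.embed(target_in)
        t = target_in.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.project(out)             # (B, T, vocab) logits

if __name__ == "__main__":
    model = MultiEncoderCombiner(num_systems=2)
    src = torch.randint(0, 32000, (2, 20))        # source sentences
    hyps = [torch.randint(0, 32000, (2, 22)),     # hypotheses from system 1
            torch.randint(0, 32000, (2, 18))]     # hypotheses from system 2
    tgt_in = torch.randint(0, 32000, (2, 21))     # shifted target tokens
    print(model(src, hyps, tgt_in).shape)         # torch.Size([2, 21, 32000])

Concatenating memories along the sequence axis is only one way to realize encoder combination; weighted or gated attention over the per-system memories is another common choice.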