Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed images from labeled and unlabeled input loses a lot of labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. More precisely, SADT trains the Student Network with pseudo-label-based training from Teacher Network 1 and supervised learning on labeled data, which prevents the loss of scarce labeled information. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation in low-contrast and local areas. In this procedure, the features extracted by the Student Network are subjected to a random feature perturbation technique. Extensive experiments on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
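As a rough illustration of the bi-directional copy-paste mixing with entropy filtering described above, the sketch below swaps a rectangular patch between a labeled and an unlabeled image and masks out pixels whose pseudo-label entropy is too high. Shapes, the threshold, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of bi-directional copy-paste with an entropy-based filter.
# Box coordinates, threshold tau, and array shapes are illustrative only.
import numpy as np

def entropy(prob):
    """Per-pixel entropy of a softmax map of shape (C, H, W)."""
    return -np.sum(prob * np.log(prob + 1e-8), axis=0)

def bidirectional_copy_paste(labeled_img, unlabeled_img, pseudo_prob,
                             box=(64, 64, 128, 128), tau=0.5):
    """Swap a rectangular patch in both directions; mark pixels whose
    pseudo-label entropy exceeds tau so they do not supervise training."""
    y0, x0, y1, x1 = box
    mixed_lu = labeled_img.copy()                    # labeled background
    mixed_ul = unlabeled_img.copy()                  # unlabeled background
    mixed_lu[..., y0:y1, x0:x1] = unlabeled_img[..., y0:y1, x0:x1]
    mixed_ul[..., y0:y1, x0:x1] = labeled_img[..., y0:y1, x0:x1]
    ent = entropy(pseudo_prob)                       # (H, W)
    valid = ent < tau                                # True where pseudo-label is trusted
    return mixed_lu, mixed_ul, valid

img_l = np.random.rand(1, 256, 256)
img_u = np.random.rand(1, 256, 256)
prob_u = np.random.dirichlet(np.ones(4), size=(256, 256)).transpose(2, 0, 1)
m1, m2, mask = bidirectional_copy_paste(img_l, img_u, prob_u)
```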
Lower back pain is one of the most common medical problems in the world, experienced by a large proportion of people everywhere. Due to its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging is considered the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases. It is difficult to semantically separate the vertebrae in Magnetic Resonance Images from the wide variety of surrounding tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for handling the challenges of medical image analysis tasks and achieves high segmentation accuracy. This work proposes a modified U-Net architecture, MU-Net, consisting of a Meijering convolutional layer that incorporates the Meijering filter to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae achieves strong performance, with 98.79% pixel accuracy (PA), 98.66% dice similarity coefficient (DSC), 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU) on the mentioned dataset.
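For readers unfamiliar with the Meijering filter mentioned above, it is a ridge/neuriteness filter available in scikit-image. The hedged sketch below only shows how a Meijering response could be computed and stacked with the raw slice as a two-channel input; the paper's "Meijering convolutional layer" may integrate the filter differently.

```python
# Hedged sketch: compute a Meijering ridge response and stack it with the raw
# slice as a two-channel network input. Sigma values are illustrative.
import numpy as np
from skimage.filters import meijering

def meijering_augmented_input(mri_slice, sigmas=(1, 2, 3)):
    """mri_slice: 2-D float array in [0, 1]; returns a (2, H, W) input array."""
    ridge = meijering(mri_slice, sigmas=sigmas, black_ridges=False)
    return np.stack([mri_slice, ridge], axis=0)

slice_ = np.random.rand(320, 320).astype(np.float32)
x = meijering_augmented_input(slice_)
print(x.shape)  # (2, 320, 320)
```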
Brain tumor segmentation is critical in clinical diagnosis and treatment planning. Existing methods for brain tumor segmentation with missing modalities often struggle when dealing with multiple missing modalities, a common scenario in real-world clinical settings. These methods primarily focus on handling a single missing modality at a time, making them insufficiently robust for the additional complexity of incomplete data containing various missing-modality combinations. Additionally, most existing methods rely on single models, which may limit their performance and increase the risk of overfitting the training data. This work proposes a novel method called the ensemble adversarial co-training neural network (EACNet) for accurate brain tumor segmentation from multi-modal magnetic resonance imaging (MRI) scans with multiple missing modalities. The proposed method consists of three key modules: an ensemble of pre-trained models, which captures diverse feature representations from the MRI data; adversarial learning, which leverages a competitive training approach in which a generator model creates realistic missing data while sub-networks acting as discriminators learn to distinguish real data from the generated "fake" data; and a co-training framework, which utilizes the information extracted by the multimodal path (trained on complete scans) to guide the learning process in the path handling missing modalities. The model potentially compensates for missing information through co-training interactions by exploiting the relationships between the available modalities and the tumor segmentation task. EACNet was evaluated on the BraTS2018 and BraTS2020 challenge datasets and achieved state-of-the-art and competitive performance, respectively. Notably, the whole tumor (WT) dice similarity coefficient (DSC) reached 89.27%, surpassing the performance of existing methods. The analysis suggests that the ensemble approach offers potential benefits, and the adversarial co-training contributes to the increased robustness and accuracy of EACNet for brain tumor segmentation of MRI scans with missing modalities. The experimental results show that EACNet delivers promising results for this task and is a strong candidate for real-world clinical applications.
Medical image segmentation has become a cornerstone for many healthcare applications, allowing for the automated extraction of critical information from images such as Computed Tomography (CT) scans, Magnetic Resonance Images (MRIs), and X-rays. The introduction of U-Net in 2015 has significantly advanced segmentation capabilities, especially for small datasets commonly found in medical imaging. Since then, various modifications to the original U-Net architecture have been proposed to enhance segmentation accuracy and tackle challenges like class imbalance, data scarcity, and multi-modal image processing. This paper provides a detailed review and comparison of several U-Net-based architectures, focusing on their effectiveness in medical image segmentation tasks. We evaluate performance metrics such as Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) across different U-Net variants, including HmsU-Net, CrossU-Net, mResU-Net, and others. Our results indicate that architectural enhancements such as transformers, attention mechanisms, and residual connections improve segmentation performance across diverse medical imaging applications, including tumor detection, organ segmentation, and lesion identification. The study also identifies current challenges in the field, including data variability, limited dataset sizes, and issues with class imbalance. Based on these findings, the paper suggests potential future directions for improving the robustness and clinical applicability of U-Net-based models in medical image segmentation.
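Since DSC and IoU are the two overlap metrics used throughout this comparison, a minimal sketch of both for binary masks is shown below; it is a generic reference implementation, not tied to any particular U-Net variant in the review.

```python
# Minimal sketch of the Dice Similarity Coefficient and IoU for binary masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

p = np.zeros((64, 64), bool); p[10:40, 10:40] = True
t = np.zeros((64, 64), bool); t[15:45, 15:45] = True
print(dice(p, t), iou(p, t))
```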
Laser speckle contrast imaging (LSCI) is a noninvasive, label-free technique that allows real-time investigation of the microcirculation of biological tissue. High-quality microvascular segmentation is critical for analyzing and evaluating vascular morphology and blood flow dynamics. However, achieving high-quality vessel segmentation has always been a challenge due to the cost and complexity of label acquisition and the irregular vascular morphology. In addition, supervised learning methods rely heavily on high-quality labels for accurate segmentation results, which often necessitates extensive labeling effort. Here, we propose a novel approach, LSWDP, for high-performance real-time vessel segmentation that utilizes low-quality pseudo-labels for non-matched training, without relying on a large number of intricate labels and image pairing. Furthermore, we demonstrate that our method is more robust and effective in mitigating performance degradation than traditional segmentation approaches on datasets with diverse styles, even when confronted with unfamiliar data. Importantly, the dice similarity coefficient exceeded 85% in a rat experiment. Our study has the potential to efficiently segment and evaluate blood vessels in both normal and diseased conditions, which would greatly benefit future research in the life sciences and medicine.
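For context, the quantity that LSCI vessel segmentation typically operates on is the spatial speckle contrast, conventionally defined as the local ratio of intensity standard deviation to mean. The sketch below computes that contrast map in a sliding window; it illustrates the standard LSCI quantity only and is not part of the LSWDP pipeline.

```python
# Hedged sketch of spatial speckle contrast K = sigma / mean in a sliding
# window; window size is an illustrative choice.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw * raw, size=window)
    var = np.clip(mean_sq - mean * mean, 0, None)
    return np.sqrt(var) / (mean + 1e-8)

frame = np.random.rand(512, 512).astype(np.float32)
K = speckle_contrast(frame)
```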
Segmenting a breast ultrasound image is still challenging due to the presence of speckle noise, dependency on the operator, and variation in image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions to preserve the fine anatomical details needed for delineating lesion boundaries; (2) a computationally efficient regional attention mechanism that operates on high-resolution features without the memory overhead of a transformer; and (3) an adaptive feature fusion strategy that adjusts local and global features according to the image. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset, it obtains a precision of 0.915, a recall of 0.908, and an F1 score of 0.911; on the UDAIT dataset, it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements are achieved at clinically feasible computation times, taking 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs particularly well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891. This is a substantial improvement over traditional methods, which struggle with small-scale features and achieve only 0.63–0.71 accuracy. This improvement in small lesion detection is particularly crucial for early-stage breast cancer identification. These results demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
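To make the idea of a memory-efficient regional attention mechanism concrete, the hedged PyTorch sketch below pools a feature map into coarse regions, attends over the region descriptors, and broadcasts the result back with a residual connection. The layer sizes and pooling choice are assumptions for illustration, not UltraSegNet's exact design.

```python
# Hedged sketch of a lightweight regional attention block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionalAttention(nn.Module):
    def __init__(self, channels, regions=8):
        super().__init__()
        self.regions = regions
        self.qkv = nn.Linear(channels, channels * 3)
        self.proj = nn.Linear(channels, channels)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        r = F.adaptive_avg_pool2d(x, self.regions)          # (B, C, R, R)
        tokens = r.flatten(2).transpose(1, 2)               # (B, R*R, C)
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / c ** 0.5, dim=-1)
        out = self.proj(attn @ v).transpose(1, 2).reshape(b, c, self.regions, self.regions)
        out = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
        return x + out                                      # residual connection

feat = torch.randn(2, 64, 56, 56)
print(RegionalAttention(64)(feat).shape)  # torch.Size([2, 64, 56, 56])
```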
The self-attention mechanism of Transformers, which captures long-range contextual information, has demonstrated significant potential in image segmentation. However, their ability to learn local, contextual relationships between pixels requires further improvement. Previous methods face challenges in efficiently managing multi-scale features of different granularities from the encoder backbone, leaving room for improvement in their global representation and feature extraction capabilities. To address these challenges, we propose a novel Decoder with Multi-Head Feature Receptors (DMHFR), which receives multi-scale features from the encoder backbone and organizes them into three feature groups of different granularities: coarse, fine-grained, and the full set. These groups are subsequently processed by Multi-Head Feature Receptors (MHFRs) after feature capture and modeling operations. The MHFRs include two Three-Head Feature Receptors (THFRs) and one Four-Head Feature Receptor (FHFR). Each group of features is passed through these MHFRs and then fed into axial transformers, which help the model capture long-range dependencies within the features. The three MHFRs produce three distinct feature outputs. The output from the FHFR serves as auxiliary features in the prediction head, and the prediction outputs and their losses are eventually aggregated. Experimental results show that the Transformer using DMHFR outperforms 15 state-of-the-art (SOTA) methods on five public datasets. Specifically, it achieves significant improvements in mean DICE scores over the classic Parallel Reverse Attention Network (PraNet), with gains of 4.1%, 2.2%, 1.4%, 8.9%, and 16.3% on the CVC-ClinicDB, Kvasir-SEG, CVC-T, CVC-ColonDB, and ETIS-LaribPolypDB datasets, respectively.
In the realm of medical image segmentation, particularly in cardiac magnetic resonance imaging (MRI), achieving robust performance with limited annotated data is a significant challenge. Performance often degrades when faced with testing scenarios from unknown domains. To address this problem, this paper proposes a novel semi-supervised approach for cardiac magnetic resonance image segmentation, aiming to enhance predictive capability and domain generalization (DG). The paper establishes an MT-like model using pseudo-labeling and consistency regularization from semi-supervised learning, and integrates uncertainty estimation to improve the accuracy of pseudo-labels. Additionally, to tackle the challenge of domain generalization, a data manipulation strategy is introduced, extracting spatial and content-related information from images across different domains and enriching the dataset with a multi-domain perspective. The method is meticulously evaluated on the publicly available cardiac magnetic resonance imaging dataset M&Ms, validating its effectiveness. Comparative analyses against various methods highlight the outstanding performance of the proposed approach, demonstrating its capability to segment cardiac magnetic resonance images in previously unseen domains even with limited annotated data.
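The core MT-like mechanics described above, an exponential-moving-average teacher plus an uncertainty-gated consistency loss, can be sketched as follows. The EMA decay, entropy threshold, and masking rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an MT-like update: the teacher is an EMA of the student, and
# the consistency loss is masked where teacher predictions are uncertain.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1.0 - alpha)

def masked_consistency_loss(student_logits, teacher_logits, ent_thresh=0.5):
    t_prob = teacher_logits.softmax(dim=1)
    entropy = -(t_prob * (t_prob + 1e-8).log()).sum(dim=1)      # (B, H, W)
    mask = (entropy < ent_thresh).float()                        # trust low-entropy pixels
    per_pixel = F.mse_loss(student_logits.softmax(dim=1), t_prob,
                           reduction="none").mean(dim=1)
    return (per_pixel * mask).sum() / (mask.sum() + 1e-8)

student = torch.nn.Conv2d(1, 4, 3, padding=1)
teacher = torch.nn.Conv2d(1, 4, 3, padding=1)
teacher.load_state_dict(student.state_dict())
x = torch.randn(2, 1, 32, 32)
loss = masked_consistency_loss(student(x), teacher(x))
ema_update(teacher, student)
```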
Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters and is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel depends to a large extent on the captain's maneuvering skill and experience, and a ship may get stuck if steered into ice fields off the channel. Under these circumstances, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the method using the corner point regression network in the second stage achieves an accuracy as high as 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach up to 14.58 frames per second.
With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the perspective of semantic communication, not all pixels in an image are equally important to a given receiver. Existing semantic communication systems perform semantic encoding and decoding directly on the whole image, so the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm is used to classify each pixel of the image and separate ROI from RONI. The system also enables high-quality transmission of ROI with lower communication overhead by transmitting the regions through different semantic communication networks with different bandwidth requirements. An improved metric, θPSNR, is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement compared with existing approaches, namely existing semantic communication approaches and the conventional approach without semantics.
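The abstract does not define θPSNR, so the sketch below shows only one plausible ROI-weighted variant of PSNR as a point of reference; the weighting scheme is an assumption and should not be read as the paper's metric.

```python
# Hedged sketch of an ROI-weighted PSNR (an assumed stand-in, not θPSNR itself).
import numpy as np

def roi_weighted_psnr(ref, rec, roi_mask, w_roi=0.8, max_val=255.0):
    """Weighted MSE: ROI pixels get weight w_roi, remaining pixels 1 - w_roi."""
    weights = np.where(roi_mask, w_roi, 1.0 - w_roi)
    diff = ref.astype(np.float64) - rec.astype(np.float64)
    mse = np.sum(weights * diff ** 2) / np.sum(weights)
    return 10.0 * np.log10(max_val ** 2 / (mse + 1e-12))

ref = np.random.randint(0, 256, (128, 128))
rec = np.clip(ref + np.random.randint(-5, 6, ref.shape), 0, 255)
roi = np.zeros((128, 128), bool); roi[32:96, 32:96] = True
print(roi_weighted_psnr(ref, rec, roi))
```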
To enhance the diversity and distribution uniformity of the initial population, as well as to avoid local extrema in the Chimp Optimization Algorithm (CHOA), this paper improves the CHOA using chaos initialization and Cauchy mutation. First, Sin chaos is introduced to improve the random population initialization scheme of the CHOA, which not only guarantees the diversity of the population but also enhances the distribution uniformity of the initial population. Next, Cauchy mutation is added to improve the global search ability of the CHOA during position (threshold) updating, preventing the CHOA from falling into local optima. Finally, an improved CHOA, CICMCHOA, is formed through the combination of chaos initialization and Cauchy mutation. Taking fuzzy Kapur entropy as the objective function, this paper applies CICMCHOA to natural and medical image segmentation and compares it with four algorithms, including the improved Satin Bowerbird optimizer (ISBO), Cuckoo Search (ICS), and others. The experimental results, based on visual and quantitative indicators, demonstrate that CICMCHOA delivers superior segmentation results in image segmentation.
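The two ingredients named above, chaotic initialization and Cauchy mutation of candidate thresholds, can be sketched as below. The exact chaotic map and mutation schedule used by CICMCHOA are not given in the abstract, so the sine-map form and the scale parameter here are assumptions.

```python
# Hedged sketch of sine-chaos initialization and Cauchy mutation for threshold
# vectors in [0, 255]; map form and parameters are illustrative assumptions.
import numpy as np

def sine_chaos_init(pop_size, dim, lo=0.0, hi=255.0, mu=0.99):
    x = np.random.uniform(0.05, 0.95, size=(pop_size, dim))
    for _ in range(50):                      # iterate the map to spread samples
        x = np.abs(mu * np.sin(np.pi * x))   # keep the sequence in (0, 1)
    return lo + (hi - lo) * x

def cauchy_mutation(position, scale=10.0, lo=0.0, hi=255.0):
    """Perturb a candidate threshold vector with heavy-tailed Cauchy noise."""
    mutated = position + scale * np.random.standard_cauchy(size=position.shape)
    return np.clip(mutated, lo, hi)

pop = sine_chaos_init(pop_size=30, dim=4)    # e.g. 4 thresholds per chimp
trial = cauchy_mutation(pop[0])
```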
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint on the active learning acquisition function, which determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve a given accuracy level compared with randomly selected labeled pixels.
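A minimal sketch of the graph local-maximum batch selection rule follows: an unlabeled node joins the batch only if its acquisition value is at least as large as that of every graph neighbor. The random acquisition values stand in for a real acquisition function such as Model Change; the adjacency construction is illustrative.

```python
# Hedged sketch of graph-local-maximum batch selection for active learning.
import numpy as np

def local_max_batch(adjacency, acquisition, unlabeled, batch_size):
    """adjacency: (N, N) boolean; acquisition: (N,); unlabeled: index array."""
    selected = []
    for i in unlabeled:
        neighbors = np.where(adjacency[i])[0]
        if neighbors.size == 0 or acquisition[i] >= acquisition[neighbors].max():
            selected.append(i)
    selected.sort(key=lambda i: -acquisition[i])          # highest value first
    return selected[:batch_size]

rng = np.random.default_rng(0)
A = rng.random((100, 100)) < 0.05
A = np.logical_or(A, A.T)
np.fill_diagonal(A, False)                                # symmetric, no self-loops
acq = rng.random(100)                                     # stand-in acquisition values
batch = local_max_batch(A, acq, unlabeled=np.arange(100), batch_size=8)
```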
The growing demand for energy-efficient solutions has led to increased interest in analyzing building facades, as buildings contribute significantly to energy consumption in urban environments. However, conventional image segmentation methods often struggle to capture fine details such as edges and contours, limiting their effectiveness in identifying areas prone to energy loss. To address this challenge, we propose a novel segmentation methodology that combines object-wise processing with a two-stage deep learning model, Cascade U-Net. Object-wise processing isolates components of the facade, such as walls and windows, for independent analysis, while Cascade U-Net incorporates contour information to enhance segmentation accuracy. The methodology involves four steps: object isolation, which crops and adjusts the image based on bounding boxes; contour extraction, which derives contours; image segmentation, which modifies and reuses contours as guide data in Cascade U-Net to segment areas; and segmentation synthesis, which integrates the results obtained for each object to produce the final segmentation map. Applied to a dataset of Korean building images, the proposed method significantly outperformed traditional models, demonstrating improved accuracy and the ability to preserve critical structural details. Furthermore, we applied this approach to classify window thermal loss in real-world scenarios using infrared images, showing its potential to identify windows vulnerable to energy loss. Notably, our Cascade U-Net, which builds upon the relatively lightweight U-Net architecture, also exhibited strong performance, reinforcing the practical value of this method. Our approach offers a practical solution for enhancing energy efficiency in buildings by providing more precise segmentation results.
With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large volume of sonar image data, so it is essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information without semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an Auto-Encoder-based sonar image compression network, which is measured by the residual of a semantic segmentation network. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the range of receptive fields. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet-loss resistance of transmission.
Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. The complex characteristics of the lesions make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions, where the MSDA module can fuse features at different scales and adjust the attention field of each element dynamically to generate discriminative multi-scale features. Second, a cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions, where the CDASC module can utilise the spatial details from encoder features to refine the spatial information of decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines image processing filters to predict parameters like exposure and hue, optimizing image quality. We adopt a novel image encoder to enhance parameter prediction accuracy by enabling Edip to handle features at different scales. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding parameter prediction optimization, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for addressing challenges in nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and Nightcity datasets.
Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Due to the lack of sufficient medical images, it is challenging to achieve accurate segmentation, especially with deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal computation resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine-tuning, open fine-tuning, conservative fine-tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed using T2-w MR as an intensity-based augmentation technique. The novelty of this work lies in the hybrid transfer learning approach, which overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with DSCs of 0.5393±0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps improve T2-w MR segmentation and enables the development of a multi-sequence segmentation model.
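Three of the transfer mechanisms named above can be expressed as parameter-freezing policies on a pretrained network, as in the hedged sketch below. The meaning attached to each policy and the encoder/decoder layer split are assumptions for illustration; the hybrid scheme's details are not reproduced.

```python
# Hedged sketch of freezing policies for transfer from T1-w to T2-w training.
import torch.nn as nn

def apply_transfer_policy(model: nn.Module, policy: str):
    """Freeze or unfreeze pretrained parameters according to the named policy."""
    for name, param in model.named_parameters():
        if policy == "no_finetune":            # keep all pretrained weights fixed
            param.requires_grad = False
        elif policy == "open":                 # fine-tune every layer on T2-w data
            param.requires_grad = True
        elif policy == "conservative":         # fine-tune only the output layers
            param.requires_grad = name.startswith(("decoder", "out"))

model = nn.Sequential()
model.add_module("encoder", nn.Conv2d(1, 16, 3, padding=1))
model.add_module("decoder", nn.Conv2d(16, 16, 3, padding=1))
model.add_module("out", nn.Conv2d(16, 2, 1))
apply_transfer_policy(model, "conservative")
```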
In the present research, we describe a computer-aided detection (CAD) method for automatic fetal head circumference (HC) measurement in 2D ultrasound images across all trimesters of pregnancy. The HC can be used to estimate gestational age and track fetal development. This automated approach is particularly valuable in low-resource settings where access to trained sonographers is limited. The CAD system consists of two steps: first, Haar-like features are extracted from the ultrasound images to train a random forest classifier to locate the fetal skull; the HC is then identified using dynamic programming, an elliptical fit, and a Hough transform. The CAD system was trained on 999 images (the HC18 challenge data source) and verified on 335 images from all trimesters in an independent test set. An experienced sonographer and a medical expert manually annotated the test set. We used the crown-rump length (CRL) measurement to calculate the reference gestational age (GA). In the first, second, and third trimesters, the median difference between the reference GA and the GA estimated by the experienced sonographer was 0.7±2.7, 0.0±4.5, and 2.0±12.0 days, respectively; the corresponding differences for the medical expert were 1.5±3.0, 1.9±5.0, and 4.0±14 days. The mean difference between the reference GA and the CAD system's GA was between 0.5 and 5.0 days, with variation of 2.9 to 12.5 days. The results show that the CAD system outperforms an expert sonographer. Compared with results reported in the literature, the presented system achieves comparable or even better performance. We have assessed this computerized approach for HC evaluation using data from all trimesters of gestation.
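Only the final measurement step is easy to illustrate in isolation: fitting an ellipse to candidate skull-edge points and converting its axes to a circumference with Ramanujan's approximation. The sketch below uses OpenCV's fitEllipse; the Haar-feature/random-forest detection and Hough stages are not shown, and the pixel size is an assumed parameter.

```python
# Hedged sketch of the ellipse-fit step for HC estimation (final stage only).
import numpy as np
import cv2

def ellipse_circumference_mm(edge_points, pixel_size_mm=0.1):
    """edge_points: (N, 2) float32 array of (x, y) pixel coordinates, N >= 5."""
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(edge_points.reshape(-1, 1, 2))
    a, b = d1 / 2.0 * pixel_size_mm, d2 / 2.0 * pixel_size_mm   # semi-axes in mm
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))  # Ramanujan

theta = np.linspace(0, 2 * np.pi, 60, dtype=np.float32)
pts = np.stack([300 + 150 * np.cos(theta), 250 + 110 * np.sin(theta)], axis=1).astype(np.float32)
print(ellipse_circumference_mm(pts))  # head circumference estimate in mm
```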
Computed Tomography (CT) is a commonly used technology in non-destructive testing of Printed Circuit Boards (PCB), and element segmentation of CT images is a key subsequent step. With the development of deep learning, researchers have begun to exploit the "pre-training and fine-tuning" training process for multi-element segmentation, reducing the time spent on manual annotation. However, existing element segmentation models focus only on the overall accuracy at the pixel level, ignoring whether the element connectivity relationships are correctly identified. To this end, this paper proposes a PCB CT image element segmentation model that optimizes the semantic perception of connectivity relationships (OSPC-seg). The overall training process adopts a "pre-training and fine-tuning" scheme. A loss function that optimizes the semantic perception of circuit connectivity relationships (OSPC Loss) is designed to alleviate the class imbalance problem and improve the correct connectivity rate. A correct connectivity rate index (CCR) is also proposed to evaluate the model's ability to recognize connectivity relationships. Experiments show that the mIoU and CCR of OSPC-seg on our dataset reach 90.1% and 97.0%, improvements of 1.5% and 1.6%, respectively, over the baseline model. The visualization results show that segmentation performance at connection positions is significantly improved, which also demonstrates the effectiveness of OSPC-seg.
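One simple way to check circuit connectivity in a predicted mask is connected-component labeling: two trace points count as connected if they fall in the same component. The sketch below computes a pairwise connectivity-agreement rate in that spirit; the paper's exact CCR definition is not given in the abstract, so this rule is an assumption.

```python
# Hedged sketch of a connectivity-agreement rate via connected-component labeling.
import numpy as np
from scipy.ndimage import label

def same_component(mask, p1, p2):
    labels, _ = label(mask)                      # 4-connected component labeling
    return labels[p1] != 0 and labels[p1] == labels[p2]

def correct_connectivity_rate(pred_mask, gt_mask, point_pairs):
    """Fraction of point pairs whose connected/disconnected status matches GT."""
    agree = sum(same_component(pred_mask, a, b) == same_component(gt_mask, a, b)
                for a, b in point_pairs)
    return agree / len(point_pairs)

gt = np.zeros((64, 64), np.uint8); gt[30, 5:60] = 1        # one unbroken trace
pred = gt.copy(); pred[30, 32] = 0                          # a break in the trace
pairs = [((30, 10), (30, 55))]
print(correct_connectivity_rate(pred, gt, pairs))           # 0.0 for this pair
```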
Deep learning has been extensively applied to medical image segmentation, resulting in significant advances in deep neural networks for this task since the notable success of U-Net in 2015. However, applying deep learning models to ocular medical image segmentation poses unique challenges compared with other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article provides a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation to ocular imaging. First, the article gives an overview of medical imaging, data processing, and performance evaluation metrics. It then analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.
基金supported by the Natural Science Foundation of China(No.41804112,author:Chengyun Song).
文摘Existing semi-supervisedmedical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch.However,current copy-paste methods have three limitations:(1)training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information;(2)low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data;(3)the segmentation performance in low-contrast and local regions is less than optimal.We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy(SADT),which enhances feature diversity and learns high-quality features to overcome these problems.To be more precise,SADT trains the Student Network by using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data,which prevents the loss of rare labeled data.We introduce a bi-directional copy-pastemask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision.For the mixed images,Deep-Shallow Spatial Contrastive Learning(DSSCL)is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capabilities in low-contrast and local areas.In this procedure,the features retrieved by the Student Network are subjected to a random feature perturbation technique.On two openly available datasets,extensive trials show that our proposed SADT performs much better than the state-ofthe-art semi-supervised medical segmentation techniques.Using only 10%of the labeled data for training,SADT was able to acquire a Dice score of 90.10%on the ACDC(Automatic Cardiac Diagnosis Challenge)dataset.
文摘Lower back pain is one of the most common medical problems in the world and it is experienced by a huge percentage of people everywhere.Due to its ability to produce a detailed view of the soft tissues,including the spinal cord,nerves,intervertebral discs,and vertebrae,Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine.The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases.It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues,including muscles,ligaments,and intervertebral discs.U-Net is a powerful deep-learning architecture to handle the challenges of medical image analysis tasks and achieves high segmentation accuracy.This work proposes a modified U-Net architecture namely MU-Net,consisting of the Meijering convolutional layer that incorporates the Meijering filter to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1.Pseudo-colour mask images were generated and used as ground truth for training the model.The work has been carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset publicly available from Mendeley Data.The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae gives better performance with 98.79%of pixel accuracy(PA),98.66%of dice similarity coefficient(DSC),97.36%of Jaccard coefficient,and 92.55%mean Intersection over Union(mean IoU)metrics using the mentioned dataset.
基金supported by Gansu Natural Science Foundation Programme(No.24JRRA231)National Natural Science Foundation of China(No.62061023)Gansu Provincial Education,Science and Technology Innovation and Industry(No.2021CYZC-04)。
文摘Brain tumor segmentation is critical in clinical diagnosis and treatment planning.Existing methods for brain tumor segmentation with missing modalities often struggle when dealing with multiple missing modalities,a common scenario in real-world clinical settings.These methods primarily focus on handling a single missing modality at a time,making them insufficiently robust for the additional complexity encountered with incomplete data containing various missing modality combinations.Additionally,most existing methods rely on single models,which may limit their performance and increase the risk of overfitting the training data.This work proposes a novel method called the ensemble adversarial co-training neural network(EACNet)for accurate brain tumor segmentation from multi-modal magnetic resonance imaging(MRI)scans with multiple missing modalities.The proposed method consists of three key modules:the ensemble of pre-trained models,which captures diverse feature representations from the MRI data by employing an ensemble of pre-trained models;adversarial learning,which leverages a competitive training approach involving two models;a generator model,which creates realistic missing data,while sub-networks acting as discriminators learn to distinguish real data from the generated“fake”data.Co-training framework utilizes the information extracted by the multimodal path(trained on complete scans)to guide the learning process in the path handling missing modalities.The model potentially compensates for missing information through co-training interactions by exploiting the relationships between available modalities and the tumor segmentation task.EACNet was evaluated on the BraTS2018 and BraTS2020 challenge datasets and achieved state-of-the-art and competitive performance respectively.Notably,the segmentation results for the whole tumor(WT)dice similarity coefficient(DSC)reached 89.27%,surpassing the performance of existing methods.The analysis suggests that the ensemble approach offers potential benefits,and the adversarial co-training contributes to the increased robustness and accuracy of EACNet for brain tumor segmentation of MRI scans with missing modalities.The experimental results show that EACNet has promising results for the task of brain tumor segmentation of MRI scans with missing modalities and is a better candidate for real-world clinical applications.
文摘Medical image segmentation has become a cornerstone for many healthcare applications,allowing for the automated extraction of critical information from images such as Computed Tomography(CT)scans,Magnetic Resonance Imaging(MRIs),and X-rays.The introduction of U-Net in 2015 has significantly advanced segmentation capabilities,especially for small datasets commonly found in medical imaging.Since then,various modifications to the original U-Net architecture have been proposed to enhance segmentation accuracy and tackle challenges like class imbalance,data scarcity,and multi-modal image processing.This paper provides a detailed review and comparison of several U-Net-based architectures,focusing on their effectiveness in medical image segmentation tasks.We evaluate performance metrics such as Dice Similarity Coefficient(DSC)and Intersection over Union(IoU)across different U-Net variants including HmsU-Net,CrossU-Net,mResU-Net,and others.Our results indicate that architectural enhancements such as transformers,attention mechanisms,and residual connections improve segmentation performance across diverse medical imaging applications,including tumor detection,organ segmentation,and lesion identification.The study also identifies current challenges in the field,including data variability,limited dataset sizes,and issues with class imbalance.Based on these findings,the paper suggests potential future directions for improving the robustness and clinical applicability of U-Net-based models in medical image segmentation.
基金supported by grants fromthe State Key Laboratory of Vaccines for Infectious Diseases,Xiang An Biomedicine Laboratory(2023XAKJ0101031)National Natural Science Foundation of China(81971665)+8 种基金Natural Science Foundation of Fujian Province(2021J011366)Medical and Health Guidance Project of Xiamen(3502Z20214ZD1016)Xiamen Health High-Level Talent Training Program,Ningxia Hui Autonomous Region Key Research and Development Program(2022BEG03127)Fundamental Research Funds for the Central Universities of China(20720210117)Fujian Province Science and Technology Plan Guiding Project(2022Y0002)National Natural Science Foundation of China(62005048)Natural Science Foundation of Fujian Province(2020J01158)Ministry of Education Industry-university Cooperative Education Project(220606053295218)XMU Undergraduate Innovation and Entrepreneurship Training Programs(2023X805,2023X808,2023Y1109).
文摘Laser speckle contrast imaging(LSCI)is a noninvasive,label-free technique that allows real-time investigation of the microcirculation situation of biological tissue.High-quality microvascular segmentation is critical for analyzing and evaluating vascular morphology and blood flow dynamics.However,achieving high-quality vessel segmentation has always been a challenge due to the cost and complexity of label data acquisition and the irregular vascular morphology.In addition,supervised learning methods heavily rely on high-quality labels for accurate segmentation results,which often necessitate extensive labeling efforts.Here,we propose a novel approach LSWDP for high-performance real-time vessel segmentation that utilizes low-quality pseudo-labels for nonmatched training without relying on a substantial number of intricate labels and image pairing.Furthermore,we demonstrate that our method is more robust and effective in mitigating performance degradation than traditional segmentation approaches on diverse style data sets,even when confronted with unfamiliar data.Importantly,the dice similarity coefficient exceeded 85%in a rat experiment.Our study has the potential to efficiently segment and evaluate blood vessels in both normal and disease situations.This would greatly benefit future research in life and medicine.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R435),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Segmenting a breast ultrasound image is still challenging due to the presence of speckle noise,dependency on the operator,and the variation of image quality.This paper presents the UltraSegNet architecture that addresses these challenges through three key technical innovations:This work adds three things:(1)a changed ResNet-50 backbone with sequential 3×3 convolutions to keep fine anatomical details that are needed for finding lesion boundaries;(2)a computationally efficient regional attention mechanism that works on high-resolution features without using a transformer’s extra memory;and(3)an adaptive feature fusion strategy that changes local and global featuresbasedonhowthe image isbeing used.Extensive evaluation on two distinct datasets demonstrates UltraSegNet’s superior performance:On the BUSI dataset,it obtains a precision of 0.915,a recall of 0.908,and an F1 score of 0.911.In the UDAIT dataset,it achieves robust performance across the board,with a precision of 0.901 and recall of 0.894.Importantly,these improvements are achieved at clinically feasible computation times,taking 235 ms per image on standard GPU hardware.Notably,UltraSegNet does amazingly well on difficult small lesions(less than 10 mm),achieving a detection accuracy of 0.891.This is a huge improvement over traditional methods that have a hard time with small-scale features,as standard models can only achieve 0.63–0.71 accuracy.This improvement in small lesion detection is particularly crucial for early-stage breast cancer identification.Results from this work demonstrate that UltraSegNet can be practically deployable in clinical workflows to improve breast cancer screening accuracy.
基金supported by Xiamen Medical and Health Guidance Project in 2021(No.3502Z20214ZD1070)supported by a grant from Guangxi Key Laboratory of Machine Vision and Intelligent Control,China(No.2023B02).
文摘The self-attention mechanism of Transformers,which captures long-range contextual information,has demonstrated significant potential in image segmentation.However,their ability to learn local,contextual relationships between pixels requires further improvement.Previous methods face challenges in efficiently managing multi-scale fea-tures of different granularities from the encoder backbone,leaving room for improvement in their global representation and feature extraction capabilities.To address these challenges,we propose a novel Decoder with Multi-Head Feature Receptors(DMHFR),which receives multi-scale features from the encoder backbone and organizes them into three feature groups with different granularities:coarse,fine-grained,and full set.These groups are subsequently processed by Multi-Head Feature Receptors(MHFRs)after feature capture and modeling operations.MHFRs include two Three-Head Feature Receptors(THFRs)and one Four-Head Feature Receptor(FHFR).Each group of features is passed through these MHFRs and then fed into axial transformers,which help the model capture long-range dependencies within the features.The three MHFRs produce three distinct feature outputs.The output from the FHFR serves as auxiliary auxiliary features in the prediction head,and the prediction output and their losses will eventually be aggregated.Experimental results show that the Transformer using DMHFR outperforms 15 state of the arts(SOTA)methods on five public datasets.Specifically,it achieved significant improvements in mean DICE scores over the classic Parallel Reverse Attention Network(PraNet)method,with gains of 4.1%,2.2%,1.4%,8.9%,and 16.3%on the CVC-ClinicDB,Kvasir-SEG,CVC-T,CVC-ColonDB,and ETIS-LaribPolypDB datasets,respectively.
基金Supported by the National Natural Science Foundation of China(No.62001313)the Key Project of Liaoning Provincial Department of Science and Technology(No.2021JH2/10300134,2022JH1/10500004)。
文摘In the realm of medical image segmentation,particularly in cardiac magnetic resonance imaging(MRI),achieving robust performance with limited annotated data is a significant challenge.Performance often degrades when faced with testing scenarios from unknown domains.To address this problem,this paper proposes a novel semi-supervised approach for cardiac magnetic resonance image segmentation,aiming to enhance predictive capabilities and domain generalization(DG).This paper establishes an MT-like model utilizing pseudo-labeling and consistency regularization from semi-supervised learning,and integrates uncertainty estimation to improve the accuracy of pseudo-labels.Additionally,to tackle the challenge of domain generalization,a data manipulation strategy is introduced,extracting spatial and content-related information from images across different domains,enriching the dataset with a multi-domain perspective.This papers method is meticulously evaluated on the publicly available cardiac magnetic resonance imaging dataset M&Ms,validating its effectiveness.Comparative analyses against various methods highlight the out-standing performance of this papers approach,demonstrating its capability to segment cardiac magnetic resonance images in previously unseen domains even with limited annotated data.
基金financially supported by the National Key Research and Development Program(Grant No.2022YFE0107000)the General Projects of the National Natural Science Foundation of China(Grant No.52171259)the High-Tech Ship Research Project of the Ministry of Industry and Information Technology(Grant No.[2021]342)。
文摘Identification of the ice channel is the basic technology for developing intelligent ships in ice-covered waters,which is important to ensure the safety and economy of navigation.In the Arctic,merchant ships with low ice class often navigate in channels opened up by icebreakers.Navigation in the ice channel often depends on good maneuverability skills and abundant experience from the captain to a large extent.The ship may get stuck if steered into ice fields off the channel.Under this circumstance,it is very important to study how to identify the boundary lines of ice channels with a reliable method.In this paper,a two-staged ice channel identification method is developed based on image segmentation and corner point regression.The first stage employs the image segmentation method to extract channel regions.In the second stage,an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region.A non-intelligent angle-based filtering and clustering method is proposed and compared with corner point regression network.The training and evaluation of the segmentation method and corner regression network are carried out on the synthetic and real ice channel dataset.The evaluation results show that the accuracy of the method using the corner point regression network in the second stage is achieved as high as 73.33%on the synthetic ice channel dataset and 70.66%on the real ice channel dataset,and the processing speed can reach up to 14.58frames per second.
基金supported in part by collaborative research with Toyota Motor Corporation,in part by ROIS NII Open Collaborative Research under Grant 21S0601,in part by JSPS KAKENHI under Grants 20H00592,21H03424.
文摘With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication's view, not all pixels in the images are equally important for certain receivers. The existing semantic communication systems directly perform semantic encoding and decoding on the whole image, in which the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm is used to classify each pixel of the image and distinguish ROI and RONI. The system also enables high-quality transmission of ROI with lower communication overheads by transmissions through different semantic communication networks with different bandwidth requirements. An improved metric θPSNR is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement compared with existing approaches, namely, existing semantic communication approaches and the conventional approach without semantics.
基金This work is supported by Natural Science Foundation of Anhui under Grant 1908085MF207,KJ2020A1215,KJ2021A1251 and 2023AH052856the Excellent Youth Talent Support Foundation of Anhui underGrant gxyqZD2021142the Quality Engineering Project of Anhui under Grant 2021jyxm1117,2021kcszsfkc307,2022xsxx158 and 2022jcbs043.
文摘To enhance the diversity and distribution uniformity of initial population,as well as to avoid local extrema in the Chimp Optimization Algorithm(CHOA),this paper improves the CHOA based on chaos initialization and Cauchy mutation.First,Sin chaos is introduced to improve the random population initialization scheme of the CHOA,which not only guarantees the diversity of the population,but also enhances the distribution uniformity of the initial population.Next,Cauchy mutation is added to optimize the global search ability of the CHOA in the process of position(threshold)updating to avoid the CHOA falling into local optima.Finally,an improved CHOA was formed through the combination of chaos initialization and Cauchy mutation(CICMCHOA),then taking fuzzy Kapur as the objective function,this paper applied CICMCHOA to natural and medical image segmentation,and compared it with four algorithms,including the improved Satin Bowerbird optimizer(ISBO),Cuckoo Search(ICS),etc.The experimental results deriving from visual and specific indicators demonstrate that CICMCHOA delivers superior segmentation effects in image segmentation.
基金supported by the UC-National Lab In-Residence Graduate Fellowship Grant L21GF3606supported by a DOD National Defense Science and Engineering Graduate(NDSEG)Research Fellowship+1 种基金supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20170668PRD1 and 20210213ERsupported by the NGA under Contract No.HM04762110003.
文摘Graph learning,when used as a semi-supervised learning(SSL)method,performs well for classification tasks with a low label rate.We provide a graph-based batch active learning pipeline for pixel/patch neighborhood multi-or hyperspectral image segmentation.Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification.This work builds on recent advances in the design of novel active learning acquisition functions(e.g.,the Model Change approach in arXiv:2110.07739)while adding important further developments including patch-neighborhood image analysis and batch active learning methods to further increase the accuracy and greatly increase the computational efficiency of these methods.In addition to improvements in the accuracy,our approach can greatly reduce the number of labeled pixels needed to achieve the same level of the accuracy based on randomly selected labeled pixels.
Funding: Supported by the Korea Institute for Advancement of Technology (KIAT): P0017123, the Competency Development Program for Industry Specialist.
Abstract: The growing demand for energy-efficient solutions has led to increased interest in analyzing building facades, as buildings contribute significantly to energy consumption in urban environments. However, conventional image segmentation methods often struggle to capture fine details such as edges and contours, limiting their effectiveness in identifying areas prone to energy loss. To address this challenge, we propose a novel segmentation methodology that combines object-wise processing with a two-stage deep learning model, Cascade U-Net. Object-wise processing isolates components of the facade, such as walls and windows, for independent analysis, while Cascade U-Net incorporates contour information to enhance segmentation accuracy. The methodology involves four steps: object isolation, which crops and adjusts the image based on bounding boxes; contour extraction, which derives object contours; image segmentation, which modifies and reuses the contours as guide data in Cascade U-Net to segment each area; and segmentation synthesis, which integrates the per-object results to produce the final segmentation map. Applied to a dataset of Korean building images, the proposed method significantly outperformed traditional models, demonstrating improved accuracy and the ability to preserve critical structural details. Furthermore, we applied this approach to classify window thermal loss in real-world scenarios using infrared images, showing its potential to identify windows vulnerable to energy loss. Notably, our Cascade U-Net, which builds upon the relatively lightweight U-Net architecture, also exhibited strong performance, reinforcing the practical value of this method. Our approach offers a practical solution for enhancing energy efficiency in buildings by providing more precise segmentation results.
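A high-level orchestration sketch of the four steps; `contour_model` and `cascade_unet` are hypothetical callables standing in for the trained networks, and `detections` is an assumed list of (label, bounding box) pairs.

```python
import numpy as np

def segment_facade(image, detections, contour_model, cascade_unet):
    """Hedged sketch of the object-wise pipeline: isolate each detected object,
    extract its contour, run contour-guided segmentation, and merge the results."""
    full_mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for label, (x, y, w, h) in detections:
        crop = image[y:y + h, x:x + w]                       # 1. object isolation
        contour = contour_model(crop)                        # 2. contour extraction
        mask = cascade_unet(crop, contour_guide=contour)     # 3. contour-guided segmentation
        full_mask[y:y + h, x:x + w][mask > 0] = label        # 4. synthesis into final map
    return full_mask
```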
Funding: Supported in part by the Tianjin Technology Innovation Guidance Special Fund Project under Grant No. 21YDTPJC00850, in part by the National Natural Science Foundation of China under Grant No. 41906161, and in part by the Natural Science Foundation of Tianjin under Grant No. 21JCQNJC00650.
Abstract: With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance over wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles, yet the limited bandwidth of underwater acoustic channels conflicts with the large volume of sonar image data, making it essential to compress the images before transmission. Deep neural networks have recently shown great value in image compression thanks to their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information and ignore semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an Auto-Encoder-based sonar image compression network whose output is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the receptive field. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to cope with transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information than existing methods at the same compression ratio, and also improves the error tolerance and packet-loss resistance of transmission.
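A minimal sketch of a compression loss guided by a segmentation network's residual, in the spirit of the SSIC framework; the exact loss form, the `alpha` weight, and the use of the original image's segmentation as the target are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def ssic_style_loss(original, reconstructed, seg_net, alpha=0.5):
    """Sketch: pixel reconstruction error plus disagreement between the segmentations
    of the original and the reconstructed sonar image (the 'semantic residual').

    original, reconstructed: tensors of shape (B, C, H, W)
    seg_net: a trained segmentation network returning per-class logits (B, K, H, W)
    alpha:   assumed trade-off weight between pixel and semantic terms
    """
    recon_loss = F.mse_loss(reconstructed, original)
    with torch.no_grad():
        target_seg = seg_net(original).argmax(dim=1)          # pseudo target from the original
    semantic_residual = F.cross_entropy(seg_net(reconstructed), target_seg)
    return recon_loss + alpha * semantic_residual
```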
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 62377026, 62201222; Knowledge Innovation Program of Wuhan-Shuguang Project, Grant/Award Number: 2023010201020382; National Key Research and Development Programme of China, Grant/Award Number: 2022YFD1700204; Fundamental Research Funds for the Central Universities, Grant/Award Numbers: CCNU22QN014, CCNU22JC007, CCNU22XJ034.
Abstract: Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed and show a variety of scales with irregular edges. These complex characteristics make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. First, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions; the MSDA module fuses features at different scales and dynamically adjusts the attention field of each element to generate discriminative multi-scale features. Second, a cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions; the CDASC module utilises the spatial details from encoder features to refine the spatial information of decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
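The MSDA and CDASC modules are not reproduced here; as a rough illustration of the underlying idea of learning where to sample when fusing multi-scale features, the sketch below uses torchvision's deformable convolution with offsets predicted from the features themselves. The block name, channel layout, and fusion-by-summation are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class DeformableFusionBlock(nn.Module):
    """Toy multi-scale fusion block: each scale passes through a deformable convolution
    whose sampling offsets are predicted from the features, then all scales are resized
    to the finest resolution and summed."""

    def __init__(self, channels: int):
        super().__init__()
        # 2 offsets (dy, dx) per position of a 3x3 kernel -> 18 offset channels
        self.offset_pred = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        """feats: list of tensors (B, C, H_i, W_i) at different scales; feats[0] is the finest."""
        target_size = feats[0].shape[-2:]
        fused = torch.zeros_like(feats[0])
        for f in feats:
            offset = self.offset_pred(f)          # dynamic sampling locations per position
            sampled = self.deform(f, offset)      # deformable convolution over f
            fused = fused + F.interpolate(
                sampled, size=target_size, mode="bilinear", align_corners=False
            )
        return fused
```

In a U-shaped network, `feats` could be encoder outputs from several stages projected to a common channel count before fusion.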
Funding: This work is supported in part by the National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power that greatly assisted the activity, and is financially supported by the Chongqing Municipal Education Commission Grants for Major Science and Technology Project (Grant Number gzlcx20243175).
Abstract: Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image processing filters by predicting parameters such as exposure and hue to optimize image quality. We adopt a novel image encoder that improves parameter prediction accuracy by enabling Edip to handle features at different scales. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine the segmentation outputs. The entire network is trained end-to-end, with the segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient architecture is particularly suitable for nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and Nightcity datasets.
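A minimal sketch of the image-adaptive idea behind Edip and Mdif: a tiny CNN predicts per-image exposure and gamma parameters that are applied as differentiable filters before segmentation. The filter set, the parameter ranges, and the module name are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class TinyAdaptiveEnhancer(nn.Module):
    """Sketch: predict per-image [exposure, gamma] and apply them as differentiable filters."""

    def __init__(self):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                      # -> [exposure, gamma] per image
        )

    def forward(self, x):                           # x in [0, 1], shape (B, 3, H, W)
        params = torch.sigmoid(self.predictor(x))
        exposure = 0.5 + params[:, 0:1]             # assumed gain range [0.5, 1.5]
        gamma = 0.5 + 1.5 * params[:, 1:2]          # assumed gamma range [0.5, 2.0]
        x = torch.clamp(x * exposure.view(-1, 1, 1, 1), 0.0, 1.0)
        return x.pow(gamma.view(-1, 1, 1, 1))       # enhanced image fed to the segmenter
```

Because the filters are differentiable, the downstream segmentation loss can back-propagate into the parameter predictor, which is the end-to-end training idea described above.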
Funding: Swiss National Science Foundation (Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung), Grant/Award Number: SNSF 320030_176052.
Abstract: Magnetic resonance (MR) imaging is a widely employed medical imaging technique that produces detailed anatomical images of the human body. The segmentation of MR images plays a crucial role in medical image analysis, as it enables accurate diagnosis, treatment planning, and monitoring of various diseases and conditions. Because sufficient medical images are often lacking, accurate segmentation is challenging, especially with deep learning networks. The aim of this work is to study transfer learning from T1-weighted (T1-w) to T2-weighted (T2-w) MR sequences to enhance bone segmentation with minimal computation resources. Using an excitation-based convolutional neural network, four transfer learning mechanisms are proposed: transfer learning without fine tuning, open fine tuning, conservative fine tuning, and hybrid transfer learning. Moreover, a multi-parametric segmentation model is proposed that uses T2-w MR as an intensity-based augmentation technique. The novelty of this work lies in the hybrid transfer learning approach, which overcomes the overfitting issue and preserves the features of both modalities with minimal computation time and resources. The segmentation results are evaluated using 14 clinical 3D brain MR and CT images. The results reveal that hybrid transfer learning is superior for bone segmentation in terms of performance and computation time, with DSCs of 0.5393±0.0007. Although T2-w-based augmentation has no significant impact on the performance of T1-w MR segmentation, it helps improve T2-w MR segmentation and supports the development of a multi-sequence segmentation model.
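The four mechanisms differ mainly in which layers are retrained on the T2-w data. Below is a hedged sketch on a tiny stand-in network (the excitation-based CNN is not public here); the mapping of each mechanism to a freeze/unfreeze split is an assumption for illustration.

```python
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU())

class TinySegNet(nn.Module):
    """Stand-in segmentation network with a shallow block, a deep block, and a head."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.early = nn.Sequential(conv_block(1, 16), conv_block(16, 32))   # shallow layers
        self.late = nn.Sequential(conv_block(32, 64), conv_block(64, 64))   # deep layers
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        return self.head(self.late(self.early(x)))

def apply_transfer_mechanism(model, mechanism):
    """Assumed splits: 'frozen' trains only the head on T2-w data; 'open' fine-tunes
    everything; 'conservative' fine-tunes only deep layers and head; 'hybrid' freezes
    only the earliest block to balance reuse of T1-w features against overfitting."""
    for p in model.parameters():
        p.requires_grad = True
    if mechanism == "frozen":
        for p in list(model.early.parameters()) + list(model.late.parameters()):
            p.requires_grad = False
    elif mechanism == "conservative":
        for p in model.early.parameters():
            p.requires_grad = False
    elif mechanism == "hybrid":
        for p in model.early[0].parameters():
            p.requires_grad = False
    return model
```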
Abstract: In the present research, we describe a computer-aided detection (CAD) method for automatic fetal head circumference (HC) measurement in 2D ultrasound images across all trimesters of pregnancy. The HC can be used to estimate gestational age and track fetal development. This automated approach is particularly valuable in low-resource settings where access to trained sonographers is limited. The CAD system has two steps: first, Haar-like features were extracted from the ultrasound images to train a random forest classifier to locate the fetal skull; then, the HC was extracted using dynamic programming, an elliptical fit, and a Hough transform. The CAD system was trained on 999 images (HC18 challenge data source) and verified on an independent test set of 335 images from all trimesters, which was manually annotated by an experienced sonographer and a medical expert. The crown-rump length (CRL) measurement was used to establish the reference gestational age (GA). In the first, second, and third trimesters, the median difference between the reference GA and the GA estimated by the experienced sonographer was 0.7±2.7, 0.0±4.5, and 2.0±12.0 days, respectively; the corresponding differences for the medical expert were 1.5±3.0, 1.9±5.0, and 4.0±14 days. The mean difference between the reference GA and the CAD system's GA remained between 0.5 and 5.0 days, with a variation of 2.9 to 12.5 days. The outcomes reveal that the CAD system outperforms an expert sonographer. Compared with results reported in the literature, the presented system achieves comparable or better results. We have evaluated this computerized approach for HC assessment using data from all trimesters of gestation.
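As a worked example of the final measurement step, the sketch below fits an ellipse to detected skull-boundary points with scikit-image's EllipseModel and converts the fitted semi-axes to a circumference via Ramanujan's approximation; the function name and the pixel-spacing input are assumptions.

```python
import numpy as np
from skimage.measure import EllipseModel

def head_circumference_mm(skull_points, pixel_spacing_mm=1.0):
    """Fit an ellipse to (N, 2) skull boundary points and return the head circumference.

    Uses Ramanujan's approximation C ~ pi * [3(a + b) - sqrt((3a + b)(a + 3b))]
    on the fitted semi-axes a, b, scaled by the (assumed) pixel spacing in mm.
    """
    model = EllipseModel()
    if not model.estimate(np.asarray(skull_points, dtype=float)):
        raise ValueError("ellipse fit failed")
    _, _, a, b, _ = model.params                    # (xc, yc, a, b, theta); a, b in pixels
    a, b = a * pixel_spacing_mm, b * pixel_spacing_mm
    return np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
```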
Abstract: Computed Tomography (CT) is a commonly used technology in non-destructive testing of Printed Circuit Boards (PCB), and element segmentation of CT images is a key subsequent step. With the development of deep learning, researchers began to exploit the "pre-training and fine-tuning" training process for multi-element segmentation, reducing the time spent on manual annotation. However, existing element segmentation models only focus on overall pixel-level accuracy and ignore whether the connectivity relationships between elements are correctly identified. To this end, this paper proposes a PCB CT image element segmentation model that optimizes the semantic perception of connectivity relationships (OSPC-seg). The overall training follows the "pre-training and fine-tuning" process. A loss function that optimizes the semantic perception of circuit connectivity relationships (OSPC Loss) is designed to alleviate the class imbalance problem and improve the correct connectivity rate. A correct connectivity rate index (CCR) is also proposed to evaluate the model's ability to recognize connectivity relationships. Experiments show that the mIoU and CCR of OSPC-seg on our datasets are 90.1% and 97.0%, improvements of 1.5% and 1.6% respectively over the baseline model. The visualization results show that segmentation at connection positions is significantly improved, which also demonstrates the effectiveness of OSPC-seg.
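The paper's exact CCR definition is not reproduced above; the sketch below is one plausible reading, counting the fraction of ground-truth connected elements whose pixels stay inside a single predicted connected component. The binarization thresholds and the use of SciPy's connected-component labelling are assumptions.

```python
import numpy as np
from scipy import ndimage

def correct_connectivity_rate(pred_mask, gt_mask):
    """Hedged CCR-style metric: share of ground-truth connected components whose
    covered pixels all fall within exactly one predicted connected component."""
    gt_labels, n_gt = ndimage.label(gt_mask > 0)
    pred_labels, _ = ndimage.label(pred_mask > 0)
    if n_gt == 0:
        return 1.0
    correct = 0
    for k in range(1, n_gt + 1):
        covered = pred_labels[gt_labels == k]
        covered = covered[covered > 0]              # predicted labels under this GT element
        if covered.size > 0 and np.unique(covered).size == 1:
            correct += 1                            # element kept as one connected piece
    return correct / n_gt
```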
Abstract: Deep learning has been extensively applied to medical image segmentation, resulting in significant advances in deep neural networks for this task since the notable success of U-Net in 2015. However, applying deep learning models to ocular medical image segmentation poses unique challenges compared with other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article provides a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation to ocular imaging. It first gives an overview of medical imaging, data processing, and performance evaluation metrics, then analyzes recent developments in U-Net-based network structures, and finally reviews the application of deep learning to the segmentation of ocular medical images, categorized by the type of ocular tissue.