A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition. In the code number detection stage, a Differentiable Binarization Network is used as the backbone network, combined with an Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance detection performance. For text recognition, end-to-end training of a Scene Visual Text Recognition network alleviates recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted on a dataset of can bottom spray code numbers collected on-site, and a transfer experiment was conducted on a dataset of packaging box production dates. The experimental results show that the proposed algorithm can effectively locate the codes of cans at different positions on the roller conveyor and can accurately identify the code numbers at high production line speeds. The Hmean of code number detection is 97.32%, and the accuracy of code number recognition is 98.21%, verifying that the proposed algorithm achieves high accuracy in code number detection and recognition.
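The Differentiable Binarization backbone mentioned above replaces hard thresholding with a steep sigmoid so the threshold map can be learned jointly with the text probability map. A minimal NumPy sketch of that core operation (the slope k = 50 and the constant toy threshold map are illustrative assumptions, not this paper's settings):

```python
import numpy as np

def differentiable_binarization(prob_map, thresh_map, k=50.0):
    """Approximate binary map B = 1 / (1 + exp(-k * (P - T))).

    With a steep slope (k around 50) this behaves like a hard threshold
    at inference time while remaining differentiable during training.
    """
    return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))

# Toy example: a 4x4 probability map against an assumed constant threshold map.
rng = np.random.default_rng(0)
P = rng.random((4, 4)).astype(np.float32)
T = np.full((4, 4), 0.3, dtype=np.float32)
B = differentiable_binarization(P, T)
print((B > 0.5).astype(np.uint8))  # near-binary text/background mask
```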
In computer vision and artificial intelligence, automatic facial expression-based emotion identification of humans has become a popular research and industry problem. Recent demonstrations and applications in several fields, including computer games, smart homes, expression analysis, gesture recognition, surveillance films, depression therapy, patient monitoring, anxiety, and others, have brought attention to its significant academic and commercial importance. This study emphasizes research that has only employed facial images for facial expression recognition (FER), because facial expressions are a basic way that people communicate meaning to each other. The immense achievement of deep learning has resulted in a growing use of its many architectures to enhance efficiency. This review covers machine learning, deep learning, and hybrid methods' use of preprocessing, augmentation techniques, and feature extraction for temporal properties of successive frames of data. The following section gives a brief summary of publicly accessible assessment criteria and then compares them with benchmark results, the most trustworthy way to assess FER-related research topics statistically. This brief synopsis of the subject matter may be beneficial for novices in the field of FER as well as seasoned scholars seeking fruitful avenues for further investigation. The information conveys fundamental knowledge and provides a comprehensive understanding of the most recent state-of-the-art research.
Seal authentication is an important task for verifying the authenticity of stamped seals used in various domains to protect legal documents from tampering and counterfeiting. Stamped seal inspection is commonly audited manually to ensure document authenticity. However, manual assessment of seal images is tedious and labor-intensive due to human errors, inconsistent placement, and variable completeness of the seal. Traditional image recognition systems are inadequate for identifying seal types accurately, necessitating a neural network-based method for seal image recognition. However, neural network-based classification algorithms, such as Residual Networks (ResNet) and Visual Geometry Group with 16 layers (VGG16), yield suboptimal recognition rates on stamp datasets. Additionally, the fixed set of training data categories makes handling new categories challenging. This paper proposes a multi-stage seal recognition algorithm based on a Siamese network to overcome these limitations. Firstly, the seal image is pre-processed by an image rotation correction module based on the Histogram of Oriented Gradients (HOG). Secondly, the similarity between input seal image pairs is measured by a similarity comparison module based on the Siamese network. Finally, the results are compared with the pre-stored standard seal template images in the database to obtain the seal type. To evaluate the performance of the proposed method, we further create a new seal image dataset that contains two subsets with 210,000 valid labeled pairs in total. The proposed work has practical significance in industries where automatic seal authentication is essential, such as the legal, financial, and governmental sectors, where automatic seal recognition can enhance document security and streamline validation processes. Furthermore, the experimental results show that the proposed multi-stage method for seal image recognition outperforms state-of-the-art methods on the two established datasets.
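The final matching step described above reduces to comparing a query embedding against pre-stored template embeddings. A minimal sketch under the assumptions of cosine similarity, a 0.8 acceptance threshold, and 128-d embeddings (none of which are confirmed by the abstract):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity in [-1, 1] between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_seal(query_emb, template_embs, threshold=0.8):
    """Return the seal type whose stored template embedding is most
    similar to the query, or None if no template clears the threshold."""
    best_type, best_score = None, -1.0
    for seal_type, emb in template_embs.items():
        score = cosine_similarity(query_emb, emb)
        if score > best_score:
            best_type, best_score = seal_type, score
    return best_type if best_score >= threshold else None

# Toy usage: random 128-d vectors stand in for the Siamese branch outputs.
rng = np.random.default_rng(0)
templates = {"official": rng.normal(size=128), "contract": rng.normal(size=128)}
query = templates["official"] + 0.05 * rng.normal(size=128)  # near-duplicate seal
print(match_seal(query, templates))  # -> "official"
```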
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy declines significantly when the data is occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes the spatially occluded skeletal joints using the K-Nearest Neighbors (KNN) interpolation method. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for the temporal occlusion of gait information. The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated using STEP-Gen technology, and ELMB, consisting of 3924 gaits, 1835 of which are labeled with the emotions "Happy," "Sad," "Angry," and "Neutral." On the standard Emotion-Gait and ELMB datasets, the proposed method achieved accuracies of 0.900 and 0.896, respectively, attaining performance comparable to other state-of-the-art methods. Furthermore, on the occlusion datasets, the proposed method significantly mitigates the performance degradation caused by occlusion, achieving markedly higher accuracy than other methods.
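The abstract describes the JI Module's KNN completion only at a high level; one plausible reading is to fill an occluded joint from its k temporally nearest visible samples. A minimal NumPy sketch under that assumption (the temporal-neighbor choice and k = 2 are illustrative):

```python
import numpy as np

def knn_interpolate_joint(track, k=3):
    """Fill occluded samples of one joint's trajectory from the mean of
    its k temporally nearest visible samples.

    track: (T, 2) array of 2D coordinates over time, NaN where occluded.
    """
    out = track.copy()
    visible = np.where(~np.isnan(track).any(axis=1))[0]
    for t in np.where(np.isnan(track).any(axis=1))[0]:
        nearest = visible[np.argsort(np.abs(visible - t))[:k]]
        out[t] = track[nearest].mean(axis=0)
    return out

# Toy trajectory with two occluded frames.
traj = np.array([[0.0, 0.0], [1.0, 1.0], [np.nan, np.nan],
                 [3.0, 3.0], [np.nan, np.nan], [5.0, 5.0]])
print(knn_interpolate_joint(traj, k=2))
```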
Micro-expression (ME) recognition is a complex task that requires advanced techniques to extract informative features from facial expressions. Numerous deep neural networks (DNNs) with convolutional structures have been proposed. However, shallow convolutional neural networks often outperform deeper models in mitigating overfitting, particularly with small datasets. Still, many of these methods rely on a single feature for recognition, resulting in an insufficient ability to extract highly effective features. To address this limitation, this paper introduces an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting Algorithm (IDSSCNN-XgBoost) for ME recognition. The proposed method uses a dual-stream architecture in which motion vectors (temporal features) are extracted using TV-L1 optical flow and subtle changes (spatial features) are amplified via Eulerian Video Magnification (EVM). These features are processed by the IDSSCNN, with an attention mechanism applied to refine the extracted features. The outputs are then fused, concatenated, and classified using the XgBoost algorithm. This comprehensive approach significantly improves recognition accuracy by leveraging the strengths of both temporal and spatial information, supported by the robust classification power of XgBoost. The proposed method is evaluated on three publicly available ME databases: the Chinese Academy of Sciences Micro-expression Database (CASMEII), the Spontaneous Micro-Expression Database (SMIC-HS), and Spontaneous Actions and Micro-Movements (SAMM). Experimental results indicate that the proposed model achieves outstanding results compared to recent models, with accuracies of 79.01%, 69.22%, and 68.99% and F1-scores of 75.47%, 68.91%, and 63.84% on CASMEII, SMIC-HS, and SAMM, respectively. The proposed method also has the advantages of operational efficiency and low computational time.
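The late-fusion-plus-XgBoost stage described above can be sketched independently of the CNN streams. Below, random vectors stand in for the temporal and spatial stream outputs, and the XGBClassifier hyperparameters are illustrative assumptions:

```python
import numpy as np
from xgboost import XGBClassifier

# Random vectors stand in for the two stream outputs: 64-d temporal
# (optical-flow) features and 64-d spatial (magnified) features per clip.
rng = np.random.default_rng(0)
n = 200
temporal = rng.normal(size=(n, 64))
spatial = rng.normal(size=(n, 64))
labels = rng.integers(0, 3, size=n)  # three toy expression classes

# Late fusion by concatenation, then gradient-boosted classification.
fused = np.concatenate([temporal, spatial], axis=1)
clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(fused[:150], labels[:150])
# Near chance on random features; real CNN features would separate classes.
print((clf.predict(fused[150:]) == labels[150:]).mean())
```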
Pointer instruments are widely used in the nuclear power industry. Addressing the issues of low accuracy and slow detection speed in recognizing pointer meter readings under varying meter types and distances, this paper proposes a recognition method based on YOLOv8 and DeepLabv3+. To improve the image input quality for the DeepLabv3+ model, the YOLOv8 detector is used to quickly locate the instrument region and crop it as the input image for recognition. To enhance the accuracy and speed of pointer recognition, the backbone network of DeepLabv3+ was replaced with MobileNetv3, and an ECA+ module was designed to replace its SE module, reducing model parameters while improving recognition precision. The decoder's fourfold upsampling was replaced with two twofold upsamplings, and shallow feature maps were fused with encoder features of the corresponding size. The CBAM module was introduced to improve the segmentation accuracy of the pointer. Experiments were conducted on a self-made dataset of pointer-style instruments from nuclear power plants. Results showed that this method achieved a recognition accuracy of 94.5% at a precision level of 2.5, with an average error of 1.522% and an average total processing time of 0.56 seconds, demonstrating strong performance.
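After the pointer is segmented, the reading itself is typically recovered with a standard angle-to-value mapping; the abstract does not spell out this step, so the linear scale and gauge geometry below are illustrative assumptions:

```python
def pointer_reading(needle_deg, zero_deg, full_deg, scale_min, scale_max):
    """Angle method for a pointer gauge: linearly map the needle angle
    between the zero mark and the full-scale mark onto the scale range."""
    swept = (needle_deg - zero_deg) % 360
    full = (full_deg - zero_deg) % 360
    return scale_min + (swept / full) * (scale_max - scale_min)

# Toy gauge: 0-1.6 MPa over a 270-degree sweep starting at 225 degrees.
print(pointer_reading(0, 225, 135, 0.0, 1.6))  # needle at half sweep -> 0.8
```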
Correction to: Nano-Micro Lett. (2023) 15:233, https://doi.org/10.1007/s40820-023-01201-7. Following publication of the original article [1], the authors reported that the first two lines of the introduction were accidentally placed in the right-hand column of the page in the PDF, which affects readability.
Pill image recognition is an important field in computer vision. It has become a vital technology in healthcare and pharmaceuticals due to the need for precise medication identification to prevent errors and ensure patient safety. This survey examines the current state of pill image recognition, focusing on advancements, methodologies, and the challenges that remain unresolved. It provides a comprehensive overview of traditional image processing-based, machine learning-based, deep learning-based, and hybrid methods, and aims to explore the ongoing difficulties in the field. We summarize and classify the methods used in each article, compare the strengths and weaknesses of these four families of methods, and review benchmark datasets for pill image recognition. Additionally, we compare the performance of proposed methods on popular benchmark datasets. The survey also draws on recent advancements, such as Transformer models, and cutting-edge technologies, such as Augmented Reality (AR), to discuss potential research directions and conclude the review. By offering a holistic perspective, this paper aims to serve as a valuable resource for researchers and practitioners striving to advance the field of pill image recognition.
Emotion recognition plays a crucial role in various fields and is a key task in natural language processing (NLP). The objective is to identify and interpret emotional expressions in text. However, traditional emotion recognition approaches often struggle in few-shot cross-domain scenarios due to their limited capacity to generalize semantic features across different domains. Additionally, these methods face challenges in accurately capturing complex emotional states, particularly those that are subtle or implicit. To overcome these limitations, we introduce a novel approach called Dual-Task Contrastive Meta-Learning (DTCML), which combines meta-learning and contrastive learning to improve emotion recognition. Meta-learning enhances the model's ability to generalize to new emotional tasks, while instance contrastive learning further refines the model by distinguishing unique features within each category, enabling it to better differentiate complex emotional expressions. Prototype contrastive learning, in turn, helps the model address the semantic complexity of emotions across different domains, enabling it to learn fine-grained emotion expressions. By leveraging dual tasks, DTCML learns from two domains simultaneously, encouraging the model to learn more diverse and generalizable emotion features and thereby improving its cross-domain adaptability, robustness, and generalization ability. We evaluated the performance of DTCML across four cross-domain settings, and the results show that our method outperforms the best baseline by 5.88%, 12.04%, 8.49%, and 8.40% in terms of accuracy.
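The instance contrastive learning component referenced above is conventionally implemented with an InfoNCE-style loss; the sketch below shows that generic form (the temperature and the cosine-normalized embeddings are assumptions, not DTCML's exact formulation):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Instance contrastive (InfoNCE) loss: each anchor should be closer
    to its own positive than to every other sample in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # diagonal = matched pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))                     # 8 anchor embeddings
print(info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 32))))  # small loss
```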
Graph convolutional networks (GCNs), an essential tool in human action recognition tasks, have achieved excellent performance in previous studies. However, most current skeleton-based action recognition GCN methods use a shared topology, which cannot flexibly adapt to the diverse correlations between joints under different motion features. Moreover, the video-shooting angle or the occlusion of body parts may introduce errors when extracting human pose coordinates with estimation algorithms. In this work, we propose a novel graph convolutional learning framework, called PCCTR-GCN, which integrates pose correction and channel topology refinement for skeleton-based human action recognition. Firstly, a pose correction module (PCM) is introduced, which corrects the input pose coordinates to reduce errors in pose feature extraction. Secondly, channel topology refinement graph convolution (CTR-GC) is employed, which can dynamically learn topology features and aggregate joint features in different channel dimensions, enhancing the feature extraction of graph convolution networks. Finally, considering that the joint and bone streams of skeleton data and their dynamic information are also important for distinguishing different actions, we employ a multi-stream data fusion approach to improve the network's recognition performance. We evaluate the model using top-1 and top-5 classification accuracy. On the benchmark datasets iMiGUE and Kinetics, top-1 accuracy reaches 55.08% and 36.5%, and top-5 accuracy reaches 89.98% and 59.2%, respectively. On the NTU dataset, for the two benchmark RGB+D settings (X-Sub and X-View), classification accuracy achieves 89.7% and 95.4%, respectively.
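The joint, bone, and motion streams mentioned for multi-stream fusion are conventionally derived as below; the toy five-joint skeleton topology is an illustrative assumption:

```python
import numpy as np

# Parent of each joint in a toy 5-joint skeleton (root points to itself).
PARENTS = [0, 0, 1, 1, 3]

def joints_to_bones(joints):
    """Bone-stream data: each bone is the vector from a joint's parent
    to the joint itself."""
    return joints - joints[:, PARENTS, :]

# joints: (frames, joints, xyz)
rng = np.random.default_rng(0)
joints = rng.normal(size=(4, 5, 3))
bones = joints_to_bones(joints)
motion = np.diff(joints, axis=0)  # dynamic (velocity) stream
print(bones.shape, motion.shape)  # (4, 5, 3) (3, 5, 3)
```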
Named entity recognition (NER) in the musk deer domain is the extraction of specific types of entities from unstructured texts, constituting a fundamental component of the knowledge graph, Q&A system, and text summarization system of the domain. Due to limited annotated data, diverse entity types, and the ambiguity of Chinese word boundaries in musk deer domain NER, we present a novel NER model, CAELF-GP, which is based on cross-attention mechanism enhanced lexical features (CAELF). Specifically, we employ BERT as a character encoder and advocate the integration of external lexical information at the character representation layer. In the feature fusion module, instead of indiscriminately merging external dictionary information, we adopt a feature fusion method based on a cross-attention mechanism, which guides the model to focus on important lexical information by calculating the correlation between each character and its corresponding word set. This module enhances the model's semantic representation ability and entity boundary recognition capability. Finally, we introduce the GlobalPointer (GP) decoding module for entity type recognition, capable of identifying both nested and non-nested entities. Since there is currently no publicly available dataset for the musk deer domain, we built a named entity recognition dataset for this domain by collecting relevant literature under the guidance of domain experts. The dataset facilitates the training and validation of the model and provides a data foundation for subsequent related research. The model is evaluated on two public datasets and the musk deer domain dataset. The results show that it is superior to the baseline models, offering a promising technical avenue for the intelligent recognition of named entities in the musk deer domain.
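The cross-attention fusion described above uses each character as a query over its matched lexicon words. A minimal single-head sketch of that pattern (the scaling, dimensions, and final concatenation are assumptions rather than CAELF-GP's exact design):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(char_vec, word_vecs):
    """One character representation (d,) attends over its matched
    lexicon words (n, d); returns a lexicon-aware summary vector."""
    d = char_vec.shape[-1]
    scores = word_vecs @ char_vec / np.sqrt(d)  # relevance of each word
    weights = softmax(scores)
    return weights @ word_vecs                  # weighted word summary

rng = np.random.default_rng(0)
char = rng.normal(size=64)        # BERT character embedding stand-in
words = rng.normal(size=(3, 64))  # embeddings of 3 candidate dictionary words
fused = np.concatenate([char, cross_attention(char, words)])
print(fused.shape)                # (128,) character + lexicon-aware feature
```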
In the task of Facial Expression Recognition (FER), data uncertainty has been a critical factor affecting performance, typically arising from the ambiguity of facial expressions, low-quality images, and the subjectivity of annotators. Tracking the training history reveals that misclassified samples often exhibit high confidence and excessive uncertainty in the early stages of training. To address this issue, we propose an uncertainty-based robust sample selection strategy, which combines confidence error with RandAugment to improve image diversity, effectively reducing the overfitting caused by uncertain samples during deep learning model training. To validate the effectiveness of the proposed method, extensive experiments were conducted on public FER benchmarks. The accuracies obtained were 89.08% on RAF-DB, 63.12% on AffectNet, and 88.73% on FERPlus.
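The abstract does not give the exact confidence-error criterion, but one plausible gate keeps samples whose annotated-class probability stays close to the top prediction and routes the rest to stronger RandAugment-style augmentation. A sketch under that assumption (the margin value is illustrative):

```python
import numpy as np

def select_reliable(probs, labels, margin=0.2):
    """Flag samples whose confidence error is small: the probability of
    the annotated class should not trail the top prediction by more
    than `margin`; the rest are treated as uncertain."""
    top = probs.max(axis=1)
    true = probs[np.arange(len(labels)), labels]
    return (top - true) <= margin  # boolean mask of reliable samples

probs = np.array([[0.7, 0.2, 0.1],    # confident and correct
                  [0.8, 0.1, 0.1],    # confident but label disagrees
                  [0.4, 0.35, 0.25]])
labels = np.array([0, 2, 1])
print(select_reliable(probs, labels))  # [ True False  True]
```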
Background: Enterotoxigenic Escherichia coli (E. coli) is a threat to humans and animals that causes intestinal disorders. Antimicrobial resistance has urged the search for alternatives, including Lactobacillus postbiotics, to mitigate the effects of enterotoxigenic E. coli. Methods: Forty-eight newly weaned pigs were allotted to NC: no challenge/no supplement; PC: F18+ E. coli challenge/no supplement; ATB: F18+ E. coli challenge/bacitracin; and LPB: F18+ E. coli challenge/postbiotics, and fed diets for 28 d. On d 7, pigs were orally inoculated with F18+ E. coli. At d 28, the mucosa-associated microbiota, immune and oxidative stress status, intestinal morphology, gene expression of pattern recognition receptors (PRR), and intestinal barrier function were measured. Data were analyzed using the MIXED procedure in SAS 9.4. Results: PC increased (P<0.05) Helicobacter mastomyrinus whereas it reduced (P<0.05) Prevotella copri and P. stercorea compared to NC. LPB increased (P<0.05) P. stercorea and Dialister succinatiphilus compared with PC. ATB increased (P<0.05) Propionibacterium acnes, Corynebacterium glutamicum, and Sphingomonas pseudosanguinis compared to PC. PC tended to reduce (P=0.054) PGLYRP4 and increased (P<0.05) TLR4, CD14, MDA, and crypt cell proliferation compared with NC. ATB reduced (P<0.05) NOD1 compared with PC. LPB increased (P<0.05) PGLYRP4 and interferon-γ and reduced (P<0.05) NOD1 compared with PC. ATB and LPB reduced (P<0.05) TNF-α and MDA compared with PC. Conclusions: The F18+ E. coli challenge compromised intestinal health. Bacitracin increased beneficial bacteria, showing a trend towards increasing intestinal barrier function, possibly by reducing the expression of PRR genes. Lactobacillus postbiotics enhanced the immunocompetence of nursery pigs by increasing the expression of interferon-γ and PGLYRP4 and by reducing TLR4, NOD1, and CD14.
Safety maintenance of power equipment is of great importance in power grids, in which image-processing-based defect recognition is supposed to classify abnormal conditions during daily inspection. However, owing to the blurred features of defect images, current defect recognition algorithms have poor fine-grained recognition ability. Visual attention can achieve fine-grained recognition through its ability to model long-range dependencies, but it introduces extra computational complexity, especially for the multi-head attention in vision transformer structures. Under these circumstances, this paper proposes a self-reduction multi-head attention module that can reduce computational complexity and be easily combined with a Convolutional Neural Network (CNN). In this manner, local and global features can be calculated simultaneously in the proposed structure, improving defect recognition performance. Specifically, the proposed self-reduction multi-head attention reduces redundant parameters, thereby addressing the problem of limited computational resources. Experimental results were obtained on a defect dataset collected from substations. The results demonstrate the efficiency and superiority of the proposed method over other advanced algorithms.
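The paper's self-reduction design is not detailed here; a related, well-known way to cut multi-head attention cost is to pool the keys and values before computing scores. A single-head sketch of that reduction idea (the pooling scheme and ratio are assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reduced_attention(x, r=4):
    """Single-head attention whose keys/values are average-pooled by a
    factor r, shrinking the score matrix from (N, N) to (N, N // r)."""
    n, d = x.shape
    kv = x.reshape(n // r, r, d).mean(axis=1)  # pooled keys/values
    scores = x @ kv.T / np.sqrt(d)             # (N, N // r)
    return softmax(scores) @ kv

tokens = np.random.default_rng(0).normal(size=(16, 32))  # 16 patch tokens
print(reduced_attention(tokens).shape)  # (16, 32), at a quarter of the cost
```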
A common but flawed design in existing CNN architectures is the use of strided convolutions and/or pooling layers, which results in the loss of fine-grained feature information, especially for low-resolution images and small objects. In this paper, a new CNN building block named SPD-Conv was used, which completely eliminates stride and pooling operations, replacing them with a space-to-depth convolution and a non-strided convolution. This new design has the advantage of downsampling feature maps while retaining discriminant feature information. It also represents a general, unified method that can easily be applied to any CNN architecture, replacing strided convolution and pooling in the same way.
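The space-to-depth step at the heart of SPD-Conv rearranges each s x s spatial block into channels, so downsampling discards nothing. A minimal NumPy sketch of that rearrangement:

```python
import numpy as np

def space_to_depth(x, scale=2):
    """Rearrange an (H, W, C) feature map into (H/s, W/s, C*s*s):
    spatial resolution drops but every value is kept in the channels,
    unlike strided convolution or pooling, which discard information."""
    h, w, c = x.shape
    x = x.reshape(h // scale, scale, w // scale, scale, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // scale, w // scale,
                                              c * scale * scale)

fmap = np.arange(4 * 4 * 3).reshape(4, 4, 3).astype(np.float32)
out = space_to_depth(fmap)
print(fmap.shape, "->", out.shape)  # (4, 4, 3) -> (2, 2, 12)
# A non-strided (stride-1) convolution would then mix the stacked channels.
```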
Counterfeit agricultural products pose a significant challenge to global food security and economic stability, necessitating advanced detection mechanisms to ensure authenticity and quality. To address this pressing issue, we introduce iGFruit, an innovative model designed to enhance the detection of counterfeit agricultural products by integrating multimodal data processing. Our approach utilizes both image and text data for comprehensive feature extraction, employing advanced backbone models such as the Vision Transformer (ViT), the Normalizer-Free Network (NFNet), and Bidirectional Encoder Representations from Transformers (BERT). The extracted features are fused and processed using a Graph Attention Network (GAT) to capture intricate relationships within the multimodal data. The resulting fused representation is then classified to detect counterfeit products with high precision. We validate the effectiveness of iGFruit through extensive experiments on two datasets, the publicly available MIT-States dataset and the proprietary TLU-States dataset, achieving state-of-the-art performance on both benchmarks. Specifically, iGFruit improves average accuracy by over 3% compared to baseline models while maintaining computational efficiency during inference. This work underscores the necessity and innovativeness of integrating graph-based feature learning to tackle the critical issue of counterfeit agricultural product detection.
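The GAT fusion stage scores each edge of the multimodal graph and aggregates neighbours accordingly. A minimal single-head sketch in NumPy (the LeakyReLU slope of 0.2 follows the standard GAT formulation; the three-node modality graph is an illustrative assumption):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gat_layer(h, adj, W, a):
    """Single-head graph attention: score each edge from the concatenated
    transformed endpoints, mask non-edges, and aggregate neighbours."""
    z = h @ W                                            # (N, F')
    n = z.shape[0]
    pair = np.concatenate([np.repeat(z, n, 0), np.tile(z, (n, 1))], 1)
    logits = pair @ a
    e = np.maximum(0.2 * logits, logits).reshape(n, n)   # LeakyReLU(0.2)
    e = np.where(adj > 0, e, -1e9)                       # attend only along edges
    return softmax(e) @ z

rng = np.random.default_rng(0)
h = rng.normal(size=(3, 8))  # e.g., image, text, and fused feature nodes
adj = np.ones((3, 3))        # fully connected toy modality graph
out = gat_layer(h, adj, rng.normal(size=(8, 4)), rng.normal(size=(8,)))
print(out.shape)             # (3, 4)
```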
The Fine-grained Image Recognition (FGIR) task is dedicated to distinguishing similar sub-categories that belong to the same super-category, such as bird species and car types. To highlight visual differences, existing FGIR works often follow two steps: discriminative sub-region localization and local feature representation. However, these works pay less attention to global context information, neglecting the fact that subtle visual differences in challenging scenarios can be highlighted by exploiting the spatial relationships among different sub-regions from a global viewpoint. Therefore, in this paper, we consider both global and local information for FGIR and propose a collaborative teacher-student strategy to reinforce and unify the two types of information. Our framework is implemented mainly with convolutional neural networks and is referred to as the Teacher-Student Based Attention Convolutional Neural Network (T-S-ACNN). For fine-grained local information, we choose the classic Multi-Attention Network (MA-Net) as our baseline and propose a type of boundary constraint to further reduce background noise in the local attention maps. In this way, the discriminative sub-regions tend to appear in the area occupied by fine-grained objects, leading to more accurate sub-region localization. For fine-grained global information, we design a graph convolution based Global Attention Network (GA-Net), which combines local attention maps extracted from MA-Net with non-local techniques to explore the spatial relationships among sub-regions. Finally, we develop a collaborative teacher-student strategy to adaptively determine the attended roles and optimization modes, so as to enhance the cooperative reinforcement of MA-Net and GA-Net. Extensive experiments on the CUB-200-2011, Stanford Cars, and FGVC Aircraft datasets illustrate the promising performance of our framework.
In recent years, audio pattern recognition has emerged as a key area of research, driven by its applications in human-computer interaction, robotics, and healthcare. Traditional methods, which rely heavily on handcrafted features such as Mel filters, often suffer from information loss and limited feature representation capabilities. To address these limitations, this study proposes an innovative end-to-end audio pattern recognition framework that directly processes raw audio signals, preserving original information and extracting effective classification features. The proposed framework uses a dual-branch architecture: a global refinement module that retains channel and temporal details, and a multi-scale embedding module that captures high-level semantic information. Additionally, a guided fusion module integrates complementary features from both branches, ensuring a comprehensive representation of audio data. Specifically, the multi-scale audio context embedding module is designed to effectively extract spatiotemporal dependencies, while the global refinement module aggregates multi-scale channel and temporal cues for enhanced modeling. The guided fusion module leverages these features to achieve efficient integration of complementary information, resulting in improved classification accuracy. Experimental results demonstrate the model's superior performance on multiple datasets, including ESC-50, UrbanSound8K, RAVDESS, and CREMA-D, with classification accuracies of 93.25%, 90.91%, 92.36%, and 70.50%, respectively. These results highlight the robustness and effectiveness of the proposed framework, which significantly outperforms existing approaches. By addressing critical challenges such as information loss and limited feature representation, this work provides new insights and methodologies for advancing audio classification and multimodal interaction systems.
Fingerprint features, as unique and stable biometric identifiers, are crucial for identity verification. However, traditional centralized methods of processing these sensitive data linked to personal identity pose significant privacy risks, potentially leading to user data leakage. Federated Learning allows multiple clients to collaboratively train and optimize models without sharing raw data, effectively addressing privacy and security concerns. However, variations in fingerprint data due to factors such as region, ethnicity, sensor quality, and environmental conditions result in significant heterogeneity across clients. This heterogeneity adversely impacts the generalization ability of the global model, limiting its performance across diverse distributions. To address these challenges, we propose an Adaptive Federated Fingerprint Recognition algorithm (AFFR) based on Federated Learning. The algorithm incorporates a generalization adjustment mechanism that evaluates the generalization gap between the local models and the global model, adaptively adjusting aggregation weights to mitigate the impact of heterogeneity caused by differences in data quality and feature characteristics. Additionally, a noise mechanism is embedded in client-side training to reduce the risk of fingerprint data leakage arising from weight disclosures during model updates. Experiments conducted on three public datasets demonstrate that AFFR significantly enhances model accuracy while ensuring robust privacy protection, showcasing its strong application potential and competitiveness in heterogeneous data environments.
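The abstract leaves the generalization adjustment unspecified; one plausible scheme down-weights clients with a large local-to-global generalization gap via a softmax over negative gaps. A sketch under that assumption (beta and the flattened-parameter view are illustrative):

```python
import numpy as np

def aggregate(client_params, gen_gaps, beta=5.0):
    """Weighted FedAvg variant: clients whose local models show a large
    generalization gap against the global model are down-weighted via a
    softmax over negative gaps."""
    alphas = np.exp(-beta * np.array(gen_gaps))
    alphas /= alphas.sum()
    return sum(a * p for a, p in zip(alphas, client_params))

# Three clients' flattened model parameters and their measured gaps.
rng = np.random.default_rng(0)
params = [rng.normal(size=10) for _ in range(3)]
global_params = aggregate(params, gen_gaps=[0.05, 0.30, 0.10])
print(global_params.shape)  # (10,) aggregated model
```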
Bird vocalizations are pivotal for ecological monitoring, providing insights into biodiversity and ecosystem health. Traditional recognition methods often neglect phase information, resulting in incomplete feature representation. In this paper, we introduce a novel approach to bird vocalization recognition (BVR) that integrates both amplitude and phase information, leading to enhanced species identification. We propose MHAResNet, a deep learning (DL) model that employs residual blocks and a multi-head attention mechanism to capture salient features from the logarithmic power (POW), instantaneous frequency (IF), and group delay (GD) extracted from bird vocalizations. Experiments on three bird vocalization datasets demonstrate our method's superior performance, achieving accuracy rates of 94%, 98.9%, and 87.1%, respectively. These results indicate that our approach provides a more effective representation of bird vocalizations, outperforming existing methods. This integration of phase information into BVR is innovative and significantly advances automatic bird monitoring technology, offering valuable tools for ecological research and conservation efforts.
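POW, IF, and GD are all standard short-time Fourier transform derivatives: log power, the phase change over time, and the negative phase change over frequency. A sketch of computing all three (window length, overlap, and the discrete-difference approximations are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.signal import stft

# Toy signal: a 1 s upward chirp at 16 kHz standing in for a bird call.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (2000 + 1500 * t) * t)

f, frames, Z = stft(x, fs=fs, nperseg=512, noverlap=256)
pow_feat = np.log(np.abs(Z) ** 2 + 1e-10)   # logarithmic power (POW)
phase_t = np.unwrap(np.angle(Z), axis=1)
if_feat = np.diff(phase_t, axis=1)          # instantaneous frequency (IF)
phase_f = np.unwrap(np.angle(Z), axis=0)
gd_feat = -np.diff(phase_f, axis=0)         # group delay (GD)
print(pow_feat.shape, if_feat.shape, gd_feat.shape)
```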
文摘A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition.In the coding number detection stage,Differentiable Binarization Network is used as the backbone network,combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model detection effect.In terms of text recognition,using the Scene Visual Text Recognition coding number recognition network for end-to-end training can alleviate the problem of coding recognition errors caused by image color distortion due to variations in lighting and background noise.In addition,model pruning and quantization are used to reduce the number ofmodel parameters to meet deployment requirements in resource-constrained environments.A comparative experiment was conducted using the dataset of tank bottom spray code numbers collected on-site,and a transfer experiment was conducted using the dataset of packaging box production date.The experimental results show that the algorithm proposed in this study can effectively locate the coding of cans at different positions on the roller conveyor,and can accurately identify the coding numbers at high production line speeds.The Hmean value of the coding number detection is 97.32%,and the accuracy of the coding number recognition is 98.21%.This verifies that the algorithm proposed in this paper has high accuracy in coding number detection and recognition.
文摘In computer vision and artificial intelligence,automatic facial expression-based emotion identification of humans has become a popular research and industry problem.Recent demonstrations and applications in several fields,including computer games,smart homes,expression analysis,gesture recognition,surveillance films,depression therapy,patientmonitoring,anxiety,and others,have brought attention to its significant academic and commercial importance.This study emphasizes research that has only employed facial images for face expression recognition(FER),because facial expressions are a basic way that people communicate meaning to each other.The immense achievement of deep learning has resulted in a growing use of its much architecture to enhance efficiency.This review is on machine learning,deep learning,and hybrid methods’use of preprocessing,augmentation techniques,and feature extraction for temporal properties of successive frames of data.The following section gives a brief summary of assessment criteria that are accessible to the public and then compares them with benchmark results the most trustworthy way to assess FER-related research topics statistically.In this review,a brief synopsis of the subject matter may be beneficial for novices in the field of FER as well as seasoned scholars seeking fruitful avenues for further investigation.The information conveys fundamental knowledge and provides a comprehensive understanding of the most recent state-of-the-art research.
基金the National Natural Science Foundation of China(Grant No.62172132)Public Welfare Technology Research Project of Zhejiang Province(Grant No.LGF21F020014)the Opening Project of Key Laboratory of Public Security Information Application Based on Big-Data Architecture,Ministry of Public Security of Zhejiang Police College(Grant No.2021DSJSYS002).
文摘Seal authentication is an important task for verifying the authenticity of stamped seals used in various domains to protect legal documents from tampering and counterfeiting.Stamped seal inspection is commonly audited manually to ensure document authenticity.However,manual assessment of seal images is tedious and laborintensive due to human errors,inconsistent placement,and completeness of the seal.Traditional image recognition systems are inadequate enough to identify seal types accurately,necessitating a neural network-based method for seal image recognition.However,neural network-based classification algorithms,such as Residual Networks(ResNet)andVisualGeometryGroup with 16 layers(VGG16)yield suboptimal recognition rates on stamp datasets.Additionally,the fixed training data categories make handling new categories to be a challenging task.This paper proposes amulti-stage seal recognition algorithmbased on Siamese network to overcome these limitations.Firstly,the seal image is pre-processed by applying an image rotation correction module based on Histogram of Oriented Gradients(HOG).Secondly,the similarity between input seal image pairs is measured by utilizing a similarity comparison module based on the Siamese network.Finally,we compare the results with the pre-stored standard seal template images in the database to obtain the seal type.To evaluate the performance of the proposed method,we further create a new seal image dataset that contains two subsets with 210,000 valid labeled pairs in total.The proposed work has a practical significance in industries where automatic seal authentication is essential as in legal,financial,and governmental sectors,where automatic seal recognition can enhance document security and streamline validation processes.Furthermore,the experimental results show that the proposed multi-stage method for seal image recognition outperforms state-of-the-art methods on the two established datasets.
基金supported by the National Natural Science Foundation of China(62272049,62236006,62172045)the Key Projects of Beijing Union University(ZKZD202301).
文摘In recent years,gait-based emotion recognition has been widely applied in the field of computer vision.However,existing gait emotion recognition methods typically rely on complete human skeleton data,and their accuracy significantly declines when the data is occluded.To enhance the accuracy of gait emotion recognition under occlusion,this paper proposes a Multi-scale Suppression Graph ConvolutionalNetwork(MS-GCN).TheMS-GCN consists of three main components:Joint Interpolation Module(JI Moudle),Multi-scale Temporal Convolution Network(MS-TCN),and Suppression Graph Convolutional Network(SGCN).The JI Module completes the spatially occluded skeletal joints using the(K-Nearest Neighbors)KNN interpolation method.The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait,compensating for the temporal occlusion of gait information.The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body part features,thereby reducing the negative impact of occlusion on emotion recognition results.The proposed method is evaluated on two comprehensive datasets:Emotion-Gait,containing 4227 real gaits from sources like BML,ICT-Pollick,and ELMD,and 1000 synthetic gaits generated using STEP-Gen technology,and ELMB,consisting of 3924 gaits,with 1835 labeled with emotions such as“Happy,”“Sad,”“Angry,”and“Neutral.”On the standard datasets Emotion-Gait and ELMB,the proposed method achieved accuracies of 0.900 and 0.896,respectively,attaining performance comparable to other state-ofthe-artmethods.Furthermore,on occlusion datasets,the proposedmethod significantly mitigates the performance degradation caused by occlusion compared to other methods,the accuracy is significantly higher than that of other methods.
基金supported by the Key Research and Development Program of Jiangsu Province under Grant BE2022059-3,CTBC Bank through the Industry-Academia Cooperation Project,as well as by the Ministry of Science and Technology of Taiwan through Grants MOST-108-2218-E-002-055,MOST-109-2223-E-009-002-MY3,MOST-109-2218-E-009-025,and MOST431109-2218-E-002-015.
文摘Micro-expressions(ME)recognition is a complex task that requires advanced techniques to extract informative features fromfacial expressions.Numerous deep neural networks(DNNs)with convolutional structures have been proposed.However,unlike DNNs,shallow convolutional neural networks often outperform deeper models in mitigating overfitting,particularly with small datasets.Still,many of these methods rely on a single feature for recognition,resulting in an insufficient ability to extract highly effective features.To address this limitation,in this paper,an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting Algorithm(IDSSCNN-XgBoost)is introduced for ME Recognition.The proposed method utilizes a dual-stream architecture where motion vectors(temporal features)are extracted using Optical Flow TV-L1 and amplify subtle changes(spatial features)via EulerianVideoMagnification(EVM).These features are processed by IDSSCNN,with an attention mechanism applied to refine the extracted effective features.The outputs are then fused,concatenated,and classified using the XgBoost algorithm.This comprehensive approach significantly improves recognition accuracy by leveraging the strengths of both temporal and spatial information,supported by the robust classification power of XgBoost.The proposed method is evaluated on three publicly available ME databases named Chinese Academy of Sciences Micro-expression Database(CASMEII),Spontaneous Micro-Expression Database(SMICHS),and Spontaneous Actions and Micro-Movements(SAMM).Experimental results indicate that the proposed model can achieve outstanding results compared to recent models.The accuracy results are 79.01%,69.22%,and 68.99%on CASMEII,SMIC-HS,and SAMM,and the F1-score are 75.47%,68.91%,and 63.84%,respectively.The proposed method has the advantage of operational efficiency and less computational time.
文摘Pointer instruments are widely used in the nuclear power industry. Addressing the issues of low accuracy and slow detection speed in recognizing pointer meter readings under varying types and distances, this paper proposes a recognition method based on YOLOv8 and DeepLabv3+. To improve the image input quality of the DeepLabv3+ model, the YOLOv8 detector is used to quickly locate the instrument region and crop it as the input image for recognition. To enhance the accuracy and speed of pointer recognition, the backbone network of DeepLabv3+ was replaced with Mo-bileNetv3, and the ECA+ module was designed to replace its SE module, reducing model parameters while improving recognition precision. The decoder’s fourfold-up sampling was replaced with two twofold-up samplings, and shallow feature maps were fused with encoder features of the corresponding size. The CBAM module was introduced to improve the segmentation accuracy of the pointer. Experiments were conducted using a self-made dataset of pointer-style instruments from nuclear power plants. Results showed that this method achieved a recognition accuracy of 94.5% at a precision level of 2.5, with an average error of 1.522% and an average total processing time of 0.56 seconds, demonstrating strong performance.
文摘Correction to:Nano-Micro Lett.(2023)15:233 https://doi.org/10.1007/s40820-023-01201-7 Following publication of the original article[1],the authors reported that the first two lines of the introduction were accidentally placed in the right-hand column of the page in the PDF,which affects the readability.
文摘Pill image recognition is an important field in computer vision.It has become a vital technology in healthcare and pharmaceuticals due to the necessity for precise medication identification to prevent errors and ensure patient safety.This survey examines the current state of pill image recognition,focusing on advancements,methodologies,and the challenges that remain unresolved.It provides a comprehensive overview of traditional image processing-based,machine learning-based,deep learning-based,and hybrid-based methods,and aims to explore the ongoing difficulties in the field.We summarize and classify the methods used in each article,compare the strengths and weaknesses of traditional image processing-based,machine learning-based,deep learning-based,and hybrid-based methods,and review benchmark datasets for pill image recognition.Additionally,we compare the performance of proposed methods on popular benchmark datasets.This survey applies recent advancements,such as Transformer models and cutting-edge technologies like Augmented Reality(AR),to discuss potential research directions and conclude the review.By offering a holistic perspective,this paper aims to serve as a valuable resource for researchers and practitioners striving to advance the field of pill image recognition.
基金supported by the ScientificResearch and Innovation Team Program of Sichuan University of Science and Technology(No.SUSE652A006)Sichuan Key Provincial Research Base of Intelligent Tourism(ZHYJ22-03)In addition,it is also listed as a project of Sichuan Provincial Science and Technology Programme(2022YFG0028).
文摘Emotion recognition plays a crucial role in various fields and is a key task in natural language processing (NLP). The objective is to identify and interpret emotional expressions in text. However, traditional emotion recognition approaches often struggle in few-shot cross-domain scenarios due to their limited capacity to generalize semantic features across different domains. Additionally, these methods face challenges in accurately capturing complex emotional states, particularly those that are subtle or implicit. To overcome these limitations, we introduce a novel approach called Dual-Task Contrastive Meta-Learning (DTCML). This method combines meta-learning and contrastive learning to improve emotion recognition. Meta-learning enhances the model’s ability to generalize to new emotional tasks, while instance contrastive learning further refines the model by distinguishing unique features within each category, enabling it to better differentiate complex emotional expressions. Prototype contrastive learning, in turn, helps the model address the semantic complexity of emotions across different domains, enabling the model to learn fine-grained emotions expression. By leveraging dual tasks, DTCML learns from two domains simultaneously, the model is encouraged to learn more diverse and generalizable emotions features, thereby improving its cross-domain adaptability and robustness, and enhancing its generalization ability. We evaluated the performance of DTCML across four cross-domain settings, and the results show that our method outperforms the best baseline by 5.88%, 12.04%, 8.49%, and 8.40% in terms of accuracy.
基金The Fundamental Research Funds for the Central Universities provided financial support for this research.
文摘Graph convolutional network(GCN)as an essential tool in human action recognition tasks have achieved excellent performance in previous studies.However,most current skeleton-based action recognition using GCN methods use a shared topology,which cannot flexibly adapt to the diverse correlations between joints under different motion features.The video-shooting angle or the occlusion of the body parts may bring about errors when extracting the human pose coordinates with estimation algorithms.In this work,we propose a novel graph convolutional learning framework,called PCCTR-GCN,which integrates pose correction and channel topology refinement for skeleton-based human action recognition.Firstly,a pose correction module(PCM)is introduced,which corrects the pose coordinates of the input network to reduce the error in pose feature extraction.Secondly,channel topology refinement graph convolution(CTR-GC)is employed,which can dynamically learn the topology features and aggregate joint features in different channel dimensions so as to enhance the performance of graph convolution networks in feature extraction.Finally,considering that the joint stream and bone stream of skeleton data and their dynamic information are also important for distinguishing different actions,we employ a multi-stream data fusion approach to improve the network’s recognition performance.We evaluate the model using top-1 and top-5 classification accuracy.On the benchmark datasets iMiGUE and Kinetics,the top-1 classification accuracy reaches 55.08%and 36.5%,respectively,while the top-5 classification accuracy reaches 89.98%and 59.2%,respectively.On the NTU dataset,for the two benchmark RGB+Dsettings(X-Sub and X-View),the classification accuracy achieves 89.7%and 95.4%,respectively.
基金funded by 5·5 Engineering Research&Innovation Team Project of Beijing Forestry University(No.BLRC2023C02).
文摘Named entity recognition(NER)in musk deer domain is the extraction of specific types of entities from unstructured texts,constituting a fundamental component of the knowledge graph,Q&A system,and text summarization system of musk deer domain.Due to limited annotated data,diverse entity types,and the ambiguity of Chinese word boundaries in musk deer domain NER,we present a novel NER model,CAELF-GP,which is based on cross-attention mechanism enhanced lexical features(CAELF).Specifically,we employ BERT as a character encoder and advocate the integration of external lexical information at the character representation layer.In the feature fusion module,instead of indiscriminately merging external dictionary information,we innovatively adopted a feature fusion method based on a cross-attention mechanism,which guides the model to focus on important lexical information by calculating the correlation between each character and its corresponding word sets.This module enhances the model’s semantic representation ability and entity boundary recognition capability.Ultimately,we introduce the decoding module of GlobalPointer(GP)for entity type recognition,capable of identifying both nested and non-nested entities.Since there is currently no publicly available dataset for the musk deer domain,we built a named entity recognition dataset for this domain by collecting relevant literature and working under the guidance of domain experts.The dataset facilitates the training and validation of the model and provides data foundation for subsequent related research.The model undergoes experimentation on two public datasets and the dataset of musk deer domain.The results show that it is superior to the baseline models,offering a promising technical avenue for the intelligent recognition of named entities in the musk deer domain.
文摘In the task of Facial Expression Recognition(FER),data uncertainty has been a critical factor affecting performance,typically arising from the ambiguity of facial expressions,low-quality images,and the subjectivity of annotators.Tracking the training history reveals that misclassified samples often exhibit high confidence and excessive uncertainty in the early stages of training.To address this issue,we propose an uncertainty-based robust sample selection strategy,which combines confidence error with RandAugment to improve image diversity,effectively reducing overfitting caused by uncertain samples during deep learning model training.To validate the effectiveness of the proposed method,extensive experiments were conducted on FER public benchmarks.The accuracy obtained were 89.08%on RAF-DB,63.12%on AffectNet,and 88.73%on FERPlus.
文摘Background Enterotoxigenic Escherichia coli(E.coli)is a threat to humans and animals that causes intestinal dis-orders.Antimicrobial resistance has urged alternatives,including Lactobacillus postbiotics,to mitigate the effects of enterotoxigenic E.coli.Methods Forty-eight newly weaned pigs were allotted to NC:no challenge/no supplement;PC:F18^(+)E.coli chal-lenge/no supplement;ATB:F18^(+)E.coli challenge/bacitracin;and LPB:F18^(+)E.coli challenge/postbiotics and fed diets for 28 d.On d 7,pigs were orally inoculated withF18^(+)E.coli.At d 28,the mucosa-associated microbiota,immune and oxidative stress status,intestinal morphology,the gene expression of pattern recognition receptors(PRR),and intestinal barrier function were measured.Data were analyzed using the MIXED procedure in SAS 9.4.Results PC increased(P<0.05)Helicobacter mastomyrinus whereas reduced(P<0.05)Prevotella copri and P.ster-corea compared to NC.The LPB increased(P<0.05)P.stercorea and Dialister succinatiphilus compared with PC.The ATB increased(P<0.05)Propionibacterium acnes,Corynebacterium glutamicum,and Sphingomonas pseudosanguinis compared to PC.The PC tended to reduce(P=0.054)PGLYRP4 and increased(P<0.05)TLR4,CD14,MDA,and crypt cell proliferation compared with NC.The ATB reduced(P<0.05)NOD1 compared with PC.The LPB increased(P<0.05)PGLYRP4,and interferon-γand reduced(P<0.05)NOD1 compared with PC.The ATB and LPB reduced(P<0.05)TNF-αand MDA compared with PC.Conclusions TheF18^(+)E.coli challenge compromised intestinal health.Bacitracin increased beneficial bacteria show-ing a trend towards increasing the intestinal barrier function,possibly by reducing the expression of PRR genes.Lac-tobacillus postbiotics enhanced the immunocompetence of nursery pigs by increasing the expression of interferon-γand PGLYRP4,and by reducing TLR4,NOD1,and CD14.
基金supported in part by Major Program of the National Natural Science Foundation of China under Grant 62127803.
文摘Safety maintenance of power equipment is of great importance in power grids,in which image-processing-based defect recognition is supposed to classify abnormal conditions during daily inspection.However,owing to the blurred features of defect images,the current defect recognition algorithm has poor fine-grained recognition ability.Visual attention can achieve fine-grained recognition with its abil-ity to model long-range dependencies while introducing extra computational complexity,especially for multi-head attention in vision transformer structures.Under these circumstances,this paper proposes a self-reduction multi-head attention module that can reduce computational complexity and be easily combined with a Convolutional Neural Network(CNN).In this manner,local and global fea-tures can be calculated simultaneously in our proposed structure,aiming to improve the defect recognition performance.Specifically,the proposed self-reduction multi-head attention can reduce redundant parameters,thereby solving the problem of limited computational resources.Experimental results were obtained based on the defect dataset collected from the substation.The results demonstrated the efficiency and superiority of the proposed method over other advanced algorithms.
文摘A common but flawed design in existing CNN architectures is using strided convolutions and/or pooling layer,which will result in the loss of fine-grained feature information,especially for low-resolution images and small objects.In this paper,a new CNN building block named SPD-Conv was used,which completely eliminated stride and pooling operations and replaced them with a space-to-depth convolution and a non-strided convolution.Such new design has the advantage of downsampling feature maps while retaining discriminant feature information.It also represents a general unified method,which can be easily applied to any CNN architectures,and can also be applied to strided conversion and pooling in the same way.
Abstract: Counterfeit agricultural products pose a significant challenge to global food security and economic stability, necessitating advanced detection mechanisms to ensure authenticity and quality. To address this pressing issue, we introduce iGFruit, an innovative model designed to enhance the detection of counterfeit agricultural products by integrating multimodal data processing. Our approach utilizes both image and text data for comprehensive feature extraction, employing advanced backbone models such as Vision Transformer (ViT), Normalizer-Free Network (NFNet), and Bidirectional Encoder Representations from Transformers (BERT). These extracted features are fused and processed using a Graph Attention Network (GAT) to capture intricate relationships within the multimodal data. The resulting fused representation is subsequently classified to detect counterfeit products with high precision. We validate the effectiveness of iGFruit through extensive experiments on two datasets: the publicly available MIT-States dataset and the proprietary TLU-States dataset, achieving state-of-the-art performance on both benchmarks. Specifically, iGFruit demonstrates an improvement of over 3% in average accuracy compared to baseline models, all while maintaining computational efficiency during inference. This work underscores the necessity and innovativeness of integrating graph-based feature learning to tackle the critical issue of counterfeit agricultural product detection.
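To make the fusion step concrete, here is a hedged PyTorch sketch of graph-attention fusion over modality features: each backbone output (e.g., ViT, NFNet, and BERT embeddings already projected to a common size) becomes one node of a small fully connected graph, a single attention layer lets the nodes exchange information, and a pooled representation is classified. iGFruit's actual GAT wiring is not public, so the class name, dimensions, and single-layer design are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATFusion(nn.Module):
    """One graph-attention layer over modality nodes, then classify (sketch)."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn_score = nn.Linear(2 * dim, 1)   # score a(Wh_i || Wh_j)
        self.cls = nn.Linear(dim, num_classes)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (B, N, D), one row per modality feature.
        h = self.proj(nodes)
        b, n, d = h.shape
        # Pairwise attention logits over the fully connected graph.
        hi = h.unsqueeze(2).expand(b, n, n, d)
        hj = h.unsqueeze(1).expand(b, n, n, d)
        e = F.leaky_relu(self.attn_score(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = torch.softmax(e, dim=-1)          # (B, N, N) neighbor weights
        fused = torch.bmm(alpha, h)               # attention-weighted aggregation
        return self.cls(fused.mean(dim=1))        # pool nodes, classify

# Three modality embeddings, each already projected to 256-d:
feats = torch.randn(8, 3, 256)                    # batch of 8, 3 nodes each
logits = SimpleGATFusion(dim=256, num_classes=2)(feats)
```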
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62171232) and the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.
Abstract: The Fine-grained Image Recognition (FGIR) task is dedicated to distinguishing similar sub-categories that belong to the same super-category, such as bird species and car types. In order to highlight visual differences, existing FGIR works often follow two steps: discriminative sub-region localization and local feature representation. However, these works pay less attention to global context information. They neglect the fact that subtle visual differences in challenging scenarios can be highlighted by exploiting the spatial relationships among different sub-regions from a global viewpoint. Therefore, in this paper, we consider both global and local information for FGIR, and propose a collaborative teacher-student strategy to reinforce and unify the two types of information. Our framework is implemented mainly with convolutional neural networks and is referred to as the Teacher-Student Based Attention Convolutional Neural Network (T-S-ACNN). For fine-grained local information, we choose the classic Multi-Attention Network (MA-Net) as our baseline, and propose a type of boundary constraint to further reduce background noise in the local attention maps. In this way, the discriminative sub-regions tend to appear in the area occupied by fine-grained objects, leading to more accurate sub-region localization. For fine-grained global information, we design a graph convolution based Global Attention Network (GA-Net), which combines the local attention maps extracted from MA-Net with non-local techniques to explore spatial relationships among sub-regions. Finally, we develop a collaborative teacher-student strategy to adaptively determine the attended roles and optimization modes, so as to enhance the cooperative reinforcement of MA-Net and GA-Net. Extensive experiments on the CUB-200-2011, Stanford Cars and FGVC Aircraft datasets illustrate the promising performance of our framework.
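The abstract does not define the boundary constraint; one simple way such a constraint could be realized is a penalty on attention mass placed near the image border, which pushes discriminative sub-regions onto the object. The PyTorch sketch below illustrates only that reading; the function name, the margin parameter, and the loss form are hypothetical, not the paper's formulation.

```python
import torch

def boundary_penalty(attn: torch.Tensor, margin: int = 4) -> torch.Tensor:
    """Mean attention mass within `margin` pixels of the border (sketch).

    attn: (B, K, H, W) attention maps, assumed normalized to sum to 1
    over the spatial dimensions. Returned value can be added to the
    classification loss with a small weight.
    """
    mask = torch.ones_like(attn)
    mask[..., margin:-margin, margin:-margin] = 0.0   # 1 on the border ring only
    return (attn * mask).sum(dim=(-2, -1)).mean()     # average border mass

# Example: 2 images, 4 attention maps of size 14x14 each.
attn = torch.softmax(torch.randn(2, 4, 14 * 14), dim=-1).view(2, 4, 14, 14)
loss = boundary_penalty(attn)
```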
Funding: Supported by the National Natural Science Foundation of China (62106214), the Hebei Natural Science Foundation (D2024203008), and the Provincial Key Laboratory Performance Subsidy Project (22567612H).
Abstract: In recent years, audio pattern recognition has emerged as a key area of research, driven by its applications in human-computer interaction, robotics, and healthcare. Traditional methods, which rely heavily on handcrafted features such as Mel filters, often suffer from information loss and limited feature representation capabilities. To address these limitations, this study proposes an innovative end-to-end audio pattern recognition framework that directly processes raw audio signals, preserving the original information and extracting effective classification features. The proposed framework utilizes a dual-branch architecture: a global refinement module that retains channel and temporal details, and a multi-scale embedding module that captures high-level semantic information. Additionally, a guided fusion module integrates complementary features from both branches, ensuring a comprehensive representation of the audio data. Specifically, the multi-scale audio context embedding module is designed to effectively extract spatiotemporal dependencies, while the global refinement module aggregates multi-scale channel and temporal cues for enhanced modeling. The guided fusion module leverages these features to achieve efficient integration of complementary information, resulting in improved classification accuracy. Experimental results demonstrate the model's superior performance on multiple datasets, including ESC-50, UrbanSound8K, RAVDESS, and CREMA-D, with classification accuracies of 93.25%, 90.91%, 92.36%, and 70.50%, respectively. These results highlight the robustness and effectiveness of the proposed framework, which significantly outperforms existing approaches. By addressing critical challenges such as information loss and limited feature representation, this work provides new insights and methodologies for advancing audio classification and multimodal interaction systems.
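As a rough picture of how such a dual-branch raw-waveform model can be wired, the PyTorch sketch below uses parallel 1-D convolutions with different kernel sizes as the multi-scale branch, squeeze-and-excitation-style channel gating as the refinement branch, and a learned softmax gate as the guided fusion. All layer sizes and the class name DualBranchAudioNet are illustrative assumptions; the paper's modules are certainly more elaborate.

```python
import torch
import torch.nn as nn

class DualBranchAudioNet(nn.Module):
    """Two-branch raw-audio classifier with gated fusion (sketch)."""

    def __init__(self, num_classes: int, ch: int = 32):
        super().__init__()
        self.stem = nn.Conv1d(1, ch, kernel_size=15, stride=4, padding=7)
        # Multi-scale embedding branch: three receptive fields, concatenated.
        self.scales = nn.ModuleList(
            [nn.Conv1d(ch, ch, k, padding=k // 2) for k in (3, 7, 15)]
        )
        # Global refinement branch: channel attention on the stem features.
        self.refine = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Conv1d(ch, ch, 1), nn.Sigmoid()
        )
        self.gate = nn.Conv1d(3 * ch + ch, 2, 1)       # guided-fusion weights
        self.head = nn.Linear(ch, num_classes)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.stem(wav))                 # (B, ch, T')
        ms = torch.cat([torch.relu(c(x)) for c in self.scales], dim=1)
        rf = x * self.refine(x)                        # channel-reweighted branch
        w = torch.softmax(self.gate(torch.cat([ms, rf], dim=1)), dim=1)
        # Average the three scales so both branches share the same width.
        ms = ms.view(ms.size(0), 3, -1, ms.size(-1)).mean(dim=1)
        fused = w[:, :1] * ms + w[:, 1:] * rf          # guided fusion
        return self.head(fused.mean(dim=-1))           # temporal pooling + classify

model = DualBranchAudioNet(num_classes=50)             # e.g., ESC-50
logits = model(torch.randn(4, 1, 16000))               # 1 s of 16 kHz audio
```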
Funding: Supported by the National Natural Science Foundation of China (Nos. 62002100 and 61902237) and the Key Research and Promotion Projects of Henan Province (Nos. 232102240023, 232102210063, and 222102210040).
Abstract: Fingerprint features, as unique and stable biometric identifiers, are crucial for identity verification. However, traditional centralized methods of processing these sensitive data linked to personal identity pose significant privacy risks, potentially leading to user data leakage. Federated Learning allows multiple clients to collaboratively train and optimize models without sharing raw data, effectively addressing privacy and security concerns. However, variations in fingerprint data due to factors such as region, ethnicity, sensor quality, and environmental conditions result in significant heterogeneity across clients. This heterogeneity adversely impacts the generalization ability of the global model, limiting its performance across diverse distributions. To address these challenges, we propose an Adaptive Federated Fingerprint Recognition algorithm (AFFR) based on Federated Learning. The algorithm incorporates a generalization adjustment mechanism that evaluates the generalization gap between the local models and the global model, adaptively adjusting aggregation weights to mitigate the impact of heterogeneity caused by differences in data quality and feature characteristics. Additionally, a noise mechanism is embedded in client-side training to reduce the risk of fingerprint data leakage arising from weight disclosures during model updates. Experiments conducted on three public datasets demonstrate that AFFR significantly enhances model accuracy while ensuring robust privacy protection, showcasing its strong application potential and competitiveness in heterogeneous data environments.
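The two mechanisms combine naturally in the server's aggregation step. The PyTorch sketch below shows one way this could look: clients with a larger generalization gap receive smaller aggregation weights via a softmax, and Gaussian noise is added to uploaded weights. The function name affr_aggregate, the softmax weighting rule, and the noise scale are assumptions for illustration, not AFFR's published formulas.

```python
import copy
import torch

def affr_aggregate(global_model, client_states, gaps, noise_std=1e-3):
    """Gap-aware weighted averaging of noisy client updates (sketch).

    client_states: list of client state_dicts.
    gaps: one scalar per client, e.g., global-model accuracy minus that
    client's local accuracy; a large gap means poor generalization and
    yields a smaller weight. Noise on the uploads limits leakage.
    """
    # Softmax over negative gaps: small gap -> large aggregation weight.
    w = torch.softmax(-torch.tensor(gaps, dtype=torch.float32), dim=0)
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        stacked = torch.stack([
            s[key].float() + noise_std * torch.randn_like(s[key].float())
            for s in client_states
        ])
        # Weighted average across clients, broadcast over parameter dims.
        shape = (-1,) + (1,) * (stacked.dim() - 1)
        new_state[key] = (w.view(shape) * stacked).sum(dim=0)
    global_model.load_state_dict(new_state)
    return global_model

# Usage, with hypothetical clients c1 and c2:
# model = affr_aggregate(model, [c1.state_dict(), c2.state_dict()], [0.05, 0.20])
```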
Funding: Supported by the Beijing Natural Science Foundation (5252014) and the National Natural Science Foundation of China (62303063).
Abstract: Bird vocalizations are pivotal for ecological monitoring, providing insights into biodiversity and ecosystem health. Traditional recognition methods often neglect phase information, resulting in incomplete feature representation. In this paper, we introduce a novel approach to bird vocalization recognition (BVR) that integrates both amplitude and phase information, leading to enhanced species identification. We propose MHAResNet, a deep learning (DL) model that employs residual blocks and a multi-head attention mechanism to capture salient features from the logarithmic power (POW), instantaneous frequency (IF), and group delay (GD) representations extracted from bird vocalizations. Experiments on three bird vocalization datasets demonstrate our method's superior performance, achieving accuracy rates of 94%, 98.9%, and 87.1%, respectively. These results indicate that our approach provides a more effective representation of bird vocalizations, outperforming existing methods. This integration of phase information in BVR is innovative and significantly advances the field of automatic bird monitoring technology, offering valuable tools for ecological research and conservation efforts.
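The three input representations follow from the STFT in a standard way: POW is the log of the squared magnitude, IF is the time-derivative of the unwrapped phase, and GD is the negative frequency-derivative of the unwrapped phase. Below is a minimal NumPy/SciPy sketch of that extraction; the framing parameters and normalization are illustrative and may differ from the paper's pipeline.

```python
import numpy as np
from scipy.signal import stft

def phase_features(x, fs=22050, nperseg=512):
    """Stack POW, IF, and GD maps from one waveform (sketch)."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)       # complex (freq, time) matrix
    pow_db = np.log(np.abs(Z) ** 2 + 1e-10)         # logarithmic power (POW)
    phase = np.angle(Z)
    # Instantaneous frequency: phase difference along the time axis.
    inst_freq = np.diff(np.unwrap(phase, axis=1), axis=1, prepend=phase[:, :1])
    # Group delay: negative phase difference along the frequency axis.
    group_delay = -np.diff(np.unwrap(phase, axis=0), axis=0, prepend=phase[:1, :])
    return np.stack([pow_db, inst_freq, group_delay])  # (3, freq, time)

# 1 s of audio -> a 3-channel input for a residual/attention classifier:
feats = phase_features(np.random.randn(22050))
```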