A clustering algorithm for semi-supervised affinity propagation based on layered combination is proposed in this paper to address the flaws of existing methods. To improve accuracy, the algorithm introduces the idea of layered combination: it divides an affinity propagation clustering (APC) process evenly into several hierarchies, draws samples from the data of each hierarchy according to weight, executes semi-supervised learning through the construction of pairwise constraints and the use of submanifold label mapping, and weights and combines the clustering results of all hierarchies by combined promotion. Theoretical analysis and experimental results show that the clustering accuracy and computational complexity of the semi-supervised affinity propagation clustering algorithm based on layered combination (the SAP-LC algorithm) are greatly improved.
Clustering analysis is one of the main concerns in data mining. A common approach to the clustering process is to bring together points that are close to each other and separate points that are far from each other. Therefore, measuring the distance between sample points is crucial to the effectiveness of clustering. Filtering features by label information and measuring the distance between samples with these features is a common supervised approach to reconstructing a distance metric. However, in many application scenarios, obtaining a large number of labeled samples is very expensive. In this paper, to solve the clustering problem in scenarios with few supervised samples and high data dimensionality, a novel semi-supervised clustering algorithm is proposed that designs an improved prototype network to reconstruct the distance metric in the sample space from a small amount of pairwise supervision, such as Must-Link and Cannot-Link constraints, and then clusters the data in the new metric space. The core idea is to pull similar samples closer and push dissimilar samples further apart through an embedding mapping. Extensive experiments on both real-world and synthetic datasets show the effectiveness of the algorithm: average clustering metrics on various datasets improved by 8% compared to the comparison algorithms.
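As a rough illustration of the pairwise-supervision idea in the abstract above, the sketch below trains a small embedding network with a contrastive-style loss over Must-Link and Cannot-Link pairs. It is a minimal stand-in, not the paper's prototype network; the architecture, margin, and data are all assumed for illustration.

```python
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Maps samples into a space where Must-Link pairs end up close
    and Cannot-Link pairs end up far apart."""
    def __init__(self, in_dim, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

def pairwise_constraint_loss(za, zb, is_must_link, margin=2.0):
    # za, zb: embeddings of the two samples in each constrained pair
    d = torch.norm(za - zb, dim=1)
    ml = is_must_link * d.pow(2)                                      # pull Must-Link together
    cl = (1 - is_must_link) * torch.clamp(margin - d, min=0).pow(2)   # push Cannot-Link apart
    return (ml + cl).mean()

# toy usage: 2 Must-Link and 2 Cannot-Link pairs in 10-d space
xa, xb = torch.randn(4, 10), torch.randn(4, 10)
flags = torch.tensor([1., 1., 0., 0.])
model = EmbeddingNet(10)
loss = pairwise_constraint_loss(model(xa), model(xb), flags)
loss.backward()
```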
This paper focuses on the unsupervised detection of the Higgs boson particle using the most informative features and variables that characterize the "Higgs machine learning challenge 2014" data set. The unsupervised detection proceeds in four steps: (1) selection of the most informative features from the considered data; (2) determination of the number of clusters based on the elbow criterion (the experiments showed that the optimal number of clusters for grouping the considered data in an unsupervised manner is 2); (3) proposal of a new approach hybridizing hard and fuzzy clustering, tuned with Ant Lion Optimization (ALO); and (4) comparison with existing metaheuristic optimizations such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). Through a multi-angle analysis based on cluster validation indices, the confusion matrix, efficiency and purity rates, average cost variation, computational time, and Sammon mapping visualization, the results highlight the effectiveness of the improved Gustafson-Kessel algorithm optimized with ALO (ALOGK) and validate the proposed approach. Although the paper gives a complete clustering analysis, its novel contributions concern only Steps (1) and (3) above. The first contribution lies in the method used in Step (1) to select the most informative features and variables: the t-Statistic technique is used to rank them, a feature mapping is then applied using a Self-Organizing Map (SOM) to identify the level of correlation between them, and Particle Swarm Optimization (PSO), a metaheuristic optimization technique, is used to reduce the data set dimension. The second contribution concerns the third step, where each of the clustering algorithms K-means (KM), Global K-means (GlobalKM), Partitioning Around Medoids (PAM), Fuzzy C-means (FCM), Gustafson-Kessel (GK), and Gath-Geva (GG) is optimized and tuned with ALO.
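The elbow criterion mentioned in Step (2) can be reproduced with a few lines of scikit-learn; the sketch below uses synthetic blobs as a stand-in for the selected Higgs features (the dataset, feature count, and k range are illustrative).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# toy stand-in for the selected Higgs features
X, _ = make_blobs(n_samples=500, centers=2, n_features=5, random_state=0)

inertias = []
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)  # within-cluster sum of squares

# the elbow is where adding one more cluster stops paying off:
# the drop in inertia shrinks sharply after the optimal k (here, k = 2)
drops = -np.diff(inertias)
print(np.round(drops / drops[0], 3))
```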
With the rapid development of WLAN (Wireless Local Area Network) technology, an important goal of indoor positioning systems is to improve positioning accuracy while reducing online computation. This paper proposes a novel fingerprint positioning algorithm: semi-supervised affinity propagation clustering based on distance function constraints. We show that, by employing affinity propagation techniques, a fraction of labeled data can be used to adjust the similarity matrix of the signal space and cluster reference points with high accuracy. The semi-supervised APC combines machine learning, clustering analysis, and fingerprinting. Experimental results from data collected in a realistic indoor WLAN environment indicate that the proposed algorithm improves positioning accuracy while reducing online localization computation, compared with the widely used K-nearest-neighbor and maximum likelihood estimation algorithms.
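A minimal sketch of the core mechanism, adjusting the similarity matrix with a handful of labeled pairs before affinity propagation, is shown below using scikit-learn; the RSSI data and the boost/suppress values are assumptions, not the paper's distance-function constraints.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# RSSI fingerprints of reference points (rows) over several APs (columns)
rng = np.random.default_rng(0)
X = rng.normal(-60, 8, size=(30, 6))

S = -np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # negative squared distance

# semi-supervised adjustment (illustrative values):
S[0, 1] = S[1, 0] = 0.0       # same-room pair: maximal similarity
S[0, 2] = S[2, 0] = S.min()   # different-room pair: minimal similarity

labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(S)
print(labels[:5])
```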
In this paper, we introduce a novel Multi-scale and Auto-tuned Semi-supervised Deep Subspace Clustering (MAS-DSC) algorithm aimed at addressing the challenges of deep subspace clustering in high-dimensional real-world data, particularly in medical imaging. Traditional deep subspace clustering algorithms, which are mostly unsupervised, are limited in their ability to effectively utilize the prior knowledge inherent in medical images. Our MAS-DSC algorithm incorporates a semi-supervised learning framework that uses a small amount of labeled data to guide the clustering process, thereby enhancing the discriminative power of the feature representations. Additionally, a multi-scale feature extraction mechanism is designed to adapt to the complexity of medical imaging data, resulting in more accurate clustering. To address the difficulty of hyperparameter selection in deep subspace clustering, a Bayesian optimization algorithm adaptively tunes the hyperparameters related to subspace clustering, prior knowledge constraints, and model loss weights. Extensive experiments on standard clustering datasets, including ORL, Coil20, and Coil100, validate the effectiveness of the MAS-DSC algorithm. The results show that, with its multi-scale network structure and Bayesian hyperparameter optimization, MAS-DSC achieves excellent clustering results on these datasets. Furthermore, tests on a brain tumor dataset demonstrate the robustness of the algorithm and its ability to leverage prior knowledge for efficient feature extraction and enhanced clustering performance within a semi-supervised learning framework.
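The Bayesian hyperparameter tuning step can be sketched with an off-the-shelf optimizer such as Optuna; the parameter names, ranges, and the stand-in objective below are illustrative, not the ones used by MAS-DSC.

```python
import optuna

def objective(trial):
    # loss-weight hyperparameters of the kind MAS-DSC tunes;
    # names and ranges are illustrative, not the paper's
    w_selfexpr = trial.suggest_float("w_selfexpr", 1e-3, 10.0, log=True)
    w_prior = trial.suggest_float("w_prior", 1e-3, 10.0, log=True)
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    # stand-in for: train the clustering network with these weights and
    # return a validation score such as negative clustering accuracy
    return (w_selfexpr - 1.0) ** 2 + (w_prior - 0.1) ** 2 + lr

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```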
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely on copy-paste mixed images from labeled and unlabeled inputs discards much of the labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; and (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT) that enhances feature diversity and learns high-quality features to overcome these problems. More precisely, SADT trains the Student Network with pseudo-label-based training from Teacher Network 1 and supervised learning on labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation in low-contrast and local areas; in this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
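A plausible reading of the bi-directional copy-paste mask is sketched below in PyTorch: a rectangular region is swapped in both directions between a labeled and an unlabeled image. The box coordinates and image sizes are illustrative, and the progressive high-entropy filtering is omitted.

```python
import torch

def bidirectional_copy_paste(labeled, unlabeled, box=(32, 32, 64, 64)):
    """Swap one rectangular region in both directions, producing two
    mixed images (box coordinates here are illustrative assumptions)."""
    y0, x0, y1, x1 = box
    mask = torch.zeros_like(labeled)
    mask[..., y0:y1, x0:x1] = 1.0
    mixed_l2u = unlabeled * (1 - mask) + labeled * mask  # labeled patch into unlabeled
    mixed_u2l = labeled * (1 - mask) + unlabeled * mask  # unlabeled patch into labeled
    return mixed_l2u, mixed_u2l

lab = torch.rand(1, 1, 128, 128)    # toy labeled scan
unl = torch.rand(1, 1, 128, 128)    # toy unlabeled scan
a, b = bidirectional_copy_paste(lab, unl)
print(a.shape, b.shape)
```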
Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need to organize and interpret large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters, based on the fundamental principle that cells must differ more between clusters than within them. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as a two-dimensional matrix of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
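The protocol's core loop, hierarchical clustering followed by a statistical test of whether a candidate split is justified, can be sketched as follows; the Ward linkage and the Mann-Whitney test here are generic stand-ins for the protocol's specific choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 8)), rng.normal(4, 1, (40, 8))])

Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # candidate split into 2

# keep the split only if between-cluster differences exceed
# within-cluster differences (the protocol's guiding principle;
# the specific test used here is a simple stand-in)
a, b = X[labels == 1], X[labels == 2]
between = np.linalg.norm(a[:, None] - b[None], axis=-1).ravel()
within = np.linalg.norm(a[:, None] - a[None], axis=-1).ravel()
stat, p = mannwhitneyu(between, within, alternative="greater")
print(f"p = {p:.3g}: {'accept split' if p < 0.05 else 'stop subdividing'}")
```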
Customer segmentation according to load-shape profiles using smart meter data is an increasingly important application, vital to the planning and operation of energy systems and to enabling citizens' participation in the energy transition. This study proposes an innovative multi-step clustering procedure to segment customers based on load-shape patterns at the daily and intra-daily time horizons. Smart meter data are split into daily and hourly normalized time series to assess monthly, weekly, daily, and hourly seasonality patterns separately. The dimensionality reduction implicit in this splitting allows a direct approach to clustering raw daily energy time series data. The intraday clustering procedure sequentially identifies representative hourly day-unit profiles for each customer and for the entire population. For the first time, a step-function approach is applied to reduce time series dimensionality. Customer attributes embedded in surveys are employed to build external clustering validation metrics using Cramer's V correlation factors and to identify statistically significant determinants of load shape in energy usage. In addition, a time series feature engineering approach is used to extract 16 relevant demand flexibility indicators that characterize customers and the corresponding clusters along four different axes: available Energy (E), Temporal patterns (T), Consistency (C), and Variability (V). The methodology is implemented on a real-world electricity consumption dataset of 325 Small and Medium-sized Enterprise (SME) customers, identifying 4 daily and 6 hourly easy-to-interpret, well-defined clusters. The application of the methodology includes selecting key parameters via grid search and a thorough comparison of clustering distances and methods to ensure the robustness of the results. Further research could test the scalability of the methodology to larger datasets from various customer segments (households and large commercial customers) and locations with different weather and socioeconomic conditions.
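The step-function dimensionality reduction can be illustrated in a few lines: each 24-point hourly day-unit profile is collapsed to one mean level per time band. The band edges below are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def step_profile(hourly, edges=(0, 7, 10, 17, 21, 24)):
    """Reduce a 24-point hourly day-unit profile to a step function:
    one mean level per time band (band edges are illustrative)."""
    return np.array([hourly[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

# toy smart-meter day: normalized hourly consumption
day = np.abs(np.sin(np.linspace(0, np.pi, 24))) + 0.1
print(step_profile(day).round(3))  # 5 levels instead of 24 points
```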
Domaining is a crucial process in geostatistics, particularly when significant spatial variations are observed within a site, as these variations can strongly affect the outcomes of spatial modeling. This study investigates the application of hard and fuzzy clustering algorithms for domain delineation, using geological and geochemical data from two exploration campaigns at the eastern Kahang deposit in central Iran. The dataset includes geological layers (lithology, alteration, and mineral zones), geochemical layers (Cu, Mo, Ag, and Au grades), and borehole coordinates. Six clustering algorithms (K-means, hierarchical, affinity propagation, self-organizing map (SOM), fuzzy C-means, and Gustafson-Kessel) were applied to determine the optimal number of clusters, which ranged from 3 to 4. Based on the evaluation of various hard and fuzzy cluster validity indices, the fuzziness and weighting parameters were found to range from 1.1 to 1.3 and 0.1 to 0.3, respectively. Directional variograms were computed to assess spatial anisotropy, and the anisotropy ellipsoid for each domain was defined to identify the model with the highest level of anisotropic discrimination among the domains. The SOM algorithm, which incorporated both qualitative and quantitative data, produced the best model, resulting in the identification of three distinct domains. These findings underscore the effectiveness of combining clustering techniques with variogram analysis for accurate domain delineation in geostatistical modeling.
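For reference, a plain NumPy fuzzy C-means with a fuzziness parameter m in the range the study identified (1.1 to 1.3) looks roughly as follows; the data and cluster count are toy values.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=1.2, iters=100, seed=0):
    """Plain fuzzy C-means; m is the fuzziness parameter
    (the study found m in the 1.1-1.3 range to work well)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # membership matrix
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1) + 1e-12
        inv = d ** (-2 / (m - 1))                     # standard FCM update
        U = inv / inv.sum(1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(off, 1.0, (50, 3)) for off in (0.0, 4.0, 8.0)])
centers, U = fuzzy_c_means(X, c=3, m=1.2)
print(U.max(1).mean())   # average peak membership: near 1 means crisp clusters
```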
Existing multi-view deep subspace clustering methods aim to learn a unified representation from multi-view data, yet the learned representation struggles to maintain the underlying structure hidden in the original samples, especially the high-order neighbor relationships between samples. To overcome these challenges, this paper proposes a novel multi-order neighborhood fusion based multi-view deep subspace clustering model. We integrate the multi-order proximity graph structures of different views into the self-expressive layer through a multi-order neighborhood fusion module. By this design, the multi-order Laplacian matrix supervises the learning of the view-consistent self-representation affinity matrix, yielding an optimal global affinity matrix in which each connected node belongs to one cluster. In addition, a discriminative constraint between views is designed to further improve clustering performance. Experiments on six public datasets demonstrate that the method performs better than other advanced multi-view clustering methods. The code is available at https://github.com/songzuolong/MNF-MDSC (accessed on 25 December 2024).
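The multi-order proximity fusion can be sketched directly: symmetrically normalize the adjacency matrix, take its powers as higher-order neighborhoods, and fuse them with weights. The orders and weights below are illustrative, and the model's learned self-expressive layer is not reproduced.

```python
import numpy as np

def multi_order_proximity(A, orders=(1, 2, 3), weights=(0.5, 0.3, 0.2)):
    """Fuse 1st-, 2nd-, and 3rd-order neighborhood structure of a graph
    into one affinity matrix (orders and weights are illustrative)."""
    deg = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt            # symmetric normalization
    fused = sum(w * np.linalg.matrix_power(A_hat, k)
                for k, w in zip(orders, weights))
    L = np.eye(len(A)) - fused                     # multi-order Laplacian
    return fused, L

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T                     # undirected, no self-loops
fused, L = multi_order_proximity(A)
print(np.round(fused, 2))
```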
The characterization and clustering of rock discontinuity sets is a crucial and challenging task in rock mechanics and geotechnical engineering. Over the past few decades, the clustering of discontinuity sets has undergone rapid and remarkable development, yet no literature has summarized these achievements; this paper attempts to elaborate on the current status and prospects of the field. Specifically, this review discusses the development of clustering methods for discontinuity sets and the state-of-the-art relevant algorithms. First, we introduce the importance of discontinuity clustering analysis and review comprehensive approaches to characterizing discontinuity data. A bibliometric analysis is then conducted to clarify the current status and development characteristics of the clustering of discontinuity sets. Methods for the clustering analysis of rock discontinuities are reviewed in terms of single- and multi-parameter clustering methods. Single-parameter methods can be classified into empirical judgment methods, dynamic clustering methods, relative static clustering methods, and static clustering methods, reflecting the continuous optimization and improvement of clustering algorithms. Moreover, the paper compares the current mainstream single-parameter clustering methods with multi-parameter clustering methods. It is emphasized that current single-parameter clustering methods have reached their performance limits, with little room for improvement, and that the study of multi-parameter clustering methods should be extended. Finally, several suggestions are offered for future research on the clustering of discontinuity sets.
Accurate prediction of the remaining useful life (RUL) is crucial for the design and management of lithium-ion batteries. Although various machine learning models offer promising predictions, one critical but often overlooked challenge is their demand for considerable run-to-failure data for training. Collecting such training data requires prohibitive testing effort, as run-to-failure tests can last for years. Here, we propose a semi-supervised representation learning method that enhances prediction accuracy by learning from data without RUL labels. Our approach builds on a deep neural network comprising an encoder and three decoder heads that extract time-dependent representation features from short-term battery operating data, regardless of the existence of RUL labels. The approach is validated on three datasets collected from 34 batteries operating under various conditions, encompassing over 19,900 charge and discharge cycles. Our method achieves a root mean squared error (RMSE) within 25 cycles even when only 1/50 of the training dataset is labelled, a reduction of 48% compared to the conventional approach. We also demonstrate the method's robustness with varying amounts of labelled data and different weights assigned to the three decoder heads. The projection of the extracted features into a low-dimensional space reveals that our method effectively learns degradation features from unlabelled data. Our approach highlights the promise of semi-supervised learning for reducing the data demand of reliability monitoring for energy devices.
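The encoder-plus-three-decoder-heads design can be sketched in PyTorch as below; the layer sizes and the roles assigned to the heads are assumptions for illustration, since the abstract does not spell them out.

```python
import torch
import torch.nn as nn

class MultiHeadRULNet(nn.Module):
    """Shared encoder with three decoder heads, in the spirit of the
    paper's architecture (layer sizes and head roles are assumptions)."""
    def __init__(self, in_dim=32, feat_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))
        self.head_recon = nn.Linear(feat_dim, in_dim)  # unsupervised: reconstruct input
        self.head_soh = nn.Linear(feat_dim, 1)         # auxiliary: capacity / health
        self.head_rul = nn.Linear(feat_dim, 1)         # supervised where labels exist

    def forward(self, x):
        z = self.encoder(x)
        return self.head_recon(z), self.head_soh(z), self.head_rul(z)

x = torch.randn(8, 32)                    # short-term operating features
recon, soh, rul = MultiHeadRULNet()(x)
# unlabelled batches can still train the encoder through the
# reconstruction head; the RUL loss is applied only to labelled rows
print(recon.shape, soh.shape, rul.shape)
```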
Large amounts of labeled data are usually needed to train deep neural networks in medical image studies, particularly for medical image classification. However, in semi-supervised medical image analysis, labeled data are very scarce due to patient privacy concerns, and obtaining high-quality labeled images is exceedingly challenging for researchers because it involves manual annotation and clinical understanding. In addition, skin datasets are highly suitable for medical image classification studies because of the inter-class relationships and inter-class similarities of skin lesions. In this paper, we propose a model called Coalition Sample Relation Consistency (CSRC), a consistency-based method that leverages Canonical Correlation Analysis (CCA) to capture the intrinsic relationships between samples. Whereas traditional consistency-based models focus only on the consistency of predictions, we additionally explore the similarity between features using CCA. We enforce feature relation consistency on top of traditional models, encouraging the model to learn more meaningful information from unlabeled data. Finally, considering that cross-entropy loss is not well suited as the supervised loss for imbalanced datasets (i.e., ISIC 2017 and ISIC 2018), we improve the supervised loss to achieve better classification accuracy. Our study shows that this model performs better than many semi-supervised methods.
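A simplified stand-in for the feature relation consistency idea, using cosine similarity between samples instead of the paper's CCA, is sketched below: two views of the same batch are pushed to agree on their sample-relation matrices.

```python
import torch
import torch.nn.functional as F

def relation_consistency(f_student, f_teacher):
    """Penalize disagreement between the sample-relation matrices of two
    views of the same batch: a simplified stand-in for the CCA-based
    relation consistency described in the abstract."""
    fs = F.normalize(f_student, dim=1)
    ft = F.normalize(f_teacher, dim=1)
    Rs = fs @ fs.t()         # cosine similarity between samples
    Rt = ft @ ft.t()
    return F.mse_loss(Rs, Rt)

feat_a = torch.randn(16, 128)                   # features under weak augmentation
feat_b = feat_a + 0.05 * torch.randn(16, 128)   # features under strong augmentation
print(relation_consistency(feat_a, feat_b).item())
```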
In the realm of medical image segmentation, particularly cardiac magnetic resonance imaging (MRI), achieving robust performance with limited annotated data is a significant challenge, and performance often degrades in testing scenarios from unknown domains. To address this problem, this paper proposes a novel semi-supervised approach for cardiac magnetic resonance image segmentation, aiming to enhance predictive capability and domain generalization (DG). The paper establishes an MT-like model utilizing pseudo-labeling and consistency regularization from semi-supervised learning, and integrates uncertainty estimation to improve the accuracy of the pseudo-labels. Additionally, to tackle the challenge of domain generalization, a data manipulation strategy is introduced that extracts spatial and content-related information from images across different domains, enriching the dataset with a multi-domain perspective. The method is meticulously evaluated on the publicly available cardiac magnetic resonance imaging dataset M&Ms, validating its effectiveness. Comparative analyses against various methods highlight the outstanding performance of the approach, demonstrating its capability to segment cardiac magnetic resonance images in previously unseen domains, even with limited annotated data.
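Two building blocks of such an MT-like model, the exponential-moving-average teacher update and an uncertainty gate on pseudo-labels, can be sketched as follows; the Monte-Carlo-dropout uncertainty and its threshold are one common realization, assumed here rather than taken from the paper.

```python
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(16, 32), nn.Dropout(0.5), nn.Linear(32, 4))
teacher = nn.Sequential(nn.Linear(16, 32), nn.Dropout(0.5), nn.Linear(32, 4))
teacher.load_state_dict(student.state_dict())   # start from the same weights

@torch.no_grad()
def ema_update(alpha=0.99):
    # teacher weights follow the student as an exponential moving average
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1 - alpha)

@torch.no_grad()
def confident_pseudo_labels(x, n_passes=8, thresh=0.05):
    """Monte-Carlo-dropout uncertainty gate: keep a pseudo-label only
    when repeated stochastic passes agree (one common realization of
    the uncertainty estimation named in the abstract; details assumed)."""
    teacher.train()                               # keep dropout active
    probs = torch.stack([teacher(x).softmax(-1) for _ in range(n_passes)])
    mean, spread = probs.mean(0), probs.std(0).mean(-1)
    return mean.argmax(-1), spread < thresh       # labels + trust mask

x = torch.randn(8, 16)
labels, keep = confident_pseudo_labels(x)
ema_update()
print(labels.tolist(), int(keep.sum()), "trusted")
```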
Reliable Cluster Head (CH) selection-based routing protocols are necessary to increase packet transmission efficiency with optimal path discovery that does not degrade transmission reliability. In this paper, the Hybrid Golden Jackal and Improved Whale Optimization Algorithm (HGJIWOA) is proposed as an effective and optimal routing protocol that guarantees efficient routing of data packets over the paths established between the CHs and the movable sink. HGJIWOA includes a Dynamic Lens-Imaging Learning Strategy and novel update rules for determining the reliable route essential for broadcasting data packets, attained through fitness-measure-estimation-based CH selection. CH selection using the Golden Jackal Optimization Algorithm (GJOA) depends entirely on the factors of maintainability, consistency, trust, delay, and energy. The adopted GJOA algorithm plays a dominant role in determining the optimal routing path based on reduced delay and minimal distance. The Improved Whale Optimization Algorithm (IWOA) is further utilized to forward data from the chosen CHs to the BS via an optimized route based on energy and distance. The protocol also includes a reliable route maintenance process that aids in deciding whether data should be transmitted over the selected route or re-routed. Simulation of the proposed HGJIWOA mechanism with different numbers of sensor nodes confirmed an improved mean throughput of 18.21%, sustained residual energy of 19.64%, and a minimized end-to-end delay of 21.82%, better than competing CH selection approaches.
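The fitness-measure-based CH election can be illustrated schematically: score each candidate over the five named factors and pick the best. The weights and scalings below are illustrative, not GJOA's actual fitness function.

```python
def ch_fitness(node, w=(0.3, 0.2, 0.2, 0.15, 0.15)):
    """Weighted fitness for cluster-head election over the five factors
    named in the abstract; the weights and scalings are illustrative."""
    maintainability, consistency, trust, delay, energy = node
    # delay is a cost, so it enters inverted; the others are benefits
    return (w[0] * maintainability + w[1] * consistency +
            w[2] * trust + w[3] * (1.0 - delay) + w[4] * energy)

candidates = {"n1": (0.8, 0.7, 0.9, 0.2, 0.6),
              "n2": (0.6, 0.9, 0.7, 0.4, 0.8)}
best = max(candidates, key=lambda k: ch_fitness(candidates[k]))
print(best)
```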
Music recommendation systems are essential because the vast amount of music available on streaming platforms can overwhelm users trying to find new tracks that match their preferences. These systems analyze users' emotional responses, listening habits, and personal preferences to provide personalized suggestions. A significant challenge they face is the "cold start" problem, where new users have no past interactions to guide recommendations. To improve user experience, these systems aim to recommend music effectively even to such users by considering their listening behavior and music popularity. This paper introduces a novel music recommendation system that combines order clustering and a convolutional neural network, utilizing user comments and rankings as input. The system first organizes users into clusters based on semantic similarity and then uses their rating similarities as input to the convolutional neural network, which predicts ratings for music the users have not reviewed. The system also analyzes user listening behavior and music popularity; popularity helps address cold-start users as well. Finally, the proposed method recommends unreviewed music based on predicted high rankings and popularity, taking each user's listening habits into account: it first selects popular unreviewed music that the model predicts to have the highest ratings for each user, then prioritizes the most popular tracks among these, defined by metrics such as frequency of listening across users, and aligns the number of recommended tracks with each user's typical listening rate. The experimental findings demonstrate that the new method outperformed other classification techniques and prior recommendation systems, yielding a mean absolute error (MAE) and root mean square error (RMSE) of approximately 0.0017, a hit rate of 82.45%, an average normalized discounted cumulative gain (nDCG) of 82.3%, and a prediction accuracy for new ratings of 99.388%.
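The final selection rule, rank unreviewed tracks by predicted rating, break ties by popularity, and cap the list at the user's listening rate, reads schematically as follows (the data structures and values are toy assumptions):

```python
def recommend(user_id, pred, popularity, listened, rate=10):
    """Rank unreviewed tracks by predicted rating, break ties by
    popularity, and cap the list at the user's typical listening rate
    (a schematic version of the selection rule in the abstract)."""
    unseen = [t for t in pred[user_id] if t not in listened[user_id]]
    ranked = sorted(unseen,
                    key=lambda t: (pred[user_id][t], popularity[t]),
                    reverse=True)
    return ranked[:rate]

pred = {"u1": {"a": 4.5, "b": 4.5, "c": 3.0}}      # CNN-predicted ratings
popularity = {"a": 120, "b": 340, "c": 50}         # listens across users
listened = {"u1": {"c"}}
print(recommend("u1", pred, popularity, listened, rate=2))  # ['b', 'a']
```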
To solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing algorithm based on the firefly and pigeon-inspired algorithm (FF-PIA) is proposed to optimize the data transmission path. After the optimal number of cluster head nodes (CHs) is obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iterations to balance the contradiction between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring that the algorithm can jump out of local optima. For WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset, which show that the DCRNN model-driven data reconstruction algorithm improves both reconstruction accuracy and reconstruction time performance. Co-simulation of FF-PIA and DCRNN clustering routing reveals that the proposed algorithm can effectively extend the network lifetime and reduce data transmission delay.
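The Lévy flight mechanism is usually realized with Mantegna's algorithm, sketched below; treat this as the generic construction rather than FF-PIA's exact parameterization.

```python
import numpy as np
from math import gamma

def levy_step(dim, beta=1.5, rng=np.random.default_rng(0)):
    """Mantegna's algorithm for Levy-distributed steps, the usual way
    Levy flight is realized in firefly-style metaheuristics."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed: mostly small, rarely huge

pos = np.zeros(3)
pos += 0.1 * levy_step(3)                # one exploration move of a candidate solution
print(pos)
```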
To guarantee safe and efficient tunneling of a tunnel boring machine (TBM), rapid and accurate judgment of the rock mass condition is essential. Based on fuzzy C-means clustering, this paper proposes a grouped machine learning method for predicting rock mass parameters. An elaborate data set on field rock mass was collected and matched to field TBM tunneling data. Target stratum samples are divided into several clusters by fuzzy C-means clustering, and multiple submodels are trained on the samples in different clusters, with pretreated TBM tunneling data as input and rock mass parameter data as output. Each testing sample or newly encountered tunneling condition can then be predicted by the submodels, weighted by the sample's membership degree to each cluster. The proposed method was realized with 100 training samples and verified with 30 testing samples collected from the C1 section of the Pearl Delta water resources allocation project. The average percentage errors of uniaxial compressive strength and joint frequency (Jf) for the 30 testing samples predicted by a pure back-propagation (BP) neural network are 13.62% and 12.38%, while those predicted by the BP neural network combined with fuzzy C-means are 7.66% and 6.40%, respectively. In addition, combining fuzzy C-means clustering also improves the prediction accuracies of support vector regression and random forest to different degrees, which demonstrates that fuzzy C-means clustering helps improve the prediction accuracy of machine learning and thus has good applicability. Accordingly, the proposed method is valuable for predicting rock mass parameters during TBM tunneling.
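The membership-weighted grouped prediction can be sketched in a few lines: each cluster's submodel contributes in proportion to the sample's fuzzy membership degree. The submodels below are trivial callables standing in for the trained BP/SVR/RF regressors.

```python
import numpy as np

def grouped_predict(x, memberships, submodels):
    """Weight each cluster's submodel by the sample's fuzzy membership
    degree, as in the paper's grouped prediction scheme (submodels here
    are arbitrary callables; the real ones are BP/SVR/RF regressors)."""
    preds = np.array([m(x) for m in submodels])        # one prediction per cluster
    return np.dot(memberships, preds) / memberships.sum()

# toy: 3 cluster submodels predicting uniaxial compressive strength
submodels = [lambda x: 80.0, lambda x: 120.0, lambda x: 150.0]
memberships = np.array([0.7, 0.2, 0.1])               # from fuzzy C-means
print(grouped_predict(None, memberships, submodels))   # ≈ 95 MPa
```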
Most existing semi-supervised clustering algorithms are not designed to handle high-dimensional data, while semi-supervised dimensionality reduction methods do not necessarily improve clustering performance because the inherent relationship between subspace selection and clustering is ignored. To mitigate these problems, we present a semi-supervised clustering algorithm using adaptive distance metric learning (SCADM), which performs semi-supervised clustering and distance metric learning simultaneously. SCADM uses the clustering results to learn a distance metric and then projects the data onto a low-dimensional space where the separability of the data is maximized. Experimental results on real-world data sets show that the proposed method can effectively deal with high-dimensional data and provides appealing clustering performance.
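The alternation between clustering and learning a projection can be sketched with scikit-learn; note that LDA is used below as a rough stand-in for SCADM's adaptive distance metric learning, so this shows the shape of the loop rather than the algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# alternate clustering with a supervised projection learned from the
# current cluster assignments (LDA is a stand-in for the learned metric)
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(150, 2)) + rng.integers(0, 3, (150, 1)) * 4,
               rng.normal(size=(150, 48))])           # 2 informative + 48 noise dims

Z = X
for _ in range(3):
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
    Z = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, labels)
print(Z.shape)   # data projected to a 2-d space where the clusters separate
```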
The influence of non-Independent Identically Distributed (non-IID) data on Federated Learning (FL) has been a serious concern. Clustered Federated Learning (CFL) is an emerging approach for reducing the impact of non-IID data that clusters clients by similarity computed from relevant metrics. Unfortunately, existing CFL methods pursue only accuracy improvements while ignoring the convergence rate; moreover, the client selection strategy affects the clustering results, and traditional semi-supervised learning changes the distribution of data on clients, resulting in higher local costs and undesirable performance. In this paper, we propose a novel CFL method named ASCFL, which selects the clients that participate in training and can dynamically adjust the balance between accuracy and convergence speed on datasets consisting of labeled and unlabeled data. To deal with unlabeled data, the label prediction strategy predicts labels using encoders. The client selection strategy improves accuracy and reduces overhead by selecting clients with higher losses to participate in the current round. Furthermore, the similarity-based clustering strategy uses a new indicator to measure the similarity between clients. Experimental results show that ASCFL has clear advantages in model accuracy and convergence speed over three state-of-the-art methods on two popular datasets.
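A generic version of the similarity-based clustering step, grouping clients by cosine similarity of their model updates, is sketched below; ASCFL's own indicator differs in its details, so this is only a schematic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_clients(updates, n_clusters=3):
    """Group clients by cosine similarity of their model updates --
    one common CFL similarity indicator; ASCFL's own metric differs,
    so treat this as a generic sketch."""
    U = np.stack(updates)
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    dist = 1.0 - U @ U.T                            # cosine distance matrix
    Z = linkage(dist[np.triu_indices(len(U), 1)],   # condensed form for scipy
                method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

rng = np.random.default_rng(0)
updates = [rng.normal(loc=i // 4, size=20) for i in range(12)]  # 3 latent groups
print(cluster_clients(updates))
```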
基金the Science and Technology Research Program of Zhejiang Province,China(No.2011C21036)Projects in Science and Technology of Ningbo Municipal,China(No.2012B82003)+1 种基金Shanghai Natural Science Foundation,China(No.10ZR1400100)the National Undergraduate Training Programs for Innovation and Entrepreneurship,China(No.201410876011)
文摘A clustering algorithm for semi-supervised affinity propagation based on layered combination is proposed in this paper in light of existing flaws. To improve accuracy of the algorithm,it introduces the idea of layered combination, divides an affinity propagation clustering( APC) process into several hierarchies evenly,draws samples from data of each hierarchy according to weight,and executes semi-supervised learning through construction of pairwise constraints and use of submanifold label mapping,weighting and combining clustering results of all hierarchies by combined promotion. It is shown by theoretical analysis and experimental result that clustering accuracy and computation complexity of the semi-supervised affinity propagation clustering algorithm based on layered combination( SAP-LC algorithm) have been greatly improved.
文摘Clustering analysis is one of the main concerns in data mining.A common approach to the clustering process is to bring together points that are close to each other and separate points that are away from each other.Therefore,measuring the distance between sample points is crucial to the effectiveness of clustering.Filtering features by label information and mea-suring the distance between samples by these features is a common supervised learning method to reconstruct distance metric.However,in many application scenarios,it is very expensive to obtain a large number of labeled samples.In this paper,to solve the clustering problem in the few supervised sample and high data dimensionality scenarios,a novel semi-supervised clustering algorithm is proposed by designing an improved prototype network that attempts to reconstruct the distance metric in the sample space with a small amount of pairwise supervised information,such as Must-Link and Cannot-Link,and then cluster the data in the new metric space.The core idea is to make the similar ones closer and the dissimilar ones further away through embedding mapping.Extensive experiments on both real-world and synthetic datasets show the effectiveness of this algorithm.Average clustering metrics on various datasets improved by 8%compared to the comparison algorithm.
文摘This paper focuses on the unsupervised detection of the Higgs boson particle using the most informative features and variables which characterize the“Higgs machine learning challenge 2014”data set.This unsupervised detection goes in this paper analysis through 4 steps:(1)selection of the most informative features from the considered data;(2)definition of the number of clusters based on the elbow criterion.The experimental results showed that the optimal number of clusters that group the considered data in an unsupervised manner corresponds to 2 clusters;(3)proposition of a new approach for hybridization of both hard and fuzzy clustering tuned with Ant Lion Optimization(ALO);(4)comparison with some existing metaheuristic optimizations such as Genetic Algorithm(GA)and Particle Swarm Optimization(PSO).By employing a multi-angle analysis based on the cluster validation indices,the confusion matrix,the efficiencies and purities rates,the average cost variation,the computational time and the Sammon mapping visualization,the results highlight the effectiveness of the improved Gustafson-Kessel algorithm optimized withALO(ALOGK)to validate the proposed approach.Even if the paper gives a complete clustering analysis,its novel contribution concerns only the Steps(1)and(3)considered above.The first contribution lies in the method used for Step(1)to select the most informative features and variables.We used the t-Statistic technique to rank them.Afterwards,a feature mapping is applied using Self-Organizing Map(SOM)to identify the level of correlation between them.Then,Particle Swarm Optimization(PSO),a metaheuristic optimization technique,is used to reduce the data set dimension.The second contribution of thiswork concern the third step,where each one of the clustering algorithms as K-means(KM),Global K-means(GlobalKM),Partitioning AroundMedoids(PAM),Fuzzy C-means(FCM),Gustafson-Kessel(GK)and Gath-Geva(GG)is optimized and tuned with ALO.
基金Sponsored by the National Natural Science Foundation of China(Grant No.61101122 and 61071105)
文摘With the rapid development of WLAN( Wireless Local Area Network) technology,an important target of indoor positioning systems is to improve the positioning accuracy while reducing the online computation.In this paper,it proposes a novel fingerprint positioning algorithm known as semi-supervised affinity propagation clustering based on distance function constraints. We show that by employing affinity propagation techniques,it is able to use a fractional labeled data to adjust similarity matrix of signal space to cluster reference points with high accuracy. The semi-supervised APC uses a combination of machine learning,clustering analysis and fingerprinting algorithm. By collecting data and testing our algorithm in a realistic indoor WLAN environment,the experimental results indicate that the proposed algorithm can improve positioning accuracy while reduce the online localization computation,as compared with the widely used K nearest neighbor and maximum likelihood estimation algorithms.
基金supported in part by the National Natural Science Foundation of China under Grant 62171203in part by the Jiangsu Province“333 Project”High-Level Talent Cultivation Subsidized Project+2 种基金in part by the SuzhouKey Supporting Subjects for Health Informatics under Grant SZFCXK202147in part by the Changshu Science and Technology Program under Grants CS202015 and CS202246in part by Changshu Key Laboratory of Medical Artificial Intelligence and Big Data under Grants CYZ202301 and CS202314.
文摘In this paper,we introduce a novel Multi-scale and Auto-tuned Semi-supervised Deep Subspace Clustering(MAS-DSC)algorithm,aimed at addressing the challenges of deep subspace clustering in high-dimensional real-world data,particularly in the field of medical imaging.Traditional deep subspace clustering algorithms,which are mostly unsupervised,are limited in their ability to effectively utilize the inherent prior knowledge in medical images.Our MAS-DSC algorithm incorporates a semi-supervised learning framework that uses a small amount of labeled data to guide the clustering process,thereby enhancing the discriminative power of the feature representations.Additionally,the multi-scale feature extraction mechanism is designed to adapt to the complexity of medical imaging data,resulting in more accurate clustering performance.To address the difficulty of hyperparameter selection in deep subspace clustering,this paper employs a Bayesian optimization algorithm for adaptive tuning of hyperparameters related to subspace clustering,prior knowledge constraints,and model loss weights.Extensive experiments on standard clustering datasets,including ORL,Coil20,and Coil100,validate the effectiveness of the MAS-DSC algorithm.The results show that with its multi-scale network structure and Bayesian hyperparameter optimization,MAS-DSC achieves excellent clustering results on these datasets.Furthermore,tests on a brain tumor dataset demonstrate the robustness of the algorithm and its ability to leverage prior knowledge for efficient feature extraction and enhanced clustering performance within a semi-supervised learning framework.
基金supported by the Natural Science Foundation of China(No.41804112,author:Chengyun Song).
文摘Existing semi-supervisedmedical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch.However,current copy-paste methods have three limitations:(1)training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information;(2)low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data;(3)the segmentation performance in low-contrast and local regions is less than optimal.We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy(SADT),which enhances feature diversity and learns high-quality features to overcome these problems.To be more precise,SADT trains the Student Network by using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data,which prevents the loss of rare labeled data.We introduce a bi-directional copy-pastemask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision.For the mixed images,Deep-Shallow Spatial Contrastive Learning(DSSCL)is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capabilities in low-contrast and local areas.In this procedure,the features retrieved by the Student Network are subjected to a random feature perturbation technique.On two openly available datasets,extensive trials show that our proposed SADT performs much better than the state-ofthe-art semi-supervised medical segmentation techniques.Using only 10%of the labeled data for training,SADT was able to acquire a Dice score of 90.10%on the ACDC(Automatic Cardiac Diagnosis Challenge)dataset.
基金supported in part by NIH grants R01NS39600,U01MH114829RF1MH128693(to GAA)。
文摘Many fields,such as neuroscience,are experiencing the vast prolife ration of cellular data,underscoring the need fo r organizing and interpreting large datasets.A popular approach partitions data into manageable subsets via hierarchical clustering,but objective methods to determine the appropriate classification granularity are missing.We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters.Here we present the corresponding protocol to classify cellular datasets by combining datadriven unsupervised hierarchical clustering with statistical testing.These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values,including molecula r,physiological,and anatomical datasets.We demonstrate the protocol using cellular data from the Janelia MouseLight project to chara cterize morphological aspects of neurons.
基金supported by the Spanish Ministry of Science and Innovation under Projects PID2022-137680OB-C32 and PID2022-139187OB-I00.
文摘Customer segmentation according to load-shape profiles using smart meter data is an increasingly important application to vital the planning and operation of energy systems and to enable citizens’participation in the energy transition.This study proposes an innovative multi-step clustering procedure to segment customers based on load-shape patterns at the daily and intra-daily time horizons.Smart meter data is split between daily and hourly normalized time series to assess monthly,weekly,daily,and hourly seasonality patterns separately.The dimensionality reduction implicit in the splitting allows a direct approach to clustering raw daily energy time series data.The intraday clustering procedure sequentially identifies representative hourly day-unit profiles for each customer and the entire population.For the first time,a step function approach is applied to reduce time series dimensionality.Customer attributes embedded in surveys are employed to build external clustering validation metrics using Cramer’s V correlation factors and to identify statistically significant determinants of load-shape in energy usage.In addition,a time series features engineering approach is used to extract 16 relevant demand flexibility indicators that characterize customers and corresponding clusters along four different axes:available Energy(E),Temporal patterns(T),Consistency(C),and Variability(V).The methodology is implemented on a real-world electricity consumption dataset of 325 Small and Medium-sized Enterprise(SME)customers,identifying 4 daily and 6 hourly easy-to-interpret,well-defined clusters.The application of the methodology includes selecting key parameters via grid search and a thorough comparison of clustering distances and methods to ensure the robustness of the results.Further research can test the scalability of the methodology to larger datasets from various customer segments(households and large commercial)and locations with different weather and socioeconomic conditions.
文摘Domaining is a crucial process in geostatistics, particularly when significant spatial variations are observed within a site, as these variations can significantly affect the outcomes of spatial modeling. This study investigates the application of hard and fuzzy clustering algorithms for domain delineation, using geological and geochemical data from two exploration campaigns at the eastern Kahang deposit in central Iran. The dataset includes geological layers (lithology, alteration, and mineral zones), geochemical layers (Cu, Mo, Ag, and Au grades), and borehole coordinates. Six clustering algorithms—K-means, hierarchical, affinity propagation, self-organizing map (SOM), fuzzy C-means, and Gustafson-Kessel—were applied to determine the optimal number of clusters, which ranged from 3 to 4. The fuzziness and weighting parameters were found to range from 1.1 to 1.3 and 0.1 to 0.3, respectively, based on the evaluation of various hard and fuzzy cluster validity indices. Directional variograms were computed to assess spatial anisotropy, and the anisotropy ellipsoid for each domain was defined to identify the model with the highest level of anisotropic discrimination among the domains. The SOM algorithm, which incorporated both qualitative and quantitative data, produced the best model, resulting in the identification of three distinct domains. These findings underscore the effectiveness of combining clustering techniques with variogram analysis for accurate domain delineation in geostatistical modeling.
基金supported by the National Key R&D Program of China(2023YFC3304600).
文摘Existing multi-view deep subspace clustering methods aim to learn a unified representation from multi-view data,while the learned representation is difficult to maintain the underlying structure hidden in the origin samples,especially the high-order neighbor relationship between samples.To overcome the above challenges,this paper proposes a novel multi-order neighborhood fusion based multi-view deep subspace clustering model.We creatively integrate the multi-order proximity graph structures of different views into the self-expressive layer by a multi-order neighborhood fusion module.By this design,the multi-order Laplacian matrix supervises the learning of the view-consistent self-representation affinity matrix;then,we can obtain an optimal global affinity matrix where each connected node belongs to one cluster.In addition,the discriminative constraint between views is designed to further improve the clustering performance.A range of experiments on six public datasets demonstrates that the method performs better than other advanced multi-view clustering methods.The code is available at https://github.com/songzuolong/MNF-MDSC(accessed on 25 December 2024).
基金funding support from the National Natural Science Foundation of China(Grant No.42007269)the Young Talent Fund of Xi'an Association for Science and Technology(Grant No.959202313094)the Fundamental Research Funds for the Central Universities,CHD(Grant No.300102263401).
文摘The characterization and clustering of rock discontinuity sets are a crucial and challenging task in rock mechanics and geotechnical engineering.Over the past few decades,the clustering of discontinuity sets has undergone rapid and remarkable development.However,there is no relevant literature summarizing these achievements,and this paper attempts to elaborate on the current status and prospects in this field.Specifically,this review aims to discuss the development process of clustering methods for discontinuity sets and the state-of-the-art relevant algorithms.First,we introduce the importance of discontinuity clustering analysis and follow the comprehensive characterization approaches of discontinuity data.A bibliometric analysis is subsequently conducted to clarify the current status and development characteristics of the clustering of discontinuity sets.The methods for the clustering analysis of rock discontinuities are reviewed in terms of single-and multi-parameter clustering methods.Single-parameter methods can be classified into empirical judgment methods,dynamic clustering methods,relative static clustering methods,and static clustering methods,reflecting the continuous optimization and improvement of clustering algorithms.Moreover,this paper compares the current mainstream of single-parameter clustering methods with multi-parameter clustering methods.It is emphasized that the current single-parameter clustering methods have reached their performance limits,with little room for improvement,and that there is a need to extend the study of multi-parameter clustering methods.Finally,several suggestions are offered for future research on the clustering of discontinuity sets.
基金supported by the National Natural Science Foundation of China(No.52207229)the Key Research and Development Program of Ningxia Hui Autonomous Region of China(No.2024BEE02003)+1 种基金the financial support from the AEGiS Research Grant 2024,University of Wollongong(No.R6254)the financial support from the China Scholarship Council(No.202207550010).
文摘Accurate prediction of the remaining useful life(RUL)is crucial for the design and management of lithium-ion batteries.Although various machine learning models offer promising predictions,one critical but often overlooked challenge is their demand for considerable run-to-failure data for training.Collection of such training data leads to prohibitive testing efforts as the run-to-failure tests can last for years.Here,we propose a semi-supervised representation learning method to enhance prediction accuracy by learning from data without RUL labels.Our approach builds on a sophisticated deep neural network that comprises an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data regardless of the existence of RUL labels.The approach is validated using three datasets collected from 34 batteries operating under various conditions,encompassing over 19,900 charge and discharge cycles.Our method achieves a root mean squared error(RMSE)within 25 cycles,even when only 1/50 of the training dataset is labelled,representing a reduction of 48%compared to the conventional approach.We also demonstrate the method's robustness with varying numbers of labelled data and different weights assigned to the three decoder heads.The projection of extracted features in low space reveals that our method effectively learns degradation features from unlabelled data.Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
基金sponsored by the National Natural Science Foundation of China Grant No.62271302the Shanghai Municipal Natural Science Foundation Grant 20ZR1423500.
文摘Large amounts of labeled data are usually needed for training deep neural networks in medical image studies,particularly in medical image classification.However,in the field of semi-supervised medical image analysis,labeled data is very scarce due to patient privacy concerns.For researchers,obtaining high-quality labeled images is exceedingly challenging because it involves manual annotation and clinical understanding.In addition,skin datasets are highly suitable for medical image classification studies due to the inter-class relationships and the inter-class similarities of skin lesions.In this paper,we propose a model called Coalition Sample Relation Consistency(CSRC),a consistency-based method that leverages Canonical Correlation Analysis(CCA)to capture the intrinsic relationships between samples.Considering that traditional consistency-based models only focus on the consistency of prediction,we additionally explore the similarity between features by using CCA.We enforce feature relation consistency based on traditional models,encouraging the model to learn more meaningful information from unlabeled data.Finally,considering that cross-entropy loss is not as suitable as the supervised loss when studying with imbalanced datasets(i.e.,ISIC 2017 and ISIC 2018),we improve the supervised loss to achieve better classification accuracy.Our study shows that this model performs better than many semi-supervised methods.
基金Supported by the National Natural Science Foundation of China(No.62001313)the Key Project of Liaoning Provincial Department of Science and Technology(No.2021JH2/10300134,2022JH1/10500004)。
文摘In the realm of medical image segmentation,particularly in cardiac magnetic resonance imaging(MRI),achieving robust performance with limited annotated data is a significant challenge.Performance often degrades when faced with testing scenarios from unknown domains.To address this problem,this paper proposes a novel semi-supervised approach for cardiac magnetic resonance image segmentation,aiming to enhance predictive capabilities and domain generalization(DG).This paper establishes an MT-like model utilizing pseudo-labeling and consistency regularization from semi-supervised learning,and integrates uncertainty estimation to improve the accuracy of pseudo-labels.Additionally,to tackle the challenge of domain generalization,a data manipulation strategy is introduced,extracting spatial and content-related information from images across different domains,enriching the dataset with a multi-domain perspective.This papers method is meticulously evaluated on the publicly available cardiac magnetic resonance imaging dataset M&Ms,validating its effectiveness.Comparative analyses against various methods highlight the out-standing performance of this papers approach,demonstrating its capability to segment cardiac magnetic resonance images in previously unseen domains even with limited annotated data.
文摘Reliable Cluster Head(CH)selectionbased routing protocols are necessary for increasing the packet transmission efficiency with optimal path discovery that never introduces degradation over the transmission reliability.In this paper,Hybrid Golden Jackal,and Improved Whale Optimization Algorithm(HGJIWOA)is proposed as an effective and optimal routing protocol that guarantees efficient routing of data packets in the established between the CHs and the movable sink.This HGJIWOA included the phases of Dynamic Lens-Imaging Learning Strategy and Novel Update Rules for determining the reliable route essential for data packets broadcasting attained through fitness measure estimation-based CH selection.The process of CH selection achieved using Golden Jackal Optimization Algorithm(GJOA)completely depends on the factors of maintainability,consistency,trust,delay,and energy.The adopted GJOA algorithm play a dominant role in determining the optimal path of routing depending on the parameter of reduced delay and minimal distance.It further utilized Improved Whale Optimisation Algorithm(IWOA)for forwarding the data from chosen CHs to the BS via optimized route depending on the parameters of energy and distance.It also included a reliable route maintenance process that aids in deciding the selected route through which data need to be transmitted or re-routed.The simulation outcomes of the proposed HGJIWOA mechanism with different sensor nodes confirmed an improved mean throughput of 18.21%,sustained residual energy of 19.64%with minimized end-to-end delay of 21.82%,better than the competitive CH selection approaches.
Funding: Funded by the National Natural Science Foundation of China under Grant No. 42250410321.
Abstract: Music recommendation systems are essential due to the vast amount of music available on streaming platforms, which can overwhelm users trying to find new tracks that match their preferences. These systems analyze users' emotional responses, listening habits, and personal preferences to provide personalized suggestions. A significant challenge they face is the "cold start" problem, where new users have no past interactions to guide recommendations. To improve user experience, these systems aim to effectively recommend music even to such users by considering their listening behavior and music popularity. This paper introduces a novel music recommendation system that combines order clustering and a convolutional neural network, utilizing user comments and rankings as input. Initially, the system organizes users into clusters based on semantic similarity; their rating similarities are then fed to the convolutional neural network, which predicts ratings for music the users have not yet reviewed. Additionally, the system analyzes user listening behavior and music popularity, and popularity also helps address cold-start users. Finally, the proposed method recommends unreviewed music based on predicted high rankings and popularity, taking each user's listening habits into account: it first selects popular unreviewed music that the model predicts will receive the highest ratings for each user, then prioritizes the most popular of these tracks, defined by metrics such as listening frequency across users, and aligns the number of recommended tracks with each user's typical listening rate. The experimental findings demonstrate that the new method outperformed other classification techniques and prior recommendation systems, yielding a mean absolute error (MAE) and root mean square error (RMSE) of approximately 0.0017, a hit rate of 82.45%, an average normalized discounted cumulative gain (nDCG) of 82.3%, and a prediction accuracy on new ratings of 99.388%.
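As a sketch of the first stage above, grouping users by the semantic similarity of their comments, TF-IDF features plus k-means can stand in for the paper's clustering step; the CNN rating predictor is omitted, and the feature representation is an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_users_by_comments(user_comments, n_clusters=10):
    """user_comments: one concatenated comment string per user.
    Returns a cluster id per user based on comment similarity."""
    vectors = TfidfVectorizer(max_features=5000).fit_transform(user_comments)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
```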
Funding: Partially supported by the National Natural Science Foundation of China (62161016), the Key Research and Development Project of Lanzhou Jiaotong University (ZDYF2304), the Beijing Engineering Research Center of High-velocity Railway Broadband Mobile Communications (BHRC-2022-1), and Beijing Jiaotong University.
Abstract: To solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing scheme based on the firefly and pigeon-inspired algorithm (FF-PIA) is proposed to optimize the data transmission path. After the optimal number of cluster head (CH) nodes is obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm's iterations to balance the contradiction between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring the algorithm can escape local optima. In addition, for WSN data gathering, a one-dimensional signal reconstruction model is developed using dilated convolution and residual neural networks (DCRNN). Experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset show that the DCRNN model-driven data reconstruction algorithm improves both reconstruction accuracy and reconstruction time. Co-simulation of FF-PIA clustering routing and DCRNN reveals that the proposed algorithm can effectively extend the network lifetime and reduce data transmission delay.
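The two update mechanisms the abstract names can be sketched compactly: Lévy-flight step lengths via Mantegna's algorithm, and Gaussian perturbation of the current best solution. This is a generic rendering of those standard operators, not the paper's exact update rules, and the parameter values are illustrative.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for drawing Levy-stable step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def gaussian_perturb(best, scale=0.1):
    # Jitter the best solution so the search can escape local optima.
    return best + np.random.normal(0.0, scale, best.shape)
```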
Funding: Natural Science Foundation of Shandong Province, Grant/Award Number: ZR202103010903; Doctoral Fund of Shandong Jianzhu University, Grant/Award Number: X21101Z.
Abstract: To guarantee safe and efficient tunneling of a tunnel boring machine (TBM), rapid and accurate judgment of the rock mass condition is essential. Based on fuzzy C-means clustering, this paper proposes a grouped machine learning method for predicting rock mass parameters. An elaborate data set on field rock mass is collected, which also matches field TBM tunneling. Target stratum samples are divided into several clusters by fuzzy C-means clustering, and multiple submodels are trained on the samples in different clusters, with pretreated TBM tunneling data as input and rock mass parameter data as output. Each testing sample, or newly encountered tunneling condition, is predicted by the submodels weighted by the sample's membership degree to each cluster. The proposed method has been realized with 100 training samples and verified with 30 testing samples collected from the C1 part of the Pearl Delta water resources allocation project. The average percentage errors of uniaxial compressive strength and joint frequency (Jf) for the 30 testing samples predicted by a pure back propagation (BP) neural network are 13.62% and 12.38%, while those predicted by the BP neural network combined with fuzzy C-means are 7.66% and 6.40%, respectively. In addition, combining fuzzy C-means clustering also improves the prediction accuracies of support vector regression and random forest to different degrees, which demonstrates that fuzzy C-means clustering is helpful for improving the prediction accuracy of machine learning and thus has good applicability. Accordingly, the proposed method is valuable for predicting rock mass parameters during TBM tunneling.
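The membership-weighted prediction step described above can be sketched as blending per-cluster submodel outputs by the standard fuzzy C-means membership formula. The `centers`, `submodels`, and fuzzifier `m` below are illustrative stand-ins for the paper's trained components.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    # Fuzzy C-means membership of sample x to each cluster center:
    # u_i proportional to d_i^(-2/(m-1)), normalized to sum to 1.
    d = np.linalg.norm(centers - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def grouped_predict(x, centers, submodels, m=2.0):
    # Weight each cluster's submodel prediction by x's membership degree.
    u = fcm_memberships(x, centers, m)
    preds = np.array([model.predict(x.reshape(1, -1))[0] for model in submodels])
    return float(u @ preds)
```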
Abstract: Most existing semi-supervised clustering algorithms are not designed for handling high-dimensional data. On the other hand, semi-supervised dimensionality reduction methods may not necessarily improve clustering performance, because the inherent relationship between subspace selection and clustering is ignored. To mitigate these problems, we present a semi-supervised clustering algorithm using adaptive distance metric learning (SCADM), which performs semi-supervised clustering and distance metric learning simultaneously. SCADM applies the clustering results to learn a distance metric and then projects the data onto a low-dimensional space where the separability of the data is maximized. Experimental results on real-world data sets show that the proposed method can effectively deal with high-dimensional data and provides appealing clustering performance.
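A loose sketch of the alternation SCADM performs, clustering, then learning a discriminative projection from the current assignments, using scikit-learn stand-ins; LDA here substitutes for the paper's adaptive metric learning and is an assumption, not the published algorithm.

```python
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def scadm_like(X, n_clusters=3, n_iter=5):
    """Alternate k-means clustering with a supervised projection learned
    from the current labels; a crude stand-in for SCADM's metric learning."""
    Z = X
    for _ in range(n_iter):
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
        lda = LinearDiscriminantAnalysis(
            n_components=min(n_clusters - 1, X.shape[1] - 1))
        # Project to a space where the current clusters are well separated.
        Z = lda.fit_transform(X, labels)
    return labels, Z
```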
Funding: Supported by the National Key Research and Development Program of China (No. 2019YFC1520904) and the National Natural Science Foundation of China (No. 61973250).
Abstract: The influence of non-Independent and Identically Distributed (non-IID) data on Federated Learning (FL) has been a serious concern. Clustered Federated Learning (CFL) is an emerging approach for reducing the impact of non-IID data, which clusters clients by similarity calculated from relevant metrics. Unfortunately, existing CFL methods pursue only accuracy improvements and ignore the convergence rate. Additionally, the chosen client selection strategy affects the clustering results. Finally, traditional semi-supervised learning changes the distribution of data on clients, resulting in higher local costs and undesirable performance. In this paper, we propose a novel CFL method named ASCFL, which selects clients to participate in training and can dynamically adjust the balance between accuracy and convergence speed on datasets consisting of labeled and unlabeled data. To deal with unlabeled data, the label prediction strategy predicts labels using encoders. The client selection strategy improves accuracy and reduces overhead by selecting clients with higher losses to participate in the current round. Moreover, the similarity-based clustering strategy uses a new indicator to measure the similarity between clients. Experimental results show that ASCFL has clear advantages in model accuracy and convergence speed over three state-of-the-art methods on two popular datasets.
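Two ASCFL ingredients named above, loss-based client selection and similarity-based clustering of clients, can be sketched as follows. The flattened-update representation, the cosine-style normalization, and the clustering choice are assumptions for illustration, not ASCFL's actual indicator.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def select_clients(client_losses, fraction=0.3):
    # Prefer clients whose local loss is highest this round;
    # client_losses maps client id -> last reported loss.
    k = max(1, int(len(client_losses) * fraction))
    return sorted(client_losses, key=client_losses.get, reverse=True)[:k]

def cluster_clients(update_vectors, n_clusters=3):
    # Group clients by similarity of their flattened model updates;
    # row-normalizing makes Euclidean distance track cosine similarity.
    U = np.stack(update_vectors)
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(U)
```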