Funding: supported by NSFC under Grant Nos. 61907005, 61720106005, 61936002, and 62272080.
Abstract: We propose a new method to generate a surface quadrilateral mesh by computing a globally defined parameterization with feature constraints. In the field of quadrilateral mesh generation with features, cross field methods are well known for their superior performance in feature preservation, while metric-based methods, especially the Ricci flow algorithm, are popular for their sound theoretical basis. The major component of cross field methods, the Poisson equation, is difficult to solve directly in three dimensions: for models with a large number of elements the computational cost becomes expensive, whereas metric-based methods remain efficient. In addition, a good initial value accelerates the solution of the Poisson equation, and such an initial value can be obtained from the Ricci flow algorithm. We therefore combine metric-based methods with cross field methods: the discrete dynamic Ricci flow algorithm generates an initial value for the Poisson equation, which speeds up the solution of the equation and ensures the convergence of the computation. Numerical experiments show that our method effectively generates quadrilateral meshes for models with features, and that the quality of the resulting meshes is reliable.
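To illustrate the metric-based half of the pipeline, here is a minimal sketch of discrete Ricci flow under a circle-packing metric on a toy tetrahedron surface. The mesh, step size, and uniform target curvature are invented for the example; the paper's actual discrete dynamic Ricci flow (and its coupling to the Poisson equation) is more involved.

```python
import math

# Tetrahedron surface: 4 vertices, 4 triangular faces (closed, genus 0).
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
u = {0: 0.0, 1: 0.4, 2: -0.3, 3: 0.2}   # log conformal factors, arbitrary start
K_target = {v: math.pi for v in u}      # total curvature 4*pi by Gauss-Bonnet

def curvatures(u):
    """Discrete Gaussian curvature under the circle-packing metric l_ij = r_i + r_j."""
    r = {v: math.exp(uv) for v, uv in u.items()}
    K = {v: 2 * math.pi for v in u}     # 2*pi minus the angle sum at each vertex
    for a, b, c in faces:
        def L(i, j):
            return r[i] + r[j]
        for i, j, k in ((a, b, c), (b, c, a), (c, a, b)):
            # interior angle at vertex i via the law of cosines
            cos_i = (L(i, j) ** 2 + L(i, k) ** 2 - L(j, k) ** 2) / (2 * L(i, j) * L(i, k))
            K[i] -= math.acos(cos_i)
    return K

# Ricci flow: move each log factor along the curvature error.
step = 0.05
for _ in range(3000):
    K = curvatures(u)
    for v in u:
        u[v] += step * (K_target[v] - K[v])

K = curvatures(u)                        # curvatures after the flow
```

After the flow the per-vertex curvatures approach the uniform target, which is the kind of well-behaved metric that can seed the Poisson solve.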
Funding: supported by the National Natural Science Foundation of China (62101088, 61801076, 61971336), the Natural Science Foundation of Liaoning Province (2022-MS-157, 2023-MS-108), the Key Laboratory of Big Data Intelligent Computing Funds for Chongqing University of Posts and Telecommunications (BDIC-2023-A-003), and the Fundamental Research Funds for the Central Universities (3132022230).
Abstract: The interconnection of all things challenges traditional communication methods, and Semantic Communication and Computing (SCC) will become a new solution. Accurately detecting, extracting, and representing semantic information is a challenging task in research on SCC-based networks. In previous research, convolution is usually used to extract the feature information of a graph for the corresponding node classification task. However, semantic information is quite complex. Although graph convolutional neural networks provide an effective solution for node classification, their limited ability to represent multiple relational patterns and their failure to recognize and analyze higher-order local structures mean that the extracted feature information suffers varying degrees of loss. This paper therefore extends from a single-layer topology network to a multi-layer heterogeneous topology network. Pre-trained Bidirectional Encoder Representations from Transformers (BERT) word vectors are introduced to extract the semantic features in the network, and the existing graph neural network is improved by incorporating a module that captures higher-order local features (motifs). A multi-layer network embedding algorithm with motifs on SCC-based networks is proposed to perform end-to-end node classification. We verify the effectiveness of the algorithm on a real multi-layer heterogeneous network.
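A minimal sketch of the two ingredients the abstract combines: node feature vectors (random stand-ins for BERT embeddings) augmented with a higher-order motif count, fed through one graph convolution. The toy graph, weights, and class count are invented; the real algorithm operates on multi-layer heterogeneous networks with learned parameters.

```python
import numpy as np

# Toy single-layer graph (one layer of the multi-layer network, for illustration).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Stand-in for BERT embeddings of each node's text attributes.
X_text = np.random.default_rng(0).normal(size=(4, 8))

# Higher-order local structure: triangle (motif) participation per node.
triangles = np.diag(A @ A @ A) / 2.0
X = np.hstack([X_text, triangles[:, None]])   # concatenate motif feature

def gcn_layer(A, X, W):
    """One symmetric-normalized graph convolution with ReLU."""
    A_hat = A + np.eye(len(A))                # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

W = np.random.default_rng(1).normal(size=(9, 3))   # hypothetical 3-class head
logits = gcn_layer(A, X, W)
pred = logits.argmax(axis=1)                  # per-node class predictions
```

Concatenating the motif count before the convolution is one simple way to expose higher-order structure that plain message passing cannot see.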
Abstract: In the past decade, blockchain has evolved as a promising solution for developing secure distributed ledgers and has gained massive attention. However, current blockchain systems suffer from limited throughput, poor scalability, and high latency. Because consensus algorithms fail to manage node identities, blockchain technology is considered inappropriate for many applications, e.g., in IoT environments, due to poor scalability. This paper proposes a blockchain consensus mechanism called the Advanced DAG-based Ranking (ADR) protocol to improve blockchain scalability and throughput. The ADR protocol uses a directed acyclic graph (DAG) ledger in which nodes are placed according to their ranking positions, allowing honest nodes to write blocks and verify transactions on the DAG topology instead of a chain of blocks. A three-step strategy secures the system against double-spending attacks and allows higher throughput and scalability. In the first step, nodes enter the system safely through verification of their private and public keys. In the second step, an advanced DAG ledger is built so that nodes can start block production and verify transactions. In the third step, a ranking algorithm separates the nodes created by attackers; after eliminating attacker nodes, the remaining nodes are ranked according to their performance in the system, and true nodes are arranged in blocks in topological order. As a result, the ADR protocol is suitable for Internet of Things (IoT) applications. We evaluated ADR on EC2 clusters with more than 100 nodes and achieved better transaction throughput and network liveness while adding malicious nodes. Based on the simulation results, this research determined that transaction performance was significantly improved over DAG-based ledgers such as IOTA and ByteBall.
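The third step, filtering attacker-created blocks and arranging the remainder in topological order, can be sketched as follows. The block names, scores, and threshold are hypothetical; the abstract does not specify the actual ADR ranking function.

```python
from graphlib import TopologicalSorter

# Hypothetical block DAG: each block lists the earlier blocks it approves.
approves = {
    "b1": [],
    "b2": ["b1"],
    "b3": ["b1"],
    "b4": ["b2", "b3"],
    "b5": ["b3"],
}
# Hypothetical performance scores produced by the ranking step.
score = {"b1": 0.9, "b2": 0.8, "b3": 0.85, "b4": 0.7, "b5": 0.1}
THRESHOLD = 0.5          # below this, a block is treated as attacker-created

# Eliminate low-ranked blocks, then order the honest ones topologically
# so every block appears after all blocks it approves.
honest = {b for b, s in score.items() if s >= THRESHOLD}
sub_dag = {b: [p for p in approves[b] if p in honest] for b in honest}
order = list(TopologicalSorter(sub_dag).static_order())
```

`TopologicalSorter` guarantees predecessors come first, which is exactly the "true nodes arranged in blocks in topological order" property the protocol relies on.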
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 61672125 and 61772108.
Abstract: Blind image deblurring is a long-standing ill-posed inverse problem that aims to recover a latent sharp image given only a blurry observation. So far, existing studies have designed many effective priors w.r.t. the latent image within the maximum a posteriori (MAP) framework in order to narrow down the solution space. These non-convex priors are always integrated into the final deblurring model, which makes the optimization challenging. However, due to unknown image distributions, complex kernel structures, and non-uniform noise in real-world scenarios, it is indeed difficult to explicitly design a fixed prior for all cases. We therefore adopt the idea of adaptive optimization and propose sparse structure control (SSC) for the latent image during the optimization process. In this paper, we formulate only the necessary optimization constraints in a lightweight MAP model with no priors, and then develop an inexact projected gradient scheme to incorporate flexible SSC into MAP inference. Besides the ℓp-norm-based SSC of our previous work, we also train a group of denoising convolutional neural networks (CNNs) to learn the sparse image structure automatically from training data under different noise levels, and we show that CNN-based SSC achieves results similar to the ℓp-norm while being more robust to noise. Extensive experiments demonstrate that the proposed adaptive optimization scheme with the two types of SSC achieves state-of-the-art results on both synthetic data and real-world images.
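The inexact projected gradient idea, with a crude hard-threshold form of sparse structure control, can be sketched on a toy sparse recovery problem. The operator, sparsity level, and step size here are invented for illustration; the paper's actual SSC (ℓp-based or CNN-based) is far richer than this top-k projection.

```python
import numpy as np

def sparse_project(x, k):
    """Crudest form of sparse structure control: keep the k largest-magnitude entries."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def projected_gradient(y, A, k, steps=500, lr=0.1):
    """Projected gradient for min ||A x - y||^2 subject to x being k-sparse:
    take a gradient step on the data term, then project onto the sparse set."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = sparse_project(x - lr * A.T @ (A @ x - y), k)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 30)) / np.sqrt(20)   # toy linear "blur" operator
x_true = np.zeros(30)
x_true[[3, 11, 25]] = [1.0, -2.0, 1.5]        # sparse ground truth
y = A @ x_true                                 # observation
x_hat = projected_gradient(y, A, k=3)
```

Swapping `sparse_project` for a learned denoiser is the rough analogue of the CNN-based SSC: the projection step becomes whatever operator enforces the desired image structure.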
Funding: supported by the National Key Research and Development Program of China under Grant No. 2021YFA1003003 and the National Natural Science Foundation of China under Grant Nos. 61936002 and T2225012.
Abstract: The long-tailed data distribution poses an enormous challenge for training neural networks in classification. A classification network can be decoupled into a feature extractor and a classifier. This paper takes a semi-discrete optimal transport (OT) perspective on the long-tailed classification problem: the feature space is viewed as a continuous source domain, and the classifier weights are viewed as a discrete target domain. The classifier in effect finds a cell decomposition of the feature space, with each cell corresponding to one class. An imbalanced training set causes the more frequent classes to have larger-volume cells, which means that the classifier's decision boundary is biased towards the less frequent classes, reducing classification performance in the inference phase. We therefore propose a novel OT-dynamic softmax loss, which dynamically adjusts the decision boundary in the training phase to avoid overfitting on the tail classes. In addition, our method incorporates the supervised contrastive loss so that the feature space satisfies the uniform distribution condition. Extensive and comprehensive experiments demonstrate that our method achieves state-of-the-art performance on multiple long-tailed recognition benchmarks, including CIFAR-LT, ImageNet-LT, iNaturalist 2018, and Places-LT.
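The abstract does not give the formula of the OT-dynamic softmax loss, so as a stand-in the sketch below uses a class-prior logit adjustment, which likewise shifts the decision boundary according to class frequency during training. The class counts, logits, labels, and temperature `tau` are invented for the example.

```python
import numpy as np

def adjusted_softmax_ce(logits, labels, class_counts, tau=1.0):
    """Cross entropy on logits shifted by tau * log(prior): head classes receive
    the larger offset, which penalizes tail-class errors more during training
    and so pushes the decision boundary away from tail cells."""
    priors = class_counts / class_counts.sum()
    z = logits + tau * np.log(priors)
    z = z - z.max(axis=1, keepdims=True)               # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

counts = np.array([900.0, 90.0, 10.0])                 # long-tailed class sizes
logits = np.array([[2.0, 1.0, 0.5],
                   [0.2, 0.1, 0.3]])
labels = np.array([0, 2])                              # second sample is a tail class

loss_plain = adjusted_softmax_ce(logits, labels, counts, tau=0.0)  # ordinary CE
loss_adj = adjusted_softmax_ce(logits, labels, counts, tau=1.0)    # adjusted CE
```

With `tau = 0` the function reduces to ordinary cross entropy, and with balanced class counts the adjustment cancels entirely, so the modification only acts when the training distribution is actually skewed.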