By analyzing the internal features of the counting sort algorithm, two improved counting sort algorithms are proposed that have a wider range of applications and better efficiency than the original counting sort while maintaining its stability. Compared with the original counting sort, they have a wider scope of application and better time and space efficiency. In addition, the accuracy of these conclusions is supported by a large amount of experimental data.
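For reference, here is a minimal sketch of the classical stable counting sort that the improvements above start from; this is only the textbook baseline, not the improved variants proposed in the paper, and it assumes the integer key range is known from the input.

```python
def counting_sort(arr):
    """Classical stable counting sort over a known integer key range.

    Textbook baseline only; the improved variants discussed in the
    abstract are not reproduced here.
    """
    if not arr:
        return []
    lo, hi = min(arr), max(arr)
    counts = [0] * (hi - lo + 1)
    for x in arr:                      # count occurrences of each key
        counts[x - lo] += 1
    for i in range(1, len(counts)):    # prefix sums give final positions
        counts[i] += counts[i - 1]
    out = [0] * len(arr)
    for x in reversed(arr):            # right-to-left pass keeps the sort stable
        counts[x - lo] -= 1
        out[counts[x - lo]] = x
    return out

print(counting_sort([4, 2, 7, 2, 9, 4]))   # [2, 2, 4, 4, 7, 9]
```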
We apply the recent important result of serial sorting of n real numbers in time to the design of a parallel algorithm for sorting real numbers in time and operations. This is the first NC algorithm known to take operations for sorting real numbers.
A unified vector sorting algorithm (VSA) is proposed, which sorts N arbitrary numbers with c·log N bits on an SIMD multi-processor system (SMMP) with processors and a composite interconnected network in time, where c is an arbitrary positive constant. When e is an arbitrarily small positive constant and u = log2 N, it is an O(log N) algorithm; when u = 1, c = 1 and e = 0.5 (a constant), it is an optimal algorithm (pT = O(N log N)).
In the present era, a very huge volume of data is being stored in online and offline databases. Enterprise houses, research, medical and healthcare organizations, and academic institutions store data in databases, and subsequent retrievals are performed for further processing. Finding the required data from a given database within the minimum possible time is one of the key factors in achieving the best possible performance of any computer-based application. If the data is already sorted, finding or searching is comparatively faster. In real-life scenarios, the data collected from different sources may not be in sorted order, so sorting algorithms are required to arrange the data in some order in the least possible time. In this paper, I propose an intelligent approach towards designing a smart variant of the bubble sort algorithm. I call it Smart Bubble sort; it exhibits a dynamic footprint: the capability of adapting itself from the average-case to the best-case scenario. It is an in-place sorting algorithm and its best-case time complexity is Ω(n), which is linear and better than bubble sort, selection sort, and merge sort. In the average-case and worst-case analyses, the complexity estimates are based on its static footprint analysis: its worst-case complexity is O(n^2) and its average-case complexity is Θ(n^2). Smart Bubble sort is capable of adapting itself to the best-case scenario from the average-case scenario at any subsequent stage due to its dynamic and intelligent nature. Smart Bubble sort outperforms bubble sort, selection sort, and merge sort in the best-case scenario, whereas it outperforms bubble sort in the average-case scenario.
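The Smart Bubble sort itself is not reproduced above; as an illustration of how a bubble-sort variant can reach a linear best case by detecting that no swaps were needed, here is the standard early-termination bubble sort. It is a stand-in for the general idea only, not the author's algorithm.

```python
def bubble_sort_early_exit(a):
    """Bubble sort that stops as soon as a pass makes no swaps.

    On already-sorted input this does a single O(n) pass, illustrating
    (but not reproducing) the kind of best-case adaptivity the paper's
    Smart Bubble sort aims for.
    """
    a = list(a)
    n = len(a)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:            # no swaps: the array is sorted, stop early
            break
    return a

print(bubble_sort_early_exit([1, 2, 3, 4, 5]))   # best case: one linear pass
print(bubble_sort_early_exit([5, 1, 4, 2, 3]))
```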
This paper presents a new tree sorting algorithm whose average time complexity is much better than that of sorting methods using AVL-trees or other balanced trees. Experiments show that our algorithm is much faster than the sorting methods using AVL-trees or other balanced trees.
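The tree structure used by the paper is not specified above; for orientation, here is a plain binary-search-tree sort sketch, i.e. the unbalanced baseline that tree-sorting methods build on (O(n log n) on average, O(n^2) in the worst case). It is an assumption-level illustration, not the proposed algorithm.

```python
import random

def tree_sort(values):
    """Insert into an unbalanced binary search tree, then read back in order.

    A plain stand-in for tree sorting in general, not the tree structure
    proposed in the paper.
    """
    tree = []   # each node: [key, left_index, right_index], -1 means no child

    def insert(key):
        if not tree:
            tree.append([key, -1, -1])
            return
        i = 0
        while True:
            node = tree[i]
            side = 1 if key < node[0] else 2   # duplicates go to the right
            if node[side] == -1:
                node[side] = len(tree)
                tree.append([key, -1, -1])
                return
            i = node[side]

    def inorder(i, out):
        if i == -1:
            return
        inorder(tree[i][1], out)
        out.append(tree[i][0])
        inorder(tree[i][2], out)

    for v in values:
        insert(v)
    out = []
    if tree:
        inorder(0, out)
    return out

data = [random.randint(0, 99) for _ in range(10)]
print(tree_sort(data) == sorted(data))   # True
```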
A novel hyper-heuristic algorithm framework was proposed to improve the adaptability of evolutionary algorithms (EAs) in optimization. The algorithm in use can be changed during the evolutionary process according to its performance. In addition, a large number of elite individuals are employed, and these elite individuals help the framework achieve better performance, whereas the same number of elite individuals stagnates global convergence in a conventional single algorithm. The time complexity was analyzed to demonstrate that the novel framework does not increase time complexity. The simulation results indicate that the proposed framework outperforms any single algorithm that composes the framework.
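The framework is only described at a high level above; the toy sketch below illustrates the general hyper-heuristic idea of switching between low-level operators based on their recent performance. The operators, the credit/scoring rule, and the test function are all illustrative assumptions, not the paper's design.

```python
import random

def hyper_heuristic_minimize(f, dim, iters=2000, seed=0):
    """Toy hyper-heuristic: choose among low-level mutation operators in
    proportion to how often each recently improved the best solution.
    Operators and scoring rule are illustrative assumptions only."""
    rng = random.Random(seed)

    def small_step(x):
        return [xi + rng.gauss(0, 0.1) for xi in x]

    def big_step(x):
        return [xi + rng.gauss(0, 1.0) for xi in x]

    def reset_one(x):
        y = list(x)
        y[rng.randrange(dim)] = rng.uniform(-5, 5)
        return y

    ops = [small_step, big_step, reset_one]
    scores = [1.0] * len(ops)            # credit assigned to each operator
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_val = f(best)

    for _ in range(iters):
        k = rng.choices(range(len(ops)), weights=scores)[0]
        cand = ops[k](best)
        val = f(cand)
        if val < best_val:               # reward the operator that improved
            best, best_val = cand, val
            scores[k] += 1.0
        else:
            scores[k] = max(0.1, scores[k] * 0.999)
    return best, best_val

sphere = lambda x: sum(xi * xi for xi in x)
print(hyper_heuristic_minimize(sphere, dim=5)[1])   # small value near 0
```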
In this paper we consider a parallel algorithm that detects the maximizer of a unimodal function f(x) computable at every point on the unbounded interval (0, ∞). The algorithm consists of two modes: scanning and detecting. Search diagrams are introduced as a way to describe parallel searching algorithms on unbounded intervals. Dynamic programming equations, combined with a series of linear programming problems, describe the relations between the results of every pair of successive parallel evaluations of f. Properties of optimal search strategies are derived from these equations. The worst-case complexity analysis shows that, if the maximizer is located on an a priori unknown interval (n-1], then it can be detected after c_p(n) = ⌈2·log_{⌈p/2⌉+1}(n+1)⌉ - 1 parallel evaluations of f(x), where p is the number of processors.
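Under the reading that the garbled expression above is c_p(n) = ⌈2·log_{⌈p/2⌉+1}(n+1)⌉ - 1 (a reconstruction inferred from the bracket characters, so treat the exact form as an assumption), a small helper makes the bound concrete for a few processor counts.

```python
import math

def cp_bound(n, p):
    """Worst-case number of parallel evaluations, read as
    ceil(2 * log_{ceil(p/2)+1}(n + 1)) - 1; the exact formula is a
    reconstruction of the abstract's garbled expression, not a verified fact."""
    base = (p + 1) // 2 + 1            # ceil(p/2) + 1
    return math.ceil(2 * math.log(n + 1, base)) - 1

for p in (2, 4, 8):
    print(p, cp_bound(1000, p))        # bound shrinks as processors are added
```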
The minimum-cost spanning tree is a very popular problem, and solving it in a concise and fast way is of great practical and economic significance. In this paper, the basic ideas of the Kruskal algorithm are discussed, and a new improved algorithm, the two-branch Kruskal algorithm, is presented; its improvement is to choose a middle value. Finally, because the time complexity is reduced and the process is more convenient, it is concluded that the improved Kruskal algorithm is more effective than the original Kruskal algorithm in most cases.
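The two-branch, middle-value variant is not detailed above; for reference, here is the classical Kruskal algorithm with a union-find structure that such variants modify. The edge format and the example graph are illustrative assumptions.

```python
def kruskal(n, edges):
    """Classical Kruskal MST with union-find (path compression + union by size).

    Baseline only; the two-branch middle-value variant discussed in the
    abstract is not reproduced here.  Edges are given as (weight, u, v).
    """
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):            # process edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                  # union by size
            size[ru] += size[rv]
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))   # MST edges (1,2,1), (2,3,2), (0,2,3); total weight 6
```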
Cluster analysis is one of the major data analysis methods widely used in many practical applications in emerging areas of data mining. A good clustering method produces high-quality clusters with high intra-cluster similarity and low inter-cluster similarity. Clustering techniques are applied in different domains to predict future trends of available data and their uses in the real world. This research work is carried out to find the performance of two of the most delegated partition-based clustering algorithms, namely k-Means and k-Medoids. A state-of-the-art analysis of these two algorithms is implemented, and performance is analyzed based on the quality of their clustering results, by means of execution time and other components. Telecommunication data is the source data for this analysis. Connection-oriented broadband data is given as input to find the clustering quality of the algorithms. The distance between server locations and their connections is considered for clustering. The execution time of each algorithm is analyzed and the results are compared with one another. The results of the comparison study are satisfactory for the chosen application.
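As a reminder of the partitioning step being benchmarked, here is a compact sketch of Lloyd's k-means on 2-D points; the k-medoids counterpart, the telecommunication data, and the distance model used in the study are not reproduced, and the example points are assumptions.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on 2-D points; a minimal illustration of
    partition-based clustering, not the study's experimental setup."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                              # assignment step
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                  (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        new_centers = []
        for j in range(k):                            # update step
            if clusters[j]:
                xs, ys = zip(*clusters[j])
                new_centers.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            else:
                new_centers.append(centers[j])
        if new_centers == centers:                    # converged
            break
        centers = new_centers
    return centers, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, k=2)
print(centers)
```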
Genetic algorithms offer very good performance for solving large optimization problems, especially in the domain of error-correcting codes. However, they have a major drawback related to time complexity and memory occupation when running on a uniprocessor computer. This paper proposes a parallel decoder for linear block codes using parallel genetic algorithms (PGA). The good performance and time complexity are confirmed by a theoretical study and by simulations on the BCH(63,30,14) code over both AWGN and flat Rayleigh fading channels. The simulation results show that the coding gain between the parallel and the single genetic algorithm is about 0.7 dB at BER = 10^-5 with only 4 processors.
Most work on the time complexity analysis of evolutionary algorithms has focused on artificial binary problems, and the time complexity of these algorithms for combinatorial optimisation is not well understood. This paper considers the time complexity of an evolutionary algorithm for a classical combinatorial optimisation problem: finding a maximum cardinality matching in a graph. It is shown that the evolutionary algorithm can produce a matching with nearly maximum cardinality in average polynomial time.
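To make the setting concrete, here is a generic (1+1) EA for maximum-cardinality matching: individuals are edge subsets, and the fitness rewards large matchings while heavily penalising vertex conflicts. The representation, mutation rate, and penalty are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import random

def one_plus_one_ea_matching(n, edges, steps=20000, seed=0):
    """A (1+1) EA sketch for maximum-cardinality matching.

    Generic illustration of the kind of EA analysed in the abstract;
    the fitness function and parameters are assumptions."""
    rng = random.Random(seed)
    m = len(edges)

    def fitness(bits):
        deg = [0] * n
        chosen = 0
        for b, (u, v) in zip(bits, edges):
            if b:
                chosen += 1
                deg[u] += 1
                deg[v] += 1
        conflicts = sum(d - 1 for d in deg if d > 1)
        return chosen - (m + 1) * conflicts       # any conflict outweighs gains

    cur = [0] * m
    cur_fit = fitness(cur)
    for _ in range(steps):
        child = [b ^ (rng.random() < 1.0 / m) for b in cur]   # flip each bit w.p. 1/m
        child_fit = fitness(child)
        if child_fit >= cur_fit:
            cur, cur_fit = child, child_fit
    return [e for b, e in zip(cur, edges) if b]

# path graph 0-1-2-3-4: a maximum matching has 2 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(one_plus_one_ea_matching(5, edges))
```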
As far as we know, the problem of testing legal firing sequences is NP-complete for general Petri nets; results on its polynomial-time solvability are limited to some special net classes, such as persistent Petri nets, conflict-free Petri nets and state-machine Petri nets. In this paper, the language properties of synchronous composition nets are discussed. Based on these results, a polynomial-time algorithm for testing legal firing sequences is proposed. Therefore, the class of nets for which testing legal firing sequences is polynomial-time solvable is extended.
Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms, and we have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting by sequential or parallel Queuesort or Stacksort.
We analyze a common feature of p-Kemeny AGGregation (p-KAGG) and p-One-Sided Crossing Minimization (p-OSCM) to provide new insights and findings of interest to both the graph drawing community and the social choice community. We obtain parameterized subexponential-time algorithms for p-KAGG, a problem in social choice theory, and for p-OSCM, a problem in graph drawing. These algorithms run in time O*(2^{O(√k log k)}), where k is the parameter, and significantly improve the previous best algorithms with running times O(1.403^k) and O(1.4656^k), respectively. We also study natural "above-guarantee" versions of these problems and show them to be fixed-parameter tractable. In fact, we show that the above-guarantee versions of these problems are equivalent to a weighted variant of p-Directed Feedback Arc Set. Our results for the above-guarantee version of p-KAGG reveal an interesting contrast. We show that when the number of "votes" in the input to p-KAGG is odd, the above-guarantee version can still be solved in time O*(2^{O(√k log k)}), while if it is even then the problem cannot have a subexponential-time algorithm unless the exponential time hypothesis fails (equivalently, unless FPT = M[1]).
The graph isomorphism problem involves determining whether two graphs are isomorphic and the computational complexity required for this determination. In general, the problem is not known to be solvable in polynomial time, nor to be NP-complete. In this paper, by analyzing the algebraic properties of the adjacency matrices of undirected graphs, we first establish the connection between graph isomorphism and matrix row and column interchange operations. Then, we prove that for undirected graphs, the complexity of determining whether two graphs are isomorphic is at most O(n^3).
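The row/column-interchange connection mentioned above can be phrased as: graphs with adjacency matrices A and B are isomorphic if and only if some permutation matrix P satisfies P·A·P^T = B. The brute-force check below only illustrates that characterisation; it enumerates all permutations and is exponential, not the O(n^3) procedure claimed in the paper.

```python
from itertools import permutations

def isomorphic_bruteforce(A, B):
    """Check whether some vertex permutation maps adjacency matrix A onto B.

    Illustrates the row/column-interchange characterisation of graph
    isomorphism only; factorial-time, not the paper's O(n^3) method."""
    n = len(A)
    if n != len(B):
        return False
    for perm in permutations(range(n)):
        if all(A[perm[i]][perm[j]] == B[i][j]
               for i in range(n) for j in range(n)):
            return True
    return False

# a 4-cycle labelled in two different ways
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
B = [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [1, 1, 0, 0],
     [1, 1, 0, 0]]
print(isomorphic_bruteforce(A, B))   # True
```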
Most traditional merging and merging-based sorting algorithms are based on 2-sorters or 2-comparators. A new merging technique, sloping-and-shaking multiway merging, is developed, and a corresponding multiway sorting method based only on k-sorters is proposed. The sloping-and-shaking merging algorithm merges k sorted lists into one, where k can be any prime number. The merging process is not a series of recursive applications of 2-way merging: it sorts the keys on an m × k plane in the vertical and horizontal directions, and then along sloping lines with various slope rates, step by step. Only k-sorters are needed in the merging or sorting process. The time needed to merge k sorted lists of m keys each is (k + [log2(m/k)])·tk, and the time for sorting N keys is (1 + (p-1)k + (1/2)(p-1)(p-2)[log2 k])·tk, where p = logk N and tk is the time to sort k keys. The proposed algorithms can be implemented either by hardware sorting networks or on general-purpose parallel and vector machines. Traditional odd-even merging can be viewed as a special case of the proposed multiway merging (when k is 2). While theoretically the proposed algorithms provide a new understanding of parallel merging and sorting processes, they may be used in practice to construct sorting circuits faster than 2-sorter-based sorting methods.
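The sloping-and-shaking merge itself is a parallel k-sorter network technique and is not reproduced here; for contrast with 2-way merging, the sketch below is the ordinary serial heap-based k-way merge of k sorted lists (the Python standard library's heapq.merge does the same job).

```python
import heapq

def kway_merge(sorted_lists):
    """Serial heap-based k-way merge of k sorted lists.

    Shown only as a baseline for contrast; the sloping-and-shaking
    k-sorter merge in the abstract is not reproduced here."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(sorted_lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)      # smallest head among all lists
        out.append(val)
        if j + 1 < len(sorted_lists[i]):
            heapq.heappush(heap, (sorted_lists[i][j + 1], i, j + 1))
    return out

print(kway_merge([[1, 4, 9], [2, 3, 8], [5, 6, 7]]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]
```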
This paper provides an algorithm for distribution search and proves its time complexity. The algorithm uses a mathematical formula to search for n elements in a sequence of n elements in O(n) expected time, and experimental results show that distribution search is superior to binary search.
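The paper's position formula is not given above; as a well-known relative of the idea of probing by value distribution rather than always probing the middle, here is interpolation search. It is a stand-in illustration under that assumption, not the paper's distribution search.

```python
def interpolation_search(a, key):
    """Interpolation search on a sorted list: estimate the key's position
    from the value distribution instead of always probing the middle.

    A well-known relative of 'distribution search'; the paper's own
    position formula is not reproduced here."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[hi] == a[lo]:                   # all remaining keys are equal
            break
        # linear interpolation of the probe position
        pos = lo + (key - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[pos] == key:
            return pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo <= hi and a[lo] == key else -1

data = list(range(0, 200, 2))           # roughly uniform keys
print(interpolation_search(data, 76))   # 38
print(interpolation_search(data, 77))   # -1 (absent)
```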