In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm with either a Wolfe-type or an Armijo-type line search converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
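For reference, the textbook Armijo and Wolfe acceptance tests that such inexact line searches build on can be sketched as follows. This is a minimal illustration of the standard conditions, not the paper's specific Wolfe-type and Armijo-type variants; `f` and `grad` are assumed to be callables returning the objective value and gradient.

```python
def armijo_condition(f, grad, x, d, alpha, c1=1e-4):
    """Standard Armijo (sufficient decrease) test for a step size alpha
    along a descent direction d."""
    return f(x + alpha * d) <= f(x) + c1 * alpha * grad(x).dot(d)

def wolfe_conditions(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Standard (weak) Wolfe conditions: sufficient decrease plus a
    curvature requirement on the gradient at the trial point."""
    decrease = f(x + alpha * d) <= f(x) + c1 * alpha * grad(x).dot(d)
    curvature = grad(x + alpha * d).dot(d) >= c2 * grad(x).dot(d)
    return decrease and curvature
```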
Structure learning of Bayesian networks is a well-researched but computationally hard task. For learning Bayesian networks, this paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization (U-ACO-B) to overcome the drawbacks of the ant colony optimization algorithm (ACO-B). In this algorithm, an unconstrained optimization problem is first solved to obtain an undirected skeleton, and the ACO algorithm is then used to orient the edges, thus returning the final structure. In the experimental part of the paper, we compare the performance of the proposed algorithm with the ACO-B algorithm. The experimental results show that our method is effective and converges considerably faster than the ACO-B algorithm. Funding: supported by the National Natural Science Foundation of China (60974082, 11171094), the Fundamental Research Funds for the Central Universities (K50510700004), the Foundation and Advanced Technology Research Program of Henan Province (102300410264), and the Basic Research Program of the Education Department of Henan Province (2010A110010).
In this paper a hybrid algorithm which combines the pattern search method and the genetic algorithm for unconstrained optimization is presented. The algorithm is a deterministic pattern search algorithm, but in the search step the trial points are produced in a manner similar to the genetic algorithm: at each iteration, a finite set of points is generated by reproduction, crossover, and mutation. In theory, the algorithm is globally convergent. Most strikingly, the numerical results show that it can find the global minimizer for some problems on which other pattern search algorithms fail.
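As an illustration of how GA-style operators might produce the trial points in the search step, here is a minimal sketch; the operator choices (random parent selection, convex blend crossover, Gaussian mutation) and all parameters are assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_trial_points(population, n_trials, sigma=0.1, p_mut=0.2):
    """Produce trial points via reproduction (parent selection), a convex
    blend crossover, and Gaussian mutation. Illustrative choices only."""
    n, dim = population.shape
    trials = []
    for _ in range(n_trials):
        i, j = rng.choice(n, size=2, replace=False)   # reproduction: pick parents
        w = rng.random()                               # crossover: convex blend
        child = w * population[i] + (1.0 - w) * population[j]
        if rng.random() < p_mut:                       # mutation: Gaussian jitter
            child = child + sigma * rng.standard_normal(dim)
        trials.append(child)
    return np.array(trials)
```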
In this paper, an efficient conjugate gradient method is given to solve general unconstrained optimization problems, which guarantees the sufficient descent property and global convergence under the strong Wolfe line search conditions. Numerical results show that the new method is efficient and stable in comparison with the PRP+ method, so it can be widely used in scientific computation. Funding: supported by the Fund of the Chongqing Education Committee (KJ091104).
Many methods have been put forward to solve unconstrained optimization problems, among which the conjugate gradient (CG) method is very important. With the increasing emergence of large-scale problems, subspace technology has become particularly important and widely used in the field of optimization. In this study, a new CG method is put forward which combines subspace technology with a cubic regularization model. Besides, a special scaled norm in the cubic regularization model is analyzed. Under certain conditions, some significant characteristics of the search direction are given and the convergence of the algorithm is established. Numerical comparisons on 145 test functions from the CUTEr library show that the proposed method outperforms two classical CG methods and two recent subspace conjugate gradient methods. Funding: sponsored by the National Natural Science Foundation of China (11901561).
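For context, the standard cubic regularization model evaluates a candidate step d as sketched below. The plain Euclidean norm is used here for illustration, whereas the paper analyzes a special scaled norm in the cubic term.

```python
import numpy as np

def cubic_model(f_k, g_k, B_k, sigma_k, d):
    """Standard cubic-regularization model value at step d:
    m(d) = f_k + g_k^T d + 0.5 d^T B_k d + (sigma_k / 3) * ||d||^3."""
    return (f_k + g_k.dot(d) + 0.5 * d.dot(B_k.dot(d))
            + (sigma_k / 3.0) * np.linalg.norm(d) ** 3)
```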
Two new formulas for the main parameter β_k of the conjugate gradient method are presented, which can be seen as modifications of the HS and PRP methods, respectively. In comparison with classic conjugate gradient methods, the new methods exploit both the available gradient and function value information. Furthermore, their modifications are proposed. These methods are shown to be globally convergent under some assumptions. Numerical results are also reported. Funding: supported by the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutes of the Ministry of Education, the Natural Science Foundation of Inner Mongolia Autonomous Region (2010BS0108), and SPH-IMU (Z20090135).
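For reference, the classical HS and PRP formulas that the new ones modify are shown below, with y_k = g_{k+1} − g_k; the modified formulas additionally incorporate function values and are not reproduced here.

```python
def beta_hs(g_new, g_old, d_old):
    """Classical Hestenes-Stiefel parameter."""
    y = g_new - g_old
    return g_new.dot(y) / d_old.dot(y)

def beta_prp(g_new, g_old):
    """Classical Polak-Ribiere-Polyak parameter."""
    y = g_new - g_old
    return g_new.dot(y) / g_old.dot(g_old)

def cg_direction(g_new, d_old, beta):
    """Conjugate gradient direction update: d_{k+1} = -g_{k+1} + beta * d_k."""
    return -g_new + beta * d_old
```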
We present an improved quasi-Newton method. Assuming that the objective function is twice continuously differentiable and uniformly convex, we discuss the global and superlinear convergence of the improved method.
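The abstract does not say which quasi-Newton update is being improved; for orientation, the standard BFGS update that analyses of this kind typically start from is sketched here.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B, given the step
    s = x_{k+1} - x_k and the gradient change y = g_{k+1} - g_k."""
    Bs = B.dot(s)
    return B - np.outer(Bs, Bs) / s.dot(Bs) + np.outer(y, y) / y.dot(s)
```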
In this paper we propose a new family of curve search methods for unconstrained optimization problems, which are based on searching for a new iterate along a curve through the current iterate at each iteration, whereas line search methods find the new iterate on a line starting from the current iterate. The global convergence and linear convergence rate of these curve search methods are investigated under some mild conditions. Numerical results show that some curve search methods are stable and effective in solving large-scale minimization problems.
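As a purely illustrative example of the idea, a parabolic curve through the current iterate reduces to an ordinary line search when its second-order term vanishes; the paper's family of curves is not specified here, so the form below is an assumption.

```python
def parabolic_curve_point(x, d, s, alpha):
    """One hypothetical search curve x(alpha) = x + alpha*d + alpha^2*s;
    with s = 0 it degenerates to the usual line x + alpha*d."""
    return x + alpha * d + alpha ** 2 * s
```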
Gradient methods are popular for solving large-scale problems. In this work, the cyclic gradient methods for quadratic function minimization are extended to general smooth unconstrained optimization problems. Combined with a nonmonotone line search, we prove their global convergence. Furthermore, the proposed algorithms have a sublinear convergence rate for general convex functions and an R-linear convergence rate for strongly convex problems. Numerical experiments show that the proposed methods are effective compared with the state of the art. Funding: supported by the National Natural Science Foundation of China (12171051 and 11871115).
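A minimal sketch of the cyclic idea follows: one step size is reused for a fixed number of iterations before being refreshed. The Barzilai-Borwein refresh rule, the cycle length, and the omission of the nonmonotone line search safeguard are all simplifying assumptions.

```python
def cyclic_gradient(grad, x0, alpha0=1e-3, cycle_len=4, n_iters=100):
    """Cyclic gradient sketch: reuse the current step size for cycle_len
    iterations, then refresh it (here via a Barzilai-Borwein formula)."""
    x, g, alpha = x0.copy(), grad(x0), alpha0
    for k in range(n_iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        if (k + 1) % cycle_len == 0:        # refresh once per cycle
            s, y = x_new - x, g_new - g
            if abs(s.dot(y)) > 1e-12:
                alpha = s.dot(s) / s.dot(y)
        x, g = x_new, g_new
    return x
```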
This paper puts forward a two-parameter family of nonlinear conjugate gradient (CG) methods without line search for solving unconstrained optimization problems. The main feature of these methods is that they do not rely on any line search and only require a simple step size formula to always generate a sufficient descent direction. Under certain assumptions, the proposed methods are proved to possess global convergence. Finally, our method is compared with other competing methods. A large number of numerical experiments show that our method is more competitive and effective. Funding: supported by the 2023 General Scientific Research Project for Universities Directly under Inner Mongolia, China, of Inner Mongolia University of Finance and Economics (NCYWT23026), and the 2024 High-quality Research Achievements Cultivation Fund Project of Inner Mongolia University of Finance and Economics, China (GZCG2479).
This paper presents a new class of quasi-Newton methods for solving unconstrained minimization problems. The methods can be regarded as a generalization of the Huang class of quasi-Newton methods. We prove that the directions and the iterates generated by the methods of the new class depend only on the parameter p if exact line searches are performed at each step.
Conjugate gradient methods are very important methods for unconstrained optimization, especially for large scale problems. In this paper, we propose a new conjugate gradient method in which the technique of nonmonotone line search is used. Under mild assumptions, we prove the global convergence of the method. Some numerical results are also presented. Funding: supported by the National Natural Science Foundation of China (19801033, 10171104).
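For reference, a nonmonotone line search of the Grippo-Lampariello-Lucidi type accepts a step when the trial value improves on the maximum of the last few objective values rather than on the latest one. The sketch below shows the standard condition, which may differ in detail from the paper's rule.

```python
def nonmonotone_armijo(f, x, d, g_dot_d, alpha, recent_f, c1=1e-4):
    """Accept alpha if f(x + alpha*d) <= max(recent objective values)
    + c1*alpha*g^T d. `recent_f` holds the last M values of f and is
    assumed to be maintained by the caller."""
    return f(x + alpha * d) <= max(recent_f) + c1 * alpha * g_dot_d
```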
Focuses on a study which examined the modification of approximate trust region methods via two curvilinear paths for unconstrained optimization. Covers properties of the curvilinear paths; a description of a method which combines a line search technique with an approximate trust region algorithm; the convergence analysis; and details of the numerical experiments. Funding: supported by the Chinese National Science Foundation (grant 10071050) and the Science and Technology Foundation of Shanghai Higher Education.
In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second includes five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from CUTE, a standard code of test problems. The conjugate gradient methods are ranked according to the numerical results, and some remarks are given. Funding: research partially supported by Chinese NSF grants 19801033, 19771047 and 10171104.
In this report we present some new numerical methods for unconstrained optimization. These methods apply update formulae that do not satisfy the quasi-Newton equation. We derive these new formulae by considering different techniques of approximating the objective function. Theoretical analyses are given to show the advantages of using non-quasi-Newton updates. Under mild conditions we prove that our new update formulae preserve global convergence properties. Numerical results are also presented.
In this paper, we propose an improved trust region method for solving unconstrained optimization problems. Unlike traditional trust region methods, our algorithm does not solve the subproblem within the trust region centered at the current iterate; instead, when the current iterate lies on the boundary set, the subproblem is solved within an improved region centered at a point located in the direction of the negative gradient. We prove the global convergence properties of the new improved trust region algorithm and give computational results which demonstrate its effectiveness. Funding: supported by the National Natural Science Foundation of China (60903088 and 11101115), the Natural Science Foundation of Hebei Province (A2010000188), and the Doctoral Foundation of Hebei University (2008136).
Trust region (TR) algorithms are a class of recently developed algorithms for nonlinear optimization. A new family of TR algorithms for unconstrained optimization, which extends the usual TR method, is presented in this paper. When the objective function is bounded below and continuously differentiable, and the norm of the Hessian approximations increases at most linearly with the iteration number, we prove the global convergence of the algorithms. Limited numerical results are reported, which indicate that our new TR algorithm is competitive. Funding: research partly supported by Chinese NSF grants 19731001 and 19801033; the second author gratefully acknowledges the support of the National 973 Information Technology and High-Performance Software Program of China (grant G1998030401) and K. C. Wong E…
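For orientation, the classical trust region acceptance test and radius update that such families generalize can be sketched as follows; the thresholds and scaling factors are conventional textbook choices, not those of the new family.

```python
import numpy as np

def tr_ratio_and_radius(f, x, d, model_decrease, delta, eta1=0.25, eta2=0.75):
    """Classical trust region step: compare actual to predicted reduction,
    then shrink or expand the radius delta accordingly."""
    rho = (f(x) - f(x + d)) / model_decrease
    if rho < eta1:                                       # poor agreement: shrink
        delta = 0.5 * delta
    elif rho > eta2 and np.linalg.norm(d) >= 0.99 * delta:
        delta = 2.0 * delta                              # very good agreement: expand
    accept = rho >= eta1
    return accept, delta
```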
In this paper, we present a new adaptive trust-region method for solving nonlinear unconstrained optimization problems. More precisely, the trust-region radius is based on a nonmonotone technique and uses an adaptively chosen approximation of the Hessian. We produce a suitable trust-region radius, preserve global convergence to first-order critical points under classical assumptions, and improve the practical performance of the new algorithm compared with other existing variants. Moreover, the quadratic convergence rate is established under suitable conditions. Computational results on the CUTEst test collection of unconstrained problems are presented to show the effectiveness of the proposed algorithm compared with some existing methods.
A new algorithm for unconstrained optimization is developed by using the product form of the OCSSR1 update. The implementation is especially useful when gradient information is estimated by difference formulae. Preliminary tests show that the new algorithm can perform well.
This paper studies a substitution secant/finite difference (SSFD) method for solving large-scale sparse unconstrained optimization problems. This method is a combination of a secant method and a finite difference method, and depends on a consistent partition of the columns of the lower triangular part of the Hessian matrix. A q-superlinear convergence result and an r-convergence rate estimate show that this method has good local convergence properties. The numerical results show that this method may be competitive with some currently used algorithms. Funding: supported by the National Natural Science Foundation of China (10471015) and the State Foundation of Ph.D. Units of China (20020141013).
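To make the finite difference ingredient concrete, a single Hessian column can be estimated from two gradient evaluations as below; the SSFD method's consistent column partition, which lets one gradient difference recover several sparse columns at once, is not shown.

```python
import numpy as np

def fd_hessian_column(grad, x, j, h=1e-6):
    """Forward-difference estimate of column j of the Hessian from
    gradient values: (grad(x + h*e_j) - grad(x)) / h."""
    e = np.zeros_like(x)
    e[j] = 1.0
    return (grad(x + h * e) - grad(x)) / h
```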