Global convergence of some modified PRP nonlinear conjugate gradient methods

Published on Nov 1, 2011 in Optimization Letters
· DOI: 10.1007/S11590-010-0224-8
#1 Zhifeng Dai (Hunan University) H-Index: 10
#2 Bo-Shi Tian (CSUST: Changsha University of Science and Technology) H-Index: 1
Abstract
Recently, similar to Hager and Zhang (SIAM J Optim 16:170–192, 2005), Yu (Nonlinear self-scaling conjugate gradient methods for large-scale optimization problems. Doctoral thesis, Sun Yat-Sen University, 2007) and Yuan (Optim Lett 3:11–21, 2009) proposed modified PRP conjugate gradient methods which generate sufficient descent directions without any line search. To establish global convergence of their algorithms, however, they need the assumption that the stepsize is bounded away from zero. In this paper, we slightly modify these methods so that they retain the sufficient descent property. Without requiring a positive lower bound on the stepsize, we prove that the proposed methods are globally convergent. Some numerical results are also reported.
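The sufficient descent construction the abstract refers to can be illustrated with a three-term direction of the Zhang–Zhou–Li type, for which the identity g_k^T d_k = -||g_k||^2 holds by construction, independent of the line search. The following sketch is illustrative only, not the paper's exact algorithm; the function names and Armijo constants are ours:

```python
import numpy as np

def mprp_direction(g, g_prev, d_prev):
    # Three-term modified PRP direction (Zhang-Zhou-Li pattern):
    # d = -g + beta * d_prev - theta * y, with y = g - g_prev.
    # By construction g^T d = -||g||^2, whatever line search is used.
    y = g - g_prev
    denom = np.dot(g_prev, g_prev)
    beta = np.dot(g, y) / denom
    theta = np.dot(g, d_prev) / denom
    return -g + beta * d_prev - theta * y

def minimize(f, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking; no positive lower bound on t is assumed,
        # since g^T d = -||g||^2 guarantees descent for small t.
        t, fx, gg = 1.0, f(x), np.dot(g, g)
        while f(x + t * d) > fx - 1e-4 * t * gg:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        d = mprp_direction(g_new, g, d)
        x, g = x_new, g_new
    return x
```

Because the descent identity is exact at every iterate, the convergence analysis need not assume the line search produces steps bounded away from zero, which is the point the abstract makes.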
References 19
#1 Jianguo Zhang (Henan University) H-Index: 1
#2 Yunhai Xiao H-Index: 14
Last. Zengxin Wei H-Index: 22
view all 3 authors...
Two nonlinear conjugate gradient-type methods for solving unconstrained optimization problems are proposed. An attractive property of the methods is that, without any line search, the generated directions are always descent directions. Under some mild conditions, global convergence results for both methods are established. Preliminary numerical results show that the proposed methods are promising and competitive with the well-known PRP method.
28 Citations
#1 Gonglin Yuan (Xida: Guangxi University) H-Index: 1
It is well known that the sufficient descent condition is very important to the global convergence of the nonlinear conjugate gradient method. In this paper, some modified conjugate gradient methods which possess this property are presented. The global convergence of these proposed methods with the weak Wolfe–Powell (WWP) line search rule is established for nonconvex function under suitable conditions. Numerical results are reported.
108 Citations
A modification of the Dai–Yuan conjugate gradient algorithm is proposed. Using exact line search, the algorithm reduces to the original version of the Dai and Yuan computational scheme. For inexact line search the algorithm satisfies both sufficient descent and conjugacy conditions. A global convergence result is proved when the Wolfe line search conditions are used. Computational results, for a set consisting of 750 unconstrained optimization test problems, show that this new conjugate...
24 Citations
#1 Li Zhang H-Index: 3
#2 Weijun Zhou H-Index: 5
Last. Dong-Hui Li H-Index: 17
view all 3 authors...
In this paper, we propose a modified Polak-Ribiere-Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with Armijo-type line search is globally convergen...
305 Citations
#1 Li Zhang (CSUST: Changsha University of Science and Technology) H-Index: 3
#2 Weijun Zhou (CSUST: Changsha University of Science and Technology) H-Index: 5
Last. Dong-Hui Li (Hunan University) H-Index: 17
view all 3 authors...
In this paper, we are concerned with the conjugate gradient methods for solving unconstrained optimization problems. It is well known that the direction generated by a conjugate gradient method may not be a descent direction of the objective function. In this paper, we make a slight modification to the Fletcher–Reeves (FR) method such that the direction generated by the modified method provides a descent direction for the objective function. This property depends neither on the line search used,...
177 Citations
A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes--Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme satisfies the descent condition $\mathbf{g}_k^{\mathsf{T}} \mathbf{d}_k \le -\frac{7}{8} \|\mathbf{g}_k\|^2$. Moreover, a global convergence result is established when the line search fulfills the Wolfe conditions. A n...
686 Citations
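A notable feature of the Hager–Zhang descent bound quoted in this abstract is that it holds for arbitrary vectors with d^T y ≠ 0, not only along actual CG iterates. A small numerical check of this property (the helper name is ours; the β formula is the published Hager–Zhang choice):

```python
import numpy as np

def hz_beta(g_new, y, d):
    # Hager-Zhang beta: (y - 2 d ||y||^2 / (d^T y))^T g_new / (d^T y),
    # assuming d^T y != 0.
    dy = np.dot(d, y)
    return np.dot(y - 2.0 * d * np.dot(y, y) / dy, g_new) / dy

# Empirical check: the 7/8 descent bound holds for random vectors.
rng = np.random.default_rng(0)
for _ in range(1000):
    g_new, y, d = (rng.standard_normal(5) for _ in range(3))
    d_new = -g_new + hz_beta(g_new, y, d) * d
    lhs = np.dot(g_new, d_new)
    assert lhs <= -0.875 * np.dot(g_new, g_new) + 1e-8 * max(1.0, abs(lhs))
```

The bound follows from maximizing the quadratic in (d^T g)/(d^T y) that appears in g^T d_new, which tops out at ||g||^2/8.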
#1 William W. Hager (UF: University of Florida) H-Index: 57
#2 Hongchao Zhang (UF: University of Florida) H-Index: 24
This paper reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to global convergence properties.
526 Citations
#1 Elizabeth D. Dolan (Argonne National Laboratory) H-Index: 3
#2 Jorge J. Moré (Argonne National Laboratory) H-Index: 50
We propose performance profiles — distribution functions for a performance metric — as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
2,657 Citations
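The Dolan–Moré profile ρ_s(τ) is the fraction of problems on which solver s is within a factor τ of the best solver on that problem. A minimal sketch of the computation (function name and array layout are ours):

```python
import numpy as np

def performance_profile(T, taus):
    # T[p, s]: cost of solver s on problem p (np.inf marks a failure).
    # Returns rho[i, s]: fraction of problems solver s solves within a
    # factor taus[i] of the best solver on that problem.
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)   # per-problem best cost
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])
```

At τ = 1 each column gives a solver's win rate; as τ grows, the curve approaches the fraction of problems the solver eventually solves, so failures (np.inf) never count.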
Conjugate gradient methods are widely used for unconstrained optimization, especially large scale problems. The strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, which converges globally, provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, being similar to those required by the Zoutendijk condition.
736 Citations
#1 Luigi Grippo (Sapienza University of Rome) H-Index: 19
#2 Stefano Lucidi (Sapienza University of Rome) H-Index: 34
In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribiere conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak-Ribiere iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters in a way that the first stationary point along a search direction can...
194 Citations
Cited By 25
#1 Junyue Cao (CAS: Chinese Academy of Sciences)
view all 3 authors...
It is well known that the nonlinear conjugate gradient algorithm is one of the effective algorithms for optimization problems, since it has low storage requirements and a simple structure. This motivates us to study further and design a modified conjugate gradient formula for the optimization model; the proposed conjugate gradient algorithm possesses several properties: (1) the search direction incorporates not only the gradient value but also the function value; (2) the presented direction h...
#1 Muhammad Umair Khan (University of Gaziantep) H-Index: 1
#2 Tolgay Kara (University of Gaziantep) H-Index: 6
The objective of this study is to design an optimal control scheme for a class of nonlinear flexible multi-body systems with strongly coupled dynamics and infinite dimensions. The assumed mode method (AMM) yields a finite-dimensional model, but uncertainties in the truncated model make the system a difficult control problem. The proposed control scheme is a hybrid of sliding mode control (SMC) and a type-2 neural fuzzy system (NFS). A new modified conjugate...
1 Citation
In this article, a modified Polak-Ribiere-Polyak (PRP) conjugate gradient method is proposed for image restoration. The presented method can generate sufficient descent directions without any line search conditions. Under some mild conditions, this method is globally convergent with the Armijo line search. Moreover, the linear convergence rate of the modified PRP method is established. The experimental results of unconstrained optimization, image restoration, and compressive sensing show that th...
2 Citations
#1 Junyue Cao (CAS: Chinese Academy of Sciences) H-Index: 1
#2 Jinzhao Wu (Xida: Guangxi University) H-Index: 2
In this paper, a nonlinear conjugate gradient algorithm is presented. The given algorithm possesses the following properties: (i) the sufficient descent property is satisfied; (ii) the trust region feature holds; (iii) global convergence is proved for nonconvex functions; (iv) it is applied to image restoration problems to show the performance of the proposed algorithm. Note that properties (i) and (ii) are obtained without other special conditions.
5 Citations
#1 Muhammad Umair Khan (University of Gaziantep) H-Index: 1
#2 Tolgay Kara (University of Gaziantep) H-Index: 6
This paper presents a simple novel intelligent control scheme. The devised control scheme is a Takagi-Sugeno-Kang (TSK)-based type-2 neural fuzzy system (NFS) with a self-tuning mechanism optimized via a conjugate gradient (CG) method. The defuzzification phase of the NFS comprises T-norms rather than the conventional product–sum combination. The proposed control scheme is incorporated with a two-link flexible manipulator (TLFM), which belongs to the class of multi-body discrete/distributed, nonlin...
2 Citations
#1 Zhifeng Dai H-Index: 1
#2 Huan Zhu H-Index: 1
The goal of this paper is to extend the modified Hestenes-Stiefel method to solve large-scale nonlinear monotone equations. The method is obtained by combining the hyperplane projection method (Solodov, M.V.; Svaiter, B.F., A globally convergent inexact Newton method for systems of monotone equations, in: M. Fukushima, L. Qi (Eds.), Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic Publishers, 1998, 355–369) and the modified Hestenes-Stiefel method in Da...
22 Citations
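The Solodov–Svaiter framework referenced here alternates a line search along a descent direction for the monotone map F with a projection of the current iterate onto a separating hyperplane. A minimal sketch of one iteration (the function name and line-search constants are illustrative, not the paper's):

```python
import numpy as np

def hyperplane_projection_step(F, x, d, sigma=1e-4, beta=0.5):
    # One iteration of a Solodov-Svaiter-style projection method for
    # monotone F(x) = 0. d must satisfy F(x)^T d < 0 (e.g. d = -F(x)).
    # Backtrack until -F(x + a d)^T d >= sigma * a * ||d||^2.
    alpha = 1.0
    while -np.dot(F(x + alpha * d), d) < sigma * alpha * np.dot(d, d):
        alpha *= beta
    z = x + alpha * d
    Fz = F(z)
    # Monotonicity of F makes the hyperplane {u : F(z)^T (u - z) = 0}
    # separate x from the solution set; project x onto it.
    return x - (np.dot(Fz, x - z) / np.dot(Fz, Fz)) * Fz
```

Any method that supplies a descent-like direction d (Hestenes-Stiefel-type CG directions included) can be plugged into this step, which is what makes the combination in the abstract natural.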
#1 Bakhtawar Baluch (UMT: Universiti Malaysia Terengganu) H-Index: 2
#2 Zabidin Salleh (UMT: Universiti Malaysia Terengganu) H-Index: 8
Last. Ahmad Alhawarat (Isra University) H-Index: 5
view all 3 authors...
This paper describes a modified three-term Hestenes–Stiefel (HS) method. The original HS method is the earliest conjugate gradient method. Although the HS method achieves global convergence using an exact line search, this is not guaranteed in the case of an inexact line search. In addition, the HS method does not usually satisfy the descent property. Our modified three-term conjugate gradient method possesses a sufficient descent property regardless of the type of line search and guarantees glo...
7 Citations
#1 Xiaoliang Wang (DUT: Dalian University of Technology)
#2 Wujie Hu (Xida: Guangxi University) H-Index: 1
Last. Gonglin Yuan (Xida: Guangxi University) H-Index: 2
view all 3 authors...
This paper presents a modified Wei-Yao-Liu conjugate gradient method, which possesses not only the sufficient descent property but also the trust region property, without carrying out any line search technique. Global convergence for unconstrained optimization problems is established under the weak Wolfe-Powell (WWP) line search. Meanwhile, the present method can be extended to solve nonlinear equations. Under some mild conditions, a line search method and projection technique, the ...
#1 Gonglin Yuan (Xida: Guangxi University) H-Index: 2
#2 Tingting Li (Xida: Guangxi University)
It is well known that the DY conjugate gradient method is one of the most efficient optimization algorithms, since it fully utilizes the current information of the search direction and the gradient. It is regrettable that the DY conjugate gradient algorithm fails to address large-scale optimization models and that few researchers have paid much attention to modifying it. Thus, to solve large-scale unconstrained optimization problems, a modified DY conjugate gradient algorithm under the Yuan-Wei-Lu line searc...
A new modified three-term conjugate gradient (CG) method is presented for solving large-scale optimization problems. The idea relates to the famous Polak-Ribiere-Polyak (PRP) formula. The numerator of the PRP formula plays a vital role in numerical results and does not suffer from the jamming issue, yet the PRP method is not globally convergent. So, for the new three-term CG method, the idea is to use the PRP numerator and combine it with the denominator of any well-performing CG formula. The new modification of three-t...
5 Citations