New coefficient of three-term conjugate gradient method for solving unconstrained optimization problems

Published on Dec 1, 2019 · DOI: 10.1063/1.5136480
Nurul Hafawati Fadhilah, Mohd Rivaie, Nur Idalisa
Abstract
A new modified three-term conjugate gradient (CG) method is presented for solving large-scale optimization problems. The idea relates to the well-known Polak-Ribiere-Polyak (PRP) formula. Although the PRP numerator plays a vital role in numerical performance and avoids the jamming issue, the PRP method is not globally convergent. For the new three-term CG method, the idea is therefore to keep the PRP numerator and combine it with the denominator of any CG formula that performs well. The new modification of three-t...
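Since the abstract is truncated, the paper's exact coefficient cannot be reproduced here; the following is a minimal sketch of the general three-term, PRP-numerator update it describes, with the denominator and the third-term coefficient chosen as illustrative assumptions (a Zhang-Zhou-Li-style theta over the classical PRP denominator), not as the paper's formula.

import numpy as np

def three_term_prp_direction(g_new, g_old, d_old, eps=1e-12):
    # Sketch of a three-term CG direction with a PRP-style numerator.
    # beta and theta below are illustrative assumptions, not the
    # paper's new coefficient (the abstract above is truncated).
    y = g_new - g_old                     # gradient difference y_k
    denom = np.dot(g_old, g_old) + eps    # classical PRP denominator (assumed)
    beta = np.dot(g_new, y) / denom       # PRP numerator kept, as described
    theta = np.dot(g_new, d_old) / denom  # third-term coefficient (assumed)
    return -g_new + beta * d_old - theta * y

With this choice of theta, a direct calculation gives g_new^T d = -||g_new||^2 (up to eps), i.e. sufficient descent independent of the line search, which is the usual motivation for the third term.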
References (14)
Xiaoliang Dong, Hongwei Liu (Xidian University), Yu Bo He (Huaihua University)
In this paper, a general form of three-term conjugate gradient method is presented, in which the search directions simultaneously satisfy the Dai-Liao conjugacy condition and the sufficient descent property. In addition, a choice of optimal parameter is suggested, in the sense that the condition number of the iteration matrix attains its minimum; this can be regarded as the inheritance and development of the spectral scaling quasi-Newton equation. Different from existing methods, ...
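For reference, stated here from the standard literature rather than from this abstract: with $y_k = g_{k+1} - g_k$ and $s_k = x_{k+1} - x_k$, the Dai-Liao conjugacy condition and the sufficient descent property are usually written as $d_{k+1}^{\top} y_k = -t\, g_{k+1}^{\top} s_k$ for some $t \geq 0$, and $g_k^{\top} d_k \leq -c\, \|g_k\|^2$ for some constant $c > 0$.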
Qingna Li (Hunan University), Dong-Hui Li (SCNU: South China Normal University)
Gonglin Yuan (Xida: Guangxi University), Xiwen Lu (ECUST: East China University of Science and Technology), Zengxin Wei (Xida: Guangxi University)
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction automatically lies in a trust region; (iii) the Zoutendijk condition holds for the Wolfe-Powell line search technique; (iv) the method inherits an important property of the well-known Polak-Ribiere-Polyak (PRP) method: the tendency to turn towards the steepest descent direction...
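In the conventional notation (these formulations are standard, not quoted from the paper), the three analytic properties read: sufficient descent, $g_k^{\top} d_k \leq -c\, \|g_k\|^2$ for some $c > 0$; an automatic trust region, $\|d_k\| \leq \bar{c}\, \|g_k\|$ for some $\bar{c} > 0$; and the Zoutendijk condition, $\sum_{k \geq 0} (g_k^{\top} d_k)^2 / \|d_k\|^2 < \infty$.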
Li Zhang, Weijun Zhou, Dong-Hui Li
In this paper, we propose a modified Polak-Ribiere-Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction it generates is always a descent direction for the objective function, independently of the line search used. Moreover, if an exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with Armijo-type line search is globally convergent...
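A common statement of this modified PRP direction, taken from the standard literature on the method rather than from the truncated abstract, is $d_{k+1} = -g_{k+1} + \beta_k^{PRP} d_k - \theta_k y_k$ with $\beta_k^{PRP} = g_{k+1}^{\top} y_k / \|g_k\|^2$, $\theta_k = g_{k+1}^{\top} d_k / \|g_k\|^2$ and $y_k = g_{k+1} - g_k$; a direct calculation then gives $g_{k+1}^{\top} d_{k+1} = -\|g_{k+1}\|^2$, which explains why the descent property holds for any line search.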
Elizabeth D. Dolan (Argonne National Laboratory), Jorge J. Moré (Argonne National Laboratory)
We propose performance profiles (distribution functions for a performance metric) as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
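A minimal sketch of how such a profile is computed, assuming a cost matrix T with T[p, s] the time solver s needs on problem p (np.inf marking failures, and at least one solver assumed to succeed on every problem):

import numpy as np

def performance_profile(T, taus):
    # Dolan-More performance profile: rho[s, i] is the fraction of
    # problems that solver s solves within a factor taus[i] of the
    # best solver on each problem.
    n_prob, n_solv = T.shape
    best = T.min(axis=1, keepdims=True)   # best cost per problem
    ratios = T / best                     # performance ratios r_{p,s}
    rho = np.empty((n_solv, len(taus)))
    for i, tau in enumerate(taus):
        rho[:, i] = (ratios <= tau).sum(axis=0) / n_prob
    return rho

Plotting each row of rho against taus gives the familiar profile curves: the value at tau = 1 is the fraction of problems on which a solver is fastest, and the limit for large tau is its overall success rate.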
Jean Charles Gilbert (IRIA: French Institute for Research in Computer Science and Automation), Jorge Nocedal
This paper explores the convergence of nonlinear conjugate gradient methods without restarts, and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher-Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak-Ribiere method. Numerical experiments are presented.
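The Polak-Ribiere-related family here is commonly identified with the nonnegative variant usually written as $\beta_k^{PRP+} = \max\{\beta_k^{PRP}, 0\}$; this is the standard reading of the paper and is stated here as background rather than quoted from the abstract.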
Yifan Hu (Lboro: Loughborough University), C. Storey (Lboro: Loughborough University)
Conjugate gradient optimization algorithms depend on the search directions $s^{(1)} = -g^{(1)}$ and $s^{(k+1)} = -g^{(k+1)} + \beta^{(k)} s^{(k)}$ for $k \geq 1$, with different methods arising from different choices for the scalar $\beta^{(k)}$. In this note, conditions are given on $\beta^{(k)}$ to ensure global convergence of the resulting algorithms.
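For concreteness, three classical choices in this notation (standard formulas, added here for context) are $\beta_{FR}^{(k)} = \|g^{(k+1)}\|^2 / \|g^{(k)}\|^2$, $\beta_{PRP}^{(k)} = (g^{(k+1)})^{\top}(g^{(k+1)} - g^{(k)}) / \|g^{(k)}\|^2$ and $\beta_{HS}^{(k)} = (g^{(k+1)})^{\top}(g^{(k+1)} - g^{(k)}) / ((s^{(k)})^{\top}(g^{(k+1)} - g^{(k)}))$.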
This paper reviews some of the most successful methods for unconstrained, constrained and nondifferentiable optimization calculations. Particular attention is given to the contribution that theoretical analysis has made to the development of algorithms. It seems that practical considerations provide the main new ideas, and that subsequent theoretical studies give improvements to algorithms, coherence to the subject, and better understanding.
A simulation test methodology was developed to evaluate unconstrained nonlinear optimization computer algorithms. The test technique simulates the problems that optimization algorithms encounter in practice by employing a repertoire of problems representing various topographies (descending curved valleys, saddle points, ridges, etc.), dimensions, degrees of nonlinearity (e.g., linear to exponential) and minima, addressing them from various randomly generated initial approximations to the solution and rec...
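A minimal sketch of this kind of random-start testing, with the objective, the sampler and the tolerance chosen purely for illustration (scipy's CG solver stands in for the algorithm under test):

import numpy as np
from scipy.optimize import minimize

def multistart_success_rate(f, sample_x0, n_starts=50, f_star=0.0, tol=1e-6):
    # Run the solver from randomly generated initial points and record
    # how often it reaches the known minimum value f_star (assumed known).
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n_starts):
        result = minimize(f, sample_x0(rng), method="CG")
        hits += result.fun <= f_star + tol
    return hits / n_starts

# Example: a descending curved valley (Rosenbrock), one of the
# topographies mentioned above; minimum value 0 at (1, 1).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rate = multistart_success_rate(rosen, lambda rng: rng.uniform(-2.0, 2.0, 2))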
Cited By (0)