New Three-Term Conjugate Gradient Method with Exact Line Search

Published on Dec 1, 2020 in Matematika
DOI: 10.11113/MATEMATIKA.V36.N3.1214
Nurul Hafawati Fadhilah, Mohd Rivaie, and Nur Idalisa (UiTM: Universiti Teknologi MARA)
Abstract
Conjugate Gradient (CG) methods play an important role in solving large-scale unconstrained optimization problems. Nowadays, the Three-Term CG method has become a research trend among CG methods. However, the existing Three-Term CG methods can only be used with inexact line searches; when the exact line search is applied, they reduce to the standard CG method. Hence, in this paper, a new Three-Term CG method that can be used with the exact line search is proposed. This new Three-Term CG method satisfies the descent condition under the exact line search. Performance profiles based on numerical results show that the proposed method outperforms the well-known classical CG methods and some related hybrid methods. In addition, the proposed method is robust in terms of the number of iterations and CPU time.
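The abstract does not state the proposed search direction, so the Python sketch below only illustrates the generic three-term CG template the paper builds on: a direction of the form d = -g + beta*d_prev - theta*y_prev combined with an exact line search, here approximated numerically. The PRP-style choices of beta and theta and the bracketing interval are illustrative assumptions, not the formula of this paper.

import numpy as np
from scipy.optimize import minimize_scalar

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
    # Generic three-term CG skeleton; beta/theta below are PRP-style
    # placeholders, not the update proposed in the paper.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # "Exact" line search: minimize phi(alpha) = f(x + alpha*d) over alpha.
        alpha = minimize_scalar(lambda a: f(x + a * d),
                                bounds=(0.0, 10.0), method="bounded").x
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                       # gradient difference
        beta = (g_new @ y) / (g @ g)        # PRP-style coefficient (assumption)
        theta = (g_new @ d) / (g @ g)       # third-term coefficient (assumption)
        d = -g_new + beta * d - theta * y   # three-term direction
        x, g = x_new, g_new
    return x

# Usage: minimize the Rosenbrock function from the standard starting point.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(three_term_cg(f, grad, [-1.2, 1.0]))

With these PRP-style choices the direction always satisfies g^T d = -||g||^2, a descent direction for any step size; note, however, that an exact line search gives g_new^T d = 0, so theta vanishes and the iteration collapses to standard PRP. Avoiding exactly this collapse is the stated contribution of the paper.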
References (12)
#1 Wan Khadijah (H-Index: 1)
#2 Mohd Rivaie (H-Index: 8)
Recently, numerous studies have focused on conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method is proposed for unconstrained optimization that always satisfies the sufficient descent condition, named the Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL) method. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong Wolfe line search. Finally, numerical results are provided for...
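Because TTRMIL's global convergence is stated under the strong Wolfe line search, a short check of the two conditions a step size must satisfy may help. This is the standard formulation, with conventional constants c1 and c2 (0 < c1 < c2 < 1) rather than values from the TTRMIL paper.

import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    # Standard strong Wolfe conditions:
    # (1) sufficient decrease, (2) curvature with an absolute value.
    g0_d = grad(x) @ d
    decrease = f(x + alpha * d) <= f(x) + c1 * alpha * g0_d
    curvature = abs(grad(x + alpha * d) @ d) <= c2 * abs(g0_d)
    return decrease and curvature

# Usage on the quadratic f(x) = ||x||^2 along the steepest descent direction;
# alpha = 0.5 is the exact minimizer here, so both conditions hold.
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([1.0, 1.0])
print(satisfies_strong_wolfe(f, grad, x, -grad(x), alpha=0.5))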
#1 Xiaoliang Dong (H-Index: 9)
#2 Hongwei Liu (Xidian University; H-Index: 16)
#3 Yu Bo He (Huaihua University; H-Index: 2)
In this paper, a general form of three-term conjugate gradient method is presented, in which the search directions simultaneously satisfy the Dai-Liao conjugacy condition and the sufficient descent property. In addition, a choice for an optimal parameter is suggested, in the sense that the condition number of the iteration matrix arrives at its minimum, which can be regarded as an inheritance and development of the spectral scaling quasi-Newton equation. Different from the existing methods, ...
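For reference, the Dai-Liao conjugacy condition mentioned here is usually written as $d_{k+1}^{\top} y_k = -t\, g_{k+1}^{\top} s_k$, where $y_k = g_{k+1} - g_k$, $s_k = x_{k+1} - x_k$, and $t \ge 0$ is a parameter; with $t = 0$ it reduces to the classical conjugacy condition $d_{k+1}^{\top} y_k = 0$. This is the standard formulation rather than the specific parametrization of this paper.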
#1 Qingna Li (Hunan University; H-Index: 9)
#2 Dong-Hui Li (SCNU: South China Normal University; H-Index: 17)
#1 Neculai Andrei (H-Index: 21)
The paper presents some open problems associated with the nonlinear conjugate gradient algorithms for unconstrained optimization. Mainly, these problems refer to the initial direction, the conjugacy condition, the step length computation, new formulas for computing the conjugate gradient parameter based on function values, the influence of the accuracy of the line search procedure, how the problem's structure can be exploited in conjugate gradient algorithms, and how second-order information can be considered in ...
#1 Gonglin Yuan (Xida: Guangxi University; H-Index: 3)
#2 Sha Lu (H-Index: 3)
#3 Zengxin Wei (H-Index: 1)
It is well known that line search methods play a very important role in optimization. In this paper, a new line search method is proposed for solving unconstrained optimization problems. Under weak conditions, this method possesses global convergence and R-linear convergence for nonconvex and convex functions, respectively. Moreover, the generated search direction has the sufficient descent property and belongs to a trust region without carrying out any line search rule. Numerical results ...
#1 Z. K. Silagadze (BINP: Budker Institute of Nuclear Physics; H-Index: 34)
A two-dimensional generalization of the original peak finding algorithm suggested earlier is given. The ideology of the algorithm emerged from the well-known quantum mechanical tunneling property which enables small bodies to penetrate through narrow potential barriers. We merge this "quantum" ideology with the philosophy of Particle Swarm Optimization to get the global optimization algorithm which can be called Quantum Swarm Optimization. The functionality of the newborn algorithm is tested on so...
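The tunneling modification itself is not spelled out in this excerpt, but the Particle Swarm Optimization core it is merged with is compact enough to sketch in Python; all parameter values (inertia w, pull coefficients c1 and c2, swarm size) are conventional defaults, not Silagadze's.

import numpy as np

def pso(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimal PSO: each particle is pulled toward its own best point and the
    # swarm's best point. The quantum-tunneling twist of the paper is omitted.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Usage: a shifted sphere function on [-5, 5]^2; the minimum is at (1, 1).
print(pso(lambda z: np.sum((z - 1.0)**2), lo=[-5, -5], hi=[5, 5]))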
#1 Li Zhang (H-Index: 3)
#2 Weijun Zhou (H-Index: 5)
#3 Dong-Hui Li (H-Index: 17)
In this paper, we propose a modified Polak-Ribiere-Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if the exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with an Armijo-type line search is globally convergen...
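As best I recall, the modified PRP direction of Zhang, Zhou and Li takes the three-term form $d_k = -g_k + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}$ with $\beta_k^{PRP} = g_k^{\top} y_{k-1} / \|g_{k-1}\|^2$, $\theta_k = g_k^{\top} d_{k-1} / \|g_{k-1}\|^2$, and $y_{k-1} = g_k - g_{k-1}$; substituting shows $g_k^{\top} d_k = -\|g_k\|^2$ for any step size, which is exactly the line-search-independent descent property described above. Under the exact line search $g_k^{\top} d_{k-1} = 0$, so $\theta_k = 0$ and the ordinary PRP direction is recovered. Readers should verify the formula against the paper itself.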
#1 Yifan Hu (Lboro: Loughborough University; H-Index: 22)
#2 C. Storey (Lboro: Loughborough University; H-Index: 7)
Conjugate gradient optimization algorithms depend on the search directions $s^{(1)} = -g^{(1)}$ and $s^{(k+1)} = -g^{(k+1)} + \beta^{(k)} s^{(k)}$ for $k \ge 1$, with different methods arising from different choices for the scalar $\beta^{(k)}$. In this note, conditions are given on $\beta^{(k)}$ to ensure global convergence of the resulting algorithms.
A simulation test methodology was developed to evaluate unconstrained nonlinear optimization computer algorithms. The test technique simulates problems optimization algorithms encounter in practice by employing a repertoire of problems representing various topographies (descending curved valleys, saddle points, ridges, etc.), dimensions, degrees of nonlinearity (e.g., linear to exponential) and minima, addressing them from various randomly generated initial approximations to the solution and rec...
#1 Bruno F. W. Witte (GD: General Dynamics; H-Index: 1)
#2 William R. Holst (GD: General Dynamics; H-Index: 1)
The method can somewhat vaguely be classified as a modified "steepest descent." It is, of course, an iterative procedure. Each cycle, to be iterated on, consists essentially of two parts: in Part I a "best" line is found, in Part II an attempt is made to minimize the given function along this line. Thus, each cycle resembles the corresponding cycle in the method of steepest descent. The method of steepest descent differs from our method in the manner in which in Part I the "best" line is found. ...
Cited By (0)