The Proof of Sufficient Descent Condition for a New Type of Conjugate Gradient Methods

Published on Jun 19, 2014 · DOI: 10.1063/1.4882502
Abdelrhaman Abashar, Mustafa Mamat, +2 authors, Osman Omer
Abstract
Conjugate gradient methods are effective for solving linear systems of equations and non-linear optimization problems. In this work we compare our new conjugate gradient coefficient βk with the classical formulas under the strong Wolfe line search; our method satisfies the sufficient descent condition. Numerical results show that the new βk performs better than the classical formulas.
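For orientation, the abstract's terms refer to the standard nonlinear conjugate gradient framework; the specific new βk is not reproduced on this page. The iteration is

\[ x_{k+1} = x_k + \alpha_k d_k, \qquad d_0 = -g_0, \qquad d_k = -g_k + \beta_k d_{k-1} \ (k \ge 1), \]

with g_k = \nabla f(x_k). The sufficient descent condition asks for a constant c > 0 such that

\[ g_k^\top d_k \le -c\,\|g_k\|^2 \quad \text{for all } k, \]

and the strong Wolfe line search chooses the step size α_k so that, for constants 0 < δ < σ < 1,

\[ f(x_k + \alpha_k d_k) \le f(x_k) + \delta\,\alpha_k\,g_k^\top d_k, \qquad |g(x_k + \alpha_k d_k)^\top d_k| \le \sigma\,|g_k^\top d_k|. \]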
References (6)
Mohd Rivaie (Universiti Teknologi MARA), Mustafa Mamat (Universiti Malaysia Terengganu), Ismail Mohd (Universiti Malaysia Terengganu), et al. (4 authors)
Nonlinear conjugate gradient (CG) methods have played an important role in solving large-scale unconstrained optimization. Their wide application in many fields is due to their low memory requirements and global convergence properties. Numerous studies and modifications have been conducted recently to improve this method. In this paper, a new class of conjugate gradient coefficients (βk) that possess global convergence properties is presented. The global convergence result is established...
56 citations.
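This reference is commonly associated with the RMIL coefficient; for orientation, it is usually quoted as below. The formula is not shown on this page, so treat this as a recollection of the literature rather than a quotation:

\[ \beta_k^{RMIL} = \frac{g_k^\top (g_k - g_{k-1})}{\|d_{k-1}\|^2}. \]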
Gonglin Yuan (Guangxi University), Xiwen Lu (East China University of Science and Technology), Zengxin Wei (Guangxi University) (3 authors)
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction will automatically be in a trust region; (iii) the Zoutendijk condition holds for the Wolfe-Powell line search technique; (iv) the method inherits an important property of the well-known Polak-Ribière-Polyak (PRP) method: the tendency to turn towards the steepest descent direction...
69 citations.
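Item (iii) above refers to the Zoutendijk condition, the standard ingredient in CG global convergence proofs:

\[ \sum_{k=0}^{\infty} \frac{(g_k^\top d_k)^2}{\|d_k\|^2} < \infty, \]

which, combined with the sufficient descent condition, yields \liminf_{k\to\infty} \|g_k\| = 0.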
Neculai Andrei
A collection of unconstrained optimization test functions is presented. The purpose of this collection is to give the optimization community a large number of general test functions to be used in testing unconstrained optimization algorithms and in comparison studies. For each function we give its algebraic expression and the standard initial point. Some of the test functions are from the CUTE collection established by Bongartz, Conn, Gould and Toint (1995); others are from Moré, Garbow and...
295 citations.
Elizabeth D. Dolan (Argonne National Laboratory), Jorge J. Moré (Argonne National Laboratory) (2 authors)
We propose performance profiles — distribution functions for a performance metric — as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
2,603 citations.
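The profile itself is simple to state: for a set of n_p problems and a solver s, with performance ratio r_{p,s} = t_{p,s} / \min_{s'} t_{p,s'} (t being, e.g., CPU time or iteration count),

\[ \rho_s(\tau) = \frac{1}{n_p}\,\bigl|\{\, p : r_{p,s} \le \tau \,\}\bigr|, \]

the fraction of problems that solver s solves within a factor τ of the best solver.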
Jean Charles Gilbert (INRIA: French Institute for Research in Computer Science and Automation), Jorge Nocedal (2 authors)
This paper explores the convergence of nonlinear conjugate gradient methods without restarts and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher-Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak-Ribière method. Numerical experiments are presented.
727 citations.
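For reference, the two classical coefficients contrasted in this abstract are

\[ \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{PRP} = \frac{g_k^\top (g_k - g_{k-1})}{\|g_{k-1}\|^2}. \]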
G. Zoutendijk
295 citations.
Cited By (14)
Srimazzura Basri, Mustafa Mamat, et al. (3 authors)
Unconstrained optimization is a widespread problem that can be solved by a mathematical technique known as the conjugate gradient method. This method is chosen for its simplicity and short computing time, reflected in a smaller number of iterations and a faster central processing unit (CPU) time. Motivated by this, and by the many modifications of the conjugate gradient parameter that have appeared, we pursue this line of research further. Therefore, in ...
Srimazzura Basri, Mustafa Mamat, et al. (3 authors)
The nonlinear conjugate gradient method is widely used in solving large-scale unconstrained optimization, since it has proven able to solve optimization problems without large memory storage. In this paper, we propose a new modification of the Hestenes-Stiefel conjugate gradient parameter that fulfils the sufficient descent condition under a strong Wolfe-Powell line search. Besides, the conjugate gradient method with the proposed parameter also guarantees low computation of i...
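The classical Hestenes-Stiefel coefficient being modified here is

\[ \beta_k^{HS} = \frac{g_k^\top y_{k-1}}{d_{k-1}^\top y_{k-1}}, \qquad y_{k-1} = g_k - g_{k-1}. \]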
Sukono, Mustafa Mamat, et al. (4 authors)
2 citations.
Mustafa Mamat, Mohd Rivaie, et al. (4 authors)
The best-known technique in unconstrained optimization is the conjugate gradient method, used to obtain the optimal solution of unconstrained optimization problems. This method is applied in many fields, especially computer science and engineering, due to its convergence speed, simplicity, and low memory requirements. A new modified conjugate gradient method is presented in this paper. It is proved, under the strong Wolfe-Powell (SWP) line search, to possess...
Srimazzura Basri (Universiti Sultan Zainal Abidin), Mustafa Mamat (Universiti Tun Hussein Onn Malaysia) (2 authors)
Nonlinear conjugate gradient methods are widely used in solving large-scale unconstrained optimization. Their wide application in many fields is due to their low memory requirements. Numerous studies have been conducted recently to improve these methods. In this paper, a new class of conjugate gradient coefficients that possess global convergence properties is proposed. The global convergence result using exact line searches is discussed. Numerical results show that the proposed method is more efficient...
2 citations.
Mustafa Mamat, Mohd Rivaie, et al. (3 authors)
The nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems. It is widely used due to its simplicity, and it is known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested on the number of iterations and central processing unit (CPU) time by...
1 citation.
Mustafa Mamat, Mohd Rivaie, et al. (3 authors)
The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization. Theoretical proofs show that the new method fulfils the sufficient descent condition when the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.
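These cited works all instantiate the same generic scheme. Below is a minimal runnable sketch of that scheme, not any specific paper's method: beta_fn is a placeholder coefficient (defaulting to Fletcher-Reeves), the restart safeguard is an illustrative choice, and SciPy's line_search supplies a step satisfying the strong Wolfe conditions.

```python
import numpy as np
from scipy.optimize import line_search  # finds a step satisfying the strong Wolfe conditions

def nonlinear_cg(f, grad, x0, beta_fn=None, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG loop; beta_fn(g_new, g_old, d_old) returns beta_k."""
    if beta_fn is None:
        # Default coefficient: Fletcher-Reeves
        beta_fn = lambda g_new, g_old, d_old: (g_new @ g_new) / (g_old @ g_old)
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                      # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                       # line search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
            if alpha is None:
                break
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + beta_fn(g_new, g, d) * d   # CG direction update
        if g_new @ d >= 0:                      # safeguard: enforce a descent direction
            d = -g_new
        g = g_new
    return x

# Example: minimize the Rosenbrock function from the standard starting point
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(nonlinear_cg(f, grad, [-1.2, 1.0]))  # converges to approximately [1., 1.]
```

Swapping in a different βk (HS, PRP, or a modified coefficient like those above) only changes beta_fn; the descent safeguard keeps the direction usable even when the chosen βk would not guarantee it on its own.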
Norrlaili Shapiee, Mohd Rivaie, Mustafa Mamat (3 authors)
In this paper, we propose a new classical conjugate gradient method. Its global convergence is established using an exact line search. Numerical results, based on the number of iterations and CPU time, show that our method performs better than the classical CG methods on a set of standard test problems.
5 citations.
Nurul Hajar, Mustafa Mamat, Ibrahim Jusoh, et al. (4 authors)
Nowadays, conjugate gradient (CG) methods are appealing for solving nonlinear unconstrained optimization problems. In this paper, a new CG method is proposed and analyzed. The new method satisfies the descent condition, and its global convergence is established using an exact line search. Numerical results show that it substantially outperforms previous CG methods, and it proves robust and efficient, with fast and stable convergence.
7 citations.