An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart property

Published on Apr 6, 2016 in Journal of Inequalities and Applications
DOI: 10.1186/S13660-016-1049-5
Zabidin Salleh (UMT: Universiti Malaysia Terengganu), Ahmad Alhawarat (UMT: Universiti Malaysia Terengganu)
Abstract
The conjugate gradient (CG) method is one of the most popular methods to solve nonlinear unconstrained optimization problems. The Hestenes-Stiefel (HS) CG formula is considered one of the most efficient methods developed in this century. In addition, the HS coefficient is related to the conjugacy condition regardless of the line search method used. However, the HS parameter may not satisfy the global convergence properties of the CG method with the Wolfe-Powell line search if the descent condition is not satisfied. In this paper, we use the original HS CG formula with a mild condition to construct a CG method with restart using the negative gradient. The convergence and descent properties with the strong Wolfe-Powell (SWP) and weak Wolfe-Powell (WWP) line searches are established. Using this condition, we guarantee that the HS formula is non-negative, its value is restricted, and the number of restarts is not too high. Numerical computations with the SWP line search and some standard optimization problems demonstrate the robustness and efficiency of the new version of the CG parameter in comparison with the latest and classical CG formulas. An example is used to describe the benefit of using different initial points to obtain different solutions for multimodal optimization functions.
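For orientation, since the abstract does not reproduce the formulas: the classical Hestenes-Stiefel coefficient and the restarted direction update, in the usual notation (g_k = ∇f(x_k), y_{k-1} = g_k − g_{k-1}, d_k the search direction), take the form

```latex
\beta_k^{HS} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}},
\qquad
d_k =
\begin{cases}
 -g_k + \beta_k^{HS}\, d_{k-1}, & \text{if the safeguarding condition holds},\\
 -g_k, & \text{otherwise (restart with the negative gradient)}.
\end{cases}
```

Below is a minimal runnable sketch of such a restarted HS method. The paper's exact safeguarding condition is not stated in the abstract, so the clipping rule (beta non-negative and restricted via the parameter mu), the function name hs_cg_restart, and all defaults are our assumptions; SciPy's line_search implements the strong Wolfe conditions.

```python
import numpy as np
from scipy.optimize import line_search  # strong Wolfe line search

def hs_cg_restart(f, grad, x0, tol=1e-6, max_iter=5000, mu=0.1):
    """Nonlinear CG with an HS coefficient and negative-gradient restarts.

    Sketch only: the restart test (beta non-negative and bounded) is a
    stand-in for the paper's condition, which the abstract does not give.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:                 # line search failed: restart
            d = -g
            alpha = line_search(f, grad, x, d)[0] or 1e-8
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        beta = (g_new @ y) / denom if abs(denom) > 1e-12 else 0.0
        # Keep beta non-negative and restricted; otherwise restart with
        # the negative gradient, as the abstract describes.
        if beta >= 0.0 and beta * abs(g_new @ d) <= mu * (g_new @ g_new):
            d = -g_new + beta * d
        else:
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: the two-dimensional Rosenbrock function converges to [1, 1].
from scipy.optimize import rosen, rosen_der
print(hs_cg_restart(rosen, rosen_der, np.array([-1.2, 1.0])))
```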
References (21)
Gonglin Yuan (Xida: Guangxi University), Zehong Meng (Zhejiang University of Finance and Economics), Yong Li
It is well known that nonlinear conjugate gradient methods are very effective for large-scale smooth optimization problems. However, their efficiency has not been widely investigated for large-scale nonsmooth problems, which are often found in practice. This paper proposes a modified Hestenes-Stiefel conjugate gradient algorithm for nonsmooth convex optimization problems. The search direction of the proposed method not only possesses the sufficient descent property but also belongs to a trust ...
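In this line of work, the "trust region" property of a search direction usually refers to the pair of bounds below (constants c1, c2 > 0; reading the truncated sentence this way is our assumption):

```latex
g_k^{T} d_k \le -c_1 \|g_k\|^{2},
\qquad
\|d_k\| \le c_2 \|g_k\|.
```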
Ahmad Alhawarat, Mustafa Mamat, Ismail Mohd (view all 4 authors)
The conjugate gradient (CG) method has been used extensively to solve large-scale unconstrained optimization problems thanks to its favorable iteration counts, memory requirements, CPU time, and convergence properties. In this paper we propose a new class of nonlinear conjugate gradient coefficients with global convergence properties proved under the exact line search. Numerical results show that the new CG method is efficient when compared with well-known formulas. Keywords: conjugate gradient method, conj...
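The exact line search referred to here takes the step size as the exact one-dimensional minimizer along the current search direction:

```latex
\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k).
```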
The conjugate gradient (CG) method is an interesting tool for solving optimization problems in many fields, such as design, economics, physics, and engineering. In this paper, we describe a new hybrid CG method related to the famous Polak-Ribiere-Polyak (PRP) formula. It offers a remedy for the PRP method, which is not globally convergent with the strong Wolfe-Powell (SWP) line search. The new formula possesses the sufficient descent condition and the global convergence properties. In addition, w...
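For reference, the Polak-Ribiere-Polyak coefficient discussed here is the classical

```latex
\beta_k^{PRP} = \frac{g_k^{T}(g_k - g_{k-1})}{\|g_{k-1}\|^{2}},
```

which may take negative values with inexact line searches; this is the standard account of why PRP is not globally convergent under SWP.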
Zhifeng Dai (CSUST: Changsha University of Science and Technology), Fenghua Wen (CSUST: Changsha University of Science and Technology)
Recently, Zhang [13] made a slight modification to the Wei-Yao-Liu nonlinear conjugate gradient method proposed by Wei et al. [10] such that the modified method (called the NPRP method in Zhang [13]) satisfies the sufficient descent condition with the larger parameter σ ∈ (0, 1/2) in the strong Wolfe line search and converges globally for nonconvex minimization with the strong Wolfe line search. In this paper, we make a slight modification to the NPRP method such that the modified NPRP method poss...
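Written out, the NPRP coefficient of Zhang [13] and the bound that yields sufficient descent for σ ∈ (0, 1/2) are (our transcription, in the usual notation):

```latex
\beta_k^{NPRP} = \frac{\|g_k\|^{2} - \frac{\|g_k\|}{\|g_{k-1}\|}\,\lvert g_k^{T} g_{k-1}\rvert}{\|g_{k-1}\|^{2}},
\qquad
0 \le \beta_k^{NPRP} \le \frac{\|g_k\|^{2}}{\|g_{k-1}\|^{2}}.
```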
Wenyu Sun, Ya-xiang Yuan
Contents (truncated): Preface; 1 Introduction: 1.1 Introduction; 1.2 Mathematics Foundations (1.2.1 Norm; 1.2.2 Inverse and Generalized Inverse of a Matrix; 1.2.3 Properties of Eigenvalues; 1.2.4 Rank-One Update; 1.2.5 Function and Differential); 1.3 Convex Sets and Convex Functions (1.3.1 Convex Sets; 1.3.2 Convex Functions; 1.3.3 Separation and Support of Convex Sets); 1.4 Optimality Conditions for Unconstrained Case; 1.5 Structure of Optimization Methods; Exercises; 2 Line Search: 2.1 Introduction; 2.2 Convergence Theory for Exact Li...
Li Zhang (CSUST: Changsha University of Science and Technology)
In this paper, we make a slight modification to the Wei-Yao-Liu nonlinear conjugate gradient method proposed by Wei et al. [Z. Wei, S. Yao, L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput. 183 (2006) 1341-1350] such that the modified method possesses better convergence properties. In fact, we prove that the modified method satisfies the sufficient descent condition with the larger parameter σ ∈ (0, 1/2) in the strong Wolfe line search and converg...
Neculai Andrei
A collection of unconstrained optimization test functions is presented. The purpose of this collection is to give the optimization community a large number of general test functions for use in testing unconstrained optimization algorithms and in comparison studies. For each function we give its algebraic expression and the standard initial point. Some of the test functions are from the CUTE collection established by Bongartz, Conn, Gould and Toint (1995); others are from More, Garbow and...
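As a concrete instance of an entry in such a collection, here is the extended Rosenbrock function with its customary standard initial point (a sketch; scaling conventions vary slightly between collections):

```python
import numpy as np

def extended_rosenbrock(x):
    """Extended Rosenbrock: sum over the consecutive pairs (x_{2i-1}, x_{2i})."""
    odd, even = x[::2], x[1::2]
    return np.sum(100.0 * (even - odd**2) ** 2 + (1.0 - odd) ** 2)

# Standard initial point (-1.2, 1, -1.2, 1, ...); the minimizer is all ones.
x0 = np.tile([-1.2, 1.0], 5)  # n = 10
```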
Zengxin Wei (Xida: Guangxi University), Shengwei Yao (Xida: Guangxi University), Liying Liu (Liaocheng University)
In this paper, a new conjugate gradient formula β_k^* is given to compute the search directions for unconstrained optimization problems. General convergence results for the proposed formula with some line searches, such as the exact line search, the Wolfe-Powell line search, and the Grippo-Lucidi line search, are discussed. Under these line searches and some assumptions, the global convergence properties of the given methods are established. The given formula satisfies β_k^* ≥ 0 and has the simi...
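The formula β_k^* here is the Wei-Yao-Liu coefficient; in the usual notation it reads (our transcription)

```latex
\beta_k^{*} = \frac{\|g_k\|^{2} - \frac{\|g_k\|}{\|g_{k-1}\|}\, g_k^{T} g_{k-1}}{\|g_{k-1}\|^{2}} \;\ge\; 0,
```

where non-negativity follows from the Cauchy-Schwarz inequality.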
Elizabeth D. Dolan (Argonne National Laboratory), Jorge J. Moré (Argonne National Laboratory)
We propose performance profiles, distribution functions for a performance metric, as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
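A compact sketch of the profile ρ_s(τ), the fraction of problems on which solver s is within a factor τ of the best solver (the function name and array layout are ours; it assumes every problem is solved by at least one solver):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profiles.

    T: (n_problems, n_solvers) array of a positive metric (e.g. CPU time),
       with np.inf marking failures.
    Returns rho of shape (len(taus), n_solvers), where rho[i, s] is the
    fraction of problems solver s solves within a factor taus[i] of the best.
    """
    best = T.min(axis=1, keepdims=True)   # best metric on each problem
    ratios = T / best                     # performance ratios r_{p,s}
    taus = np.asarray(taus).reshape(-1, 1, 1)
    return (ratios[None, :, :] <= taus).mean(axis=1)
```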
Yu-Hong Dai (CAS: Chinese Academy of Sciences), Li-Zhi Liao (Hong Kong Baptist University)
Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. This paper proposes a new conjugacy condition, which considers an inexact line search scheme but reduces to the old one if the line search is exact. Based on the new conjugacy condition, two nonlinear conjugate gradient methods are constructed. Convergence analysis for the two methods is provided. Our numerical results show that one of the methods is very efficient ...
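For reference, with s_{k-1} = x_k − x_{k-1}, y_{k-1} = g_k − g_{k-1}, and a parameter t ≥ 0, the extended conjugacy condition and the coefficient it induces are usually written as

```latex
d_k^{T} y_{k-1} = -t\, g_k^{T} s_{k-1},
\qquad
\beta_k = \frac{g_k^{T}(y_{k-1} - t\, s_{k-1})}{d_{k-1}^{T} y_{k-1}};
```

with an exact line search g_k^T s_{k-1} = 0, so the condition reduces to the classical d_k^T y_{k-1} = 0.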
Cited By (7)
Lukas Exl (University of Vienna), Johann Fischbacher (Danube University Krems), Thomas Schrefl (Danube University Krems) (view all 6 authors)
Fast computation of demagnetization curves is essential for the computational design of soft magnetic sensors or permanent magnet materials. We show that a sparse preconditioner for a nonlinear conjugate gradient energy minimizer can lead to a speed-up by a factor of 3 and 7 for computing hysteresis in soft magnetic and hard magnetic materials, respectively. As a preconditioner, an approximation of the Hessian of the Lagrangian is used, which takes only local field terms into account. Pr...
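The abstract gives the idea but not the formulas; the mechanics of one preconditioned nonlinear-CG direction update are standard and sketched below, with a sparse approximate Hessian M (e.g. built from local field terms only) playing the preconditioner's role. All names are ours, and the Fletcher-Reeves form of the coefficient is used purely for illustration.

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def preconditioned_ncg_step(g_new, g_old, z_old, d_old, M):
    """One preconditioned nonlinear-CG direction update.

    M: sparse approximation of the Hessian; z = M^{-1} g is the
    preconditioned gradient. Returns the new direction and z.
    """
    z_new = spsolve(M, g_new)                 # apply the preconditioner
    beta = (g_new @ z_new) / (g_old @ z_old)  # preconditioned FR coefficient
    return -z_new + beta * d_old, z_new
```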
Bakhtawar Baluch (UMT: Universiti Malaysia Terengganu), Zabidin Salleh (UMT: Universiti Malaysia Terengganu), Ahmad Alhawarat (Isra University)
This paper describes a modified three-term Hestenes-Stiefel (HS) method. The original HS method is the earliest conjugate gradient method. Although the HS method achieves global convergence using an exact line search, this is not guaranteed in the case of an inexact line search. In addition, the HS method does not usually satisfy the descent property. Our modified three-term conjugate gradient method possesses a sufficient descent property regardless of the type of line search and guarantees glo...
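Three-term methods of this kind extend the classical two-term update with a correction along y_{k-1}; a generic template (notation assumed, the specific coefficients are method-dependent) is

```latex
d_k = -g_k + \beta_k\, d_{k-1} + \theta_k\, y_{k-1},
```

with β_k and θ_k chosen so that the sufficient descent bound g_k^T d_k ≤ −c‖g_k‖² holds independently of the line search.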
Tai-Wen Yuan, Dongjie Bi, Jue Lyu (University of Electronic Science and Technology of China) (view all 5 authors)
This paper investigates an efficient compressed sensing (CS) approach that can be used to reconstruct radial magnetic resonance (MR) images from under-sampled measurements. In this approach, we propose a hybrid conjugate gradient (CG) method with a hybrid update parameter to optimize the CS cost function. With detailed mathematical proofs, the proposed CG method is shown to have sufficient descent and global convergence properties. In order to show the efficiency of the proposed approach, ...
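The abstract does not state the hybrid update parameter; purely for orientation, a classical hybridization pattern from the CG literature combines two coefficients as

```latex
\beta_k^{hyb} = \max\!\left\{0,\; \min\!\left\{\beta_k^{HS},\, \beta_k^{DY}\right\}\right\},
\qquad
\beta_k^{DY} = \frac{\|g_k\|^{2}}{d_{k-1}^{T} y_{k-1}},
```

which keeps the parameter non-negative and inherits the convergence safeguards of the Dai-Yuan formula.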
A new modified three-term conjugate gradient (CG) method is presented for solving large-scale optimization problems. The idea relates to the famous Polak-Ribiere-Polyak (PRP) formula. Although the numerator of PRP plays a vital role in numerical performance and avoids the jamming issue, the PRP method is not globally convergent. For the new three-term CG method, the idea is therefore to use the PRP numerator and combine it with the denominator of any well-performing CG formula. The new modification of three-t...
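The "PRP numerator with a better denominator" idea can be read off the classical formulas, which share the numerator g_k^T y_{k-1} and differ only in the denominator:

```latex
\beta_k^{PRP} = \frac{g_k^{T} y_{k-1}}{\|g_{k-1}\|^{2}},
\qquad
\beta_k^{HS} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}.
```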
Mustafa Mamat, Mohd Rivaie (view all 3 authors)
The conjugate gradient (CG) method is an important technique in unconstrained optimization due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. In addition, computational results show that our proposed method outperforms other existing CG methods.
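The two properties invoked here are, in the usual notation with 0 < δ < σ < 1 and some c > 0, the strong Wolfe-Powell conditions on the step α_k and the sufficient descent bound on the direction d_k:

```latex
f(x_k + \alpha_k d_k) \le f(x_k) + \delta\, \alpha_k\, g_k^{T} d_k,
\qquad
\lvert g(x_k + \alpha_k d_k)^{T} d_k \rvert \le \sigma\, \lvert g_k^{T} d_k \rvert,
\qquad
g_k^{T} d_k \le -c\, \|g_k\|^{2}.
```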
Johann Fischbacher (Danube University Krems), Alexander Kovacs (Danube University Krems), Akira Manabe (view all 12 authors)
Conjugate gradient methods for energy minimization in micromagnetics are compared. The comparison of analytic results with numerical simulations shows that the standard conjugate gradient method may fail to produce correct results. A method that restricts the step length in the line search is introduced in order to avoid this problem. When the step length in the line search is controlled, conjugate gradient techniques are a fast and reliable way to compute the hysteresis properties of permanent magn...
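The abstract does not give the restriction rule; one simple way to realize such a step-length control, stated only as an assumption, is to cap the accepted line-search step at a problem-dependent bound:

```latex
\alpha_k = \min\{\alpha_k^{\text{LS}},\, \alpha_{\max}\}.
```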
The conjugate gradient (CG) method is used to find the optimal solution of large-scale unconstrained optimization problems. Thanks to its simple algorithm, low memory requirements, and the speed with which it obtains a solution, this method is widely used in many fields, such as engineering, computer science, and medical science. In this paper, we modify the CG method to achieve global convergence with various line searches. In addition, it satisfies the sufficient descent condition without any line sear...