A new steepest descent method with global convergence properties

Published on Jun 2, 2016 · DOI: 10.1063/1.4952550
Zubai’ah Zainal Abidin (Estimated H-index: 2), Mohd Rivaie (Estimated H-index: 8)
Abstract
One of the earliest and best-known methods for minimizing a function is the classical steepest descent (SD) method. In this paper, a new modification of the SD method is proposed, based on a new search direction d_k. Numerical results are presented in terms of the number of iterations and CPU time. They show that the new d_k is efficient when compared to the classical SD direction.
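The abstract does not give the formula for the new d_k. As context, here is a minimal sketch of the classical SD iteration that the paper modifies, using a backtracking (Armijo) line search; the function names and the quadratic test problem are illustrative, not from the paper.

```python
# Minimal sketch of classical steepest descent (SD) with a backtracking
# (Armijo) line search. The paper's modified direction d_k is not given in
# the abstract, so this shows only the classical baseline d_k = -grad f(x_k).
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # stop once the gradient is small
            return x, k
        d = -g                             # classical SD search direction
        t = 1.0                            # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
    return x, max_iter

# Illustrative test problem: the convex quadratic f(x) = x1^2 + 10*x2^2.
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_star, iters = steepest_descent(f, grad, [5.0, 2.0])
```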
References (7)
#1 Mohd Rivaie (UiTM: Universiti Teknologi MARA), H-index: 8
#2 Mustafa Mamat (UMT: Universiti Malaysia Terengganu), H-index: 16
Last: Ismail Mohd (UMT: Universiti Malaysia Terengganu), H-index: 8
(4 authors in total)
Abstract: Nonlinear conjugate gradient (CG) methods have played an important role in solving large-scale unconstrained optimization. Their wide application in many fields is due to their low memory requirements and global convergence properties. Numerous studies and modifications have been conducted recently to improve this method. In this paper, a new class of conjugate gradient coefficients (β_k) that possess global convergence properties is presented. The global convergence result is establi...
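The excerpt does not include the new β_k formula, so the sketch below shows the generic nonlinear CG iteration it plugs into, with the classical Fletcher-Reeves coefficient as a stand-in; the restart safeguard is a common practical choice, not necessarily the paper's.

```python
# Sketch of a nonlinear conjugate gradient (CG) iteration. The referenced
# paper proposes a new coefficient beta_k; since its formula is not in the
# excerpt, the classical Fletcher-Reeves choice is used here as a stand-in.
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first step: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        if g.dot(d) >= 0:                     # safeguard: restart on non-descent
            d = -g
        t = 1.0                               # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d                 # next CG search direction
        g = g_new
    return x, max_iter
```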
#1 Edwin K. P. Chong, H-index: 46
#2 Stanislaw H. Żak, H-index: 15
In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength t_k by means of a positive parameter θ_k, in a multiplicative manner, in such a way as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly improved.
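A hedged sketch of the idea: run standard backtracking, then rescale the accepted steplength t_k by a multiplicative factor θ_k. The reference's actual rule for θ_k is not in this excerpt; choosing θ_k from a small candidate set below is purely illustrative.

```python
# Gradient descent with backtracking in which the accepted step length t_k
# is rescaled by a multiplicative parameter theta_k, as the reference
# describes. The excerpt does not state how theta_k is computed, so this
# illustration simply keeps whichever rescaled step decreases f the most.
import numpy as np

def accelerated_gradient(f, grad, x0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x, k
        t = 1.0                                # standard backtracking
        while f(x - t * g) > f(x) - 1e-4 * t * g.dot(g):
            t *= 0.5
        # Hypothetical theta_k: pick from a small candidate set (includes 1.0,
        # so the accepted step is never worse than plain backtracking).
        theta = min((0.5, 1.0, 2.0), key=lambda th: f(x - th * t * g))
        x = x - theta * t * g
    return x, max_iter
```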
#1 Elizabeth D. Dolan (Argonne National Laboratory), H-index: 3
#2 Jorge J. Moré (Argonne National Laboratory), H-index: 51
We propose performance profiles — distribution functions for a performance metric — as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
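A short sketch of how such a profile is computed: each solver's cost on a problem is divided by the best cost on that problem, and the profile of a solver at τ is the fraction of problems it solves within a factor τ of the best. Function and variable names here are ours, not from the paper.

```python
# Sketch of a Dolan-More performance profile. Given a (problems x solvers)
# matrix of a cost metric (e.g. CPU time), the profile of solver s at tau is
# the fraction of problems on which its cost is within a factor tau of the
# best solver's cost on that problem.
import numpy as np

def performance_profile(costs, taus):
    """costs: array of shape (n_problems, n_solvers); np.inf marks failure."""
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)        # best solver per problem
    ratios = costs / best                          # performance ratios r_{p,s}
    # rho_s(tau): proportion of problems with ratio <= tau
    return np.array([[np.mean(ratios[:, s] <= tau)
                      for s in range(costs.shape[1])] for tau in taus])

# Example: three problems, two solvers (columns), CPU seconds.
costs = [[1.0, 2.0], [3.0, 3.0], [np.inf, 5.0]]    # solver 1 fails on problem 3
profile = performance_profile(costs, taus=[1, 2, 4])
```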
Computing Centre, Academia Sinica, Beijing 100080, China. [Received 25 April 1989 and in revised form 16 October 1990.] In this paper we present a modified BFGS algorithm for unconstrained optimization. The BFGS algorithm updates an approximate Hessian which satisfies the most recent quasi-Newton equation. The quasi-Newton condition can be interpreted as the interpolation condition that the gradient value of the local quadratic model matches that of the objective function at the previous iterate. Our mod...
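For context, the standard BFGS update that enforces the quasi-Newton (secant) equation B_{k+1} s_k = y_k looks as follows; the paper's specific modification of this update is not reproduced here.

```python
# Standard BFGS update of the approximate Hessian B_k. It enforces the
# quasi-Newton (secant) equation B_{k+1} s_k = y_k discussed in the reference.
import numpy as np

def bfgs_update(B, s, y):
    """Return B_{k+1} from B_k, step s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k."""
    Bs = B @ s
    return (B
            - np.outer(Bs, Bs) / s.dot(Bs)     # remove old curvature along s
            + np.outer(y, y) / y.dot(s))       # insert curvature measured by y

# The updated matrix satisfies the secant equation B_{k+1} s = y:
B = np.eye(2)
s = np.array([1.0, 0.5]); y = np.array([2.0, 1.5])
B_next = bfgs_update(B, s, y)
assert np.allclose(B_next @ s, y)
```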
#1 Jonathan Barzilai (Dal: Dalhousie University), H-index: 9
#2 Jonathan M. Borwein (Dal: Dalhousie University), H-index: 85
A study of new gradient descent methods for the approximate solution of the unconstrained minimization problem, with an analysis of convergence.
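This is the paper that introduced the Barzilai-Borwein step sizes. Below is a minimal sketch of the first BB rule, t_k = s^T s / s^T y, used inside a gradient iteration; the fallback value when s^T y ≤ 0 is our own safeguard, not part of the original method.

```python
# Sketch of the Barzilai-Borwein (BB) gradient method: a steepest descent
# step whose length comes from a secant approximation of the Hessian rather
# than a line search.
import numpy as np

def barzilai_borwein(grad, x0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    t = 1e-3                                   # small initial step length
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        # BB1 step s^T s / s^T y; fall back to a small step if s^T y <= 0
        # (can happen for nonconvex f; the fallback is our own safeguard).
        t = s.dot(s) / sy if sy > 0 else 1e-3
        x, g = x_new, g_new
    return x, max_iter
```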
A simulation test methodology was developed to evaluate unconstrained nonlinear optimization computer algorithms. The test technique simulates the problems that optimization algorithms encounter in practice by employing a repertoire of problems representing various topographies (descending curved valleys, saddle points, ridges, etc.), dimensions, degrees of nonlinearity (e.g., linear to exponential), and minima, addressing them from various randomly generated initial approximations to the solution and rec...
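A rough sketch of this kind of test harness, assuming a solver with the (f, grad, x0) -> (x, iterations) convention used in the sketches above; all names, the sampling range, and the example suite are illustrative.

```python
# Illustrative harness: run a solver on a suite of test functions from many
# randomly generated initial points and record the average iteration count.
import numpy as np

def benchmark(solver, problems, n_starts=10, seed=0):
    rng = np.random.default_rng(seed)
    results = {}
    for name, (f, grad, dim) in problems.items():
        iters = [solver(f, grad, rng.uniform(-10, 10, dim))[1]
                 for _ in range(n_starts)]     # random initial approximations
        results[name] = float(np.mean(iters))
    return results

# Example suite with a single convex quadratic; a real test set would mix
# valleys, saddle points, ridges, and varying dimensions as described above.
problems = {
    "quadratic": (lambda x: x[0]**2 + 10 * x[1]**2,
                  lambda x: np.array([2.0 * x[0], 20.0 * x[1]]),
                  2),
}
# results = benchmark(steepest_descent, problems)  # any solver of this form
```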
Cited By (4)
#1 Siti Farhana Husin, H-index: 1
Last: Mohd Rivaie, H-index: 8
(4 authors in total)
This study employs exact line search iterative algorithms for solving large-scale unconstrained optimization problems, in which the search direction is a three-term modification of an iterative method with two different scaled parameters. The objective of this research is to identify the effectiveness of the new directions both theoretically and numerically. The sufficient descent property and global convergence analysis of the suggested methods are established. For numerical experiment purposes, the methods a...
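The excerpt names the structure (a three-term direction with two scaled parameters) but not the formulas. Below is a heavily hedged sketch of that structure, with placeholder scalings a_k and b_k of our own:

```python
# Generic three-term search direction d_k = -g_k + a_k*d_prev + b_k*s_prev.
# The two scaling parameters below are placeholders, not the paper's rules.
import numpy as np

def three_term_direction(g, d_prev, s_prev, eps=1e-12):
    a = g.dot(g) / max(d_prev.dot(d_prev), eps)        # placeholder a_k
    b = -g.dot(s_prev) / max(s_prev.dot(s_prev), eps)  # placeholder b_k
    return -g + a * d_prev + b * s_prev
```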
#1 Siti Farhana Husin, H-index: 1
Last: Mohd Rivaie, H-index: 8
(4 authors in total)
#2 Nurul ‘Aini, H-index: 1
Last: Mohd Rivaie, H-index: 8
(5 authors in total)
In this paper, we focus on the steepest descent and quasi-Newton methods for solving unconstrained optimization problems. We develop a new search direction for a hybrid BFGS-ZMRI method with global convergence properties. Based on the numerical results, our method shows significant improvement in the number of iterations and CPU time.
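The ZMRI component is not defined in this excerpt. As a rough illustration of a hybrid direction of this kind, the sketch below takes a quasi-Newton (BFGS) direction when it is sufficiently descending and otherwise falls back to steepest descent; the test and threshold are our own assumptions.

```python
# Hypothetical hybrid direction: prefer the BFGS direction -H g (H is the
# current inverse-Hessian approximation); fall back to -g when the quasi-
# Newton direction is not a sufficiently good descent direction.
import numpy as np

def hybrid_direction(H, g, c=1e-6):
    d = -H @ g
    if g.dot(d) <= -c * g.dot(g):    # sufficient descent test (our choice)
        return d
    return -g                        # steepest descent fallback
```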
#3 N Shapiee (UniSZA: Universiti Sultan Zainal Abidin), H-index: 1
#6 Mohd Rivaie (UiTM: Universiti Teknologi MARA), H-index: 8
Last: Mustafa Mamat (UniSZA: Universiti Sultan Zainal Abidin), H-index: 16
(7 authors in total)
The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new CG algorithms cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient ...
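The sufficient descent property mentioned here is the standard requirement g_k^T d_k ≤ -c‖g_k‖² for some constant c > 0; a one-line check of it:

```python
# Check the sufficient descent condition g^T d <= -c * ||g||^2 with c > 0.
import numpy as np

def has_sufficient_descent(g, d, c=1e-4):
    return g.dot(d) <= -c * g.dot(g)

# Steepest descent, d = -g, satisfies the condition with c = 1.
g = np.array([3.0, -4.0])
assert has_sufficient_descent(g, -g, c=1.0)
```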