A new steepest descent method

Published on Jun 19, 2014 · DOI: 10.1063/1.4882499
Zubai’ah Zainal Abidin, Mustafa Mamat, + 1 author, Ismail Mohd

Abstract
The classical steepest descent (SD) method is known as one of the earliest and best methods for minimizing a function. Even though its convergence rate is quite slow, its simplicity has made it one of the easiest methods to use and apply, especially in the form of computer code. In this paper, a new modification of the SD method is proposed, using a new search direction (dk) in the form of two parameters. Numerical results show that this new SD method has a far superior convergence rate and is more efficient than the classical SD method.
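The paper's two-parameter search direction is not reproduced in this abstract, so the baseline it modifies can only be sketched. Below is a minimal illustration of classical SD on a convex quadratic, where the exact line-search step has a closed form; the function name and test matrix are hypothetical, not from the paper:

```python
import numpy as np

def steepest_descent_quadratic(A, b, x0, tol=1e-10, max_iter=10000):
    """Classical SD for f(x) = 0.5 x^T A x - b^T x, A symmetric positive definite.

    Search direction d_k = -grad f(x_k) = b - A x_k; on a quadratic the
    exact line-search step is alpha_k = (d_k^T d_k) / (d_k^T A d_k).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        d = b - A @ x                    # negative gradient
        if np.linalg.norm(d) < tol:
            break
        alpha = (d @ d) / (d @ (A @ d))  # exact line-search step length
        x = x + alpha * d
    return x, k

# Small test problem (hypothetical data, not from the paper).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star, iters = steepest_descent_quadratic(A, b, np.zeros(2))
```

At the minimizer the gradient vanishes, so `x_star` satisfies `A x = b`; the iteration count `iters` is what the paper's proposed direction aims to reduce.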
Cited By
Nurul ‘Aini, …, Mustafa Mamat (5 authors)
In this paper, we focus on the steepest descent and quasi-Newton methods for solving unconstrained optimization problems. Therefore, we develop a new search direction for a hybrid BFGS-ZMRI method with global convergence properties. Based on the numerical results, our method shows significant improvement in the number of iterations and CPU time.
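The hybrid BFGS-ZMRI direction itself is not given in this abstract; for orientation, the plain BFGS quasi-Newton iteration it builds on can be sketched as follows (a minimal sketch with a simple Armijo backtracking line search; the function name and test data are hypothetical):

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    """Plain BFGS quasi-Newton method (not the cited hybrid BFGS-ZMRI,
    whose search direction is not reproduced here)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                        # inverse-Hessian approximation
    g = grad(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                       # quasi-Newton search direction
        alpha = 1.0                      # backtracking (Armijo) line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if y @ s > 1e-12:                # curvature condition for the update
            rho = 1.0 / (y @ s)
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x, k

# Convex quadratic test problem (hypothetical data).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_min, iters = bfgs(lambda x: 0.5 * x @ A @ x - b @ x,
                    lambda x: A @ x - b, np.zeros(2))
```

A hybrid method in this spirit would switch between, or combine, this quasi-Newton direction and a steepest-descent-type direction; the exact combination rule is the cited paper's contribution.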
Norrlaili Shapiee, Mohd Rivaie, Mustafa Mamat (3 authors)
In this paper, we propose a new classical conjugate gradient (CG) method. Global convergence is established using an exact line search. Numerical results are presented based on the number of iterations and CPU time. These results show that our method performs better than the classical CG method on a set of standard test problems.
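The cited paper's CG coefficient is not stated in this abstract; as a reference point, the classical (linear) conjugate gradient iteration, where the exact line-search step has a closed form on a quadratic, can be sketched as follows (the function name and test matrix are hypothetical):

```python
import numpy as np

def cg_quadratic(A, b, x0, tol=1e-10, max_iter=100):
    """Classical conjugate gradient for f(x) = 0.5 x^T A x - b^T x.

    The step length alpha_k is the exact line-search minimizer along d_k;
    beta_k is the Fletcher-Reeves coefficient.
    """
    x = np.asarray(x0, dtype=float)
    r = b - A @ x                         # residual = negative gradient
    d = r.copy()
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact line-search step
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves beta
        d = r_new + beta * d              # new conjugate direction
        r = r_new
    return x, k

# Small test problem (hypothetical data, not from the cited paper).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_cg, iters = cg_quadratic(A, b, np.zeros(2))
```

In exact arithmetic this terminates in at most n steps on an n-dimensional quadratic, which is why iteration counts and CPU time are the natural comparison metrics in this literature.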
Nurul Hajar, Mustafa Mamat, …, Ibrahim Jusoh (4 authors)
Nowadays, conjugate gradient (CG) methods are impressive for solving nonlinear unconstrained optimization problems. In this paper, a new CG method is proposed and analyzed. This new CG method satisfies the descent condition, and its global convergence is established using an exact line search. Numerical results show that this new CG method substantially outperforms previous CG methods. This new CG method is considered robust and efficient, and provides faster and more stable convergence.