# A new sufficient descent conjugate gradient method with exact line search

Published on Dec 1, 2019

· DOI: 10.1063/1.5136479


References (12)

Abstract: Devising ways of handling optimization problems is an important yet challenging task. The aim is always for methods that can effectively and quickly discover the global optimum of rather complicated mathematical functions that model real-world settings. Typically, the global optima of these functions are difficult to discover because they may (1) lack continuity and differentiability, (2) have multiple local optima, and (3) have complex expressions. In this paper, we address ...

Three-term conjugate gradient methods for solving large-scale optimization problems are favored by many researchers because of their nice descent and convergence properties. In this paper, we extend some new conjugate gradient methods and construct some three-term conjugate gradient methods. A remarkable property of the proposed methods is that the search direction always satisfies the sufficient descent condition without any line search. Under the standard Wolfe line search, the global converg...
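The line-search-independent sufficient descent property can be illustrated with a common three-term construction of Hestenes-Stiefel type (a sketch; the specific coefficients below are illustrative and not necessarily those of the paper being abstracted):

```python
import numpy as np

rng = np.random.default_rng(0)
g_new = rng.normal(size=5)   # gradient at the new iterate
g_old = rng.normal(size=5)   # gradient at the previous iterate
d_old = rng.normal(size=5)   # previous search direction
y = g_new - g_old            # gradient difference

denom = d_old @ y
beta = (g_new @ y) / denom       # Hestenes-Stiefel coefficient
theta = (g_new @ d_old) / denom  # third-term coefficient

# Three-term direction: the extra -theta * y term cancels the
# beta * (g_new @ d_old) contribution in d_new @ g_new.
d_new = -g_new + beta * d_old - theta * y

# Sufficient descent holds identically: d^T g = -||g||^2,
# regardless of how the step along d_old was chosen.
print(np.isclose(d_new @ g_new, -(g_new @ g_new)))
```

Note that the identity `d_new @ g_new == -||g_new||^2` holds for arbitrary `d_old` and step lengths, which is what "without any line search" means here.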

Comparing, or benchmarking, optimization algorithms is a complicated task that involves many subtle considerations to yield a fair and unbiased evaluation. In this paper, we systematically review the benchmarking process of optimization algorithms, and discuss the challenges of fair comparison. We provide suggestions for each step of the comparison process and highlight the pitfalls to avoid when evaluating the performance of optimization algorithms. We also discuss various methods of reporti...

Test functions are important to validate and compare the performance of optimization algorithms. There have been many test or benchmark functions reported in the literature; however, there is no standard list or set of benchmark functions. Ideally, test functions should have diverse properties so that they can be truly useful to test new algorithms in an unbiased way. For this purpose, we have reviewed and compiled a rich set of 175 benchmark functions for unconstrained optimization problems with div...

1 Optimization Problem Formulation
1.1 Optimization Problem Formulation
1.2 The Standard Form of an Optimization Problem
1.3 Solution of Optimization Problems
1.4 Time Value of Money
1.5 Concluding Remarks
1.6 Problems
2 Graphical Optimization
2.1 Procedure for Graphical Optimization
2.2 GraphicalSolution function
2.3 Graphical Optimization Examples
2.4 Problems
3 Mathematical Preliminaries
3.1 Vectors and Matrices
3.2 Approximation Using the Taylor Series
3.3 Solut...

In this paper, we propose a three-term conjugate gradient method which satisfies the sufficient descent condition, that is, [image omitted]. This property is independent of the line search used. When an exact line search is used, this method reduces to the standard Hestenes-Stiefel conjugate gradient method. We also introduce two variants of the proposed method which still preserve the sufficient descent property, and prove that these two methods converge globally with the standard Wolfe line search ...
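The reduction to Hestenes-Stiefel under an exact line search follows because exact minimization along the previous direction makes the new gradient orthogonal to it, so the third-term coefficient vanishes. A minimal numerical sketch (orthogonality enforced by projection; assumed coefficients, not the paper's own code):

```python
import numpy as np

rng = np.random.default_rng(1)
d_old = rng.normal(size=4)
g_old = rng.normal(size=4)

# An exact line search makes the new gradient orthogonal to d_old;
# here we enforce g_new @ d_old == 0 by projecting out the d_old component.
v = rng.normal(size=4)
g_new = v - (v @ d_old) / (d_old @ d_old) * d_old
y = g_new - g_old

beta_hs = (g_new @ y) / (d_old @ y)    # Hestenes-Stiefel coefficient
theta = (g_new @ d_old) / (d_old @ y)  # vanishes under exact line search

d_three_term = -g_new + beta_hs * d_old - theta * y
d_hs = -g_new + beta_hs * d_old        # standard HS direction

print(np.allclose(d_three_term, d_hs))
```

Because `theta` is (numerically) zero, the three-term direction coincides with the HS direction, matching the abstract's claim.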

We propose performance profiles (distribution functions for a performance metric) as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
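A performance profile can be computed from a table of per-problem solver costs using the usual ratio-based construction; the sketch below uses hypothetical timing data (the matrix `T` is invented for illustration):

```python
import numpy as np

# Hypothetical timing data: rows = problems, columns = solvers.
T = np.array([[1.0, 2.0],
              [4.0, 1.0],
              [9.0, 3.0]])

# Performance ratio: each problem's cost divided by the best cost on it.
r = T / T.min(axis=1, keepdims=True)

def profile(ratios, tau):
    """Fraction of problems each solver solves within a factor tau of the best."""
    return (ratios <= tau).mean(axis=0)

print(profile(r, 1.0))  # fraction of problems each solver is fastest on
print(profile(r, 3.0))  # fraction solved within 3x the best time
```

Plotting `profile(r, tau)` against `tau` for each solver gives the cumulative distribution curves used to compare software.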

The conjugate gradient method is particularly useful for minimizing functions of very many variables because it does not require the storage of any matrices. However, the rate of convergence of the algorithm is only linear unless the iterative procedure is "restarted" occasionally. At present it is usual to restart every n or (n + 1) iterations, where n is the number of variables, but it is known that the frequency of restarts should depend on the objective function. Therefore the main purpose of t...
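A restarted nonlinear conjugate gradient iteration can be sketched as follows; the Fletcher-Reeves coefficient, the fixed restart period, and the closed-form exact line search on a quadratic are assumptions made for illustration, not the restart criterion this reference proposes:

```python
import numpy as np

def cg_restart(A, x, n_restart, iters=100):
    """Nonlinear CG (Fletcher-Reeves beta) on the quadratic f(x) = 0.5 x^T A x,
    restarted to the steepest descent direction every n_restart iterations.
    On a quadratic, the exact line search step has a closed form."""
    g = A @ x                    # gradient of the quadratic
    d = -g
    for k in range(1, iters + 1):
        if np.linalg.norm(g) < 1e-10:
            break
        alpha = -(g @ d) / (d @ A @ d)  # exact line search along d
        x = x + alpha * d
        g_new = A @ x
        if k % n_restart == 0:
            d = -g_new                  # restart: discard accumulated history
        else:
            beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
            d = -g_new + beta * d
        g = g_new
    return x

# On a 2-variable quadratic, exact-line-search CG converges in at most 2 steps.
A = np.diag([1.0, 10.0])
x_star = cg_restart(A, np.array([5.0, 5.0]), n_restart=2)
print(np.linalg.norm(A @ x_star))  # gradient norm at the returned point
```

A fixed period `n_restart` mirrors the "every n or (n + 1) iterations" convention; the reference's point is that an adaptive criterion tied to the objective function can do better.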

Cited By (0)