An acceleration of gradient descent algorithm with backtracking for unconstrained optimization
Abstract
In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength t_k by means of a positive parameter θ_k, in a multiplicative manner, so as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly...
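To make the idea concrete, the following is a minimal sketch of gradient descent with Armijo backtracking, extended with a multiplicative steplength factor theta. The acceleration rule shown here (trying a doubled step and keeping it if it decreases the function more) is a hypothetical stand-in for illustration only; the paper's actual choice of θ_k is not reproduced here.

```python
import numpy as np

def accelerated_gd(f, grad, x0, beta=0.5, c=1e-4, tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking and a multiplicative
    acceleration factor theta (illustrative sketch, not the paper's rule)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Backtracking: shrink t until the Armijo sufficient-decrease
        # condition f(x - t*g) <= f(x) - c*t*||g||^2 holds.
        t = 1.0
        fx = f(x)
        while f(x - t * g) > fx - c * t * g.dot(g):
            t *= beta
        # Hypothetical multiplicative acceleration: take theta = 2 if the
        # doubled step gives a lower function value, otherwise theta = 1.
        theta = 2.0 if f(x - 2.0 * t * g) < f(x - t * g) else 1.0
        x = x - theta * t * g
    return x
```

On a simple quadratic such as f(x) = ||x - 1||^2 this sketch converges to the minimizer in a handful of iterations; the paper's contribution is a principled choice of θ_k with a convergence guarantee.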
Paper Details
Title
An acceleration of gradient descent algorithm with backtracking for unconstrained optimization
Published Date
May 1, 2006
Journal
Volume
42
Issue
1
Pages
63 - 73