A New Method with Sufficient Descent Property for Unconstrained Optimization

Published on Feb 13, 2014 in Abstract and Applied Analysis · DOI: 10.1155/2014/940120
Weiyi Qian (Estimated H-index: 1), Haijuan Cui (Estimated H-index: 1)
Abstract
Recently, the sufficient descent property has played an important role in the global convergence analysis of many iterative methods. In this paper, we propose a new iterative method for solving unconstrained optimization problems. This method provides a sufficient descent direction for the objective function. Moreover, the global convergence of the proposed method is established under appropriate conditions. We also report numerical results and compare the performance of the proposed method with some existing methods. The numerical results indicate that the presented method is efficient.
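The sufficient descent property referred to here is commonly stated as g_k^T d_k <= -c ||g_k||^2 for some constant c > 0, independent of the line search. Below is a minimal Python sketch of that condition; the function name and the value of c are illustrative and not taken from the paper.

```python
import numpy as np

def is_sufficient_descent(g, d, c=1e-4):
    """Check the sufficient descent condition g_k^T d_k <= -c * ||g_k||^2.

    g : gradient at the current iterate
    d : candidate search direction
    c : positive constant (1e-4 is an arbitrary illustrative value)
    """
    return float(np.dot(g, d)) <= -c * float(np.dot(g, g))

# Example: the steepest descent direction d = -g satisfies the condition
# for any c <= 1.
g = np.array([3.0, -4.0])
print(is_sufficient_descent(g, -g, c=1.0))   # True
```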
References (17)
#1 Zhifeng Dai (CSUST: Changsha University of Science and Technology) H-Index: 12
#2 Fenghua Wen (CSUST: Changsha University of Science and Technology) H-Index: 28
Recently, Zhang [13] made a slight modification to the Wei–Yao–Liu nonlinear conjugate gradient method proposed by Wei et al. [10] such that the modified method (called the NPRP method in Zhang [13]) satisfies the sufficient descent condition with a larger parameter σ ∈ (0, 1/2) in the strong Wolfe line search and converges globally for nonconvex minimization with the strong Wolfe line search. In this paper, we make a slight modification to the NPRP method such that the modified NPRP method poss...
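For reference, the strong Wolfe line search mentioned above accepts a step length α that satisfies a sufficient decrease condition together with a strong curvature condition governed by the parameter σ. A small illustrative check is sketched below; the parameter values and function names are assumptions, not taken from the cited paper.

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.4):
    """Check the strong Wolfe conditions for a step length alpha:

    sufficient decrease:  f(x + a d) <= f(x) + delta * a * g^T d
    strong curvature:     |grad(x + a d)^T d| <= sigma * |g^T d|

    sigma plays the role of the sigma in (0, 1/2) discussed above;
    delta and sigma values here are illustrative only.
    """
    g = grad(x)
    gTd = float(np.dot(g, d))
    x_new = x + alpha * d
    decrease = f(x_new) <= f(x) + delta * alpha * gTd
    curvature = abs(float(np.dot(grad(x_new), d))) <= sigma * abs(gTd)
    return decrease and curvature

# Example: f(x) = 0.5 ||x||^2, full step along -grad from x = (1, 1).
f = lambda x: 0.5 * float(np.dot(x, x))
grad = lambda x: x
x = np.array([1.0, 1.0])
print(satisfies_strong_wolfe(f, grad, x, -grad(x), alpha=1.0))   # True
```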
Source
In this paper, a new spectral PRP conjugate gradient algorithm is developed for solving nonconvex unconstrained optimization problems. The search direction in this algorithm is proved to be a sufficient descent direction of the objective function, independent of the line search. To rule out possibly unacceptably short steps in the Armijo line search, a modified Armijo line search strategy is presented. The obtained step length is improved by employing the properties of the approximate Wolfe conditions...
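The classical Armijo backtracking rule that such modified strategies build on can be sketched as follows. This is only the standard rule; the paper's modification for avoiding short steps is not reproduced, and the constants are illustrative.

```python
import numpy as np

def armijo_backtracking(f, x, d, g, alpha0=1.0, rho=0.5, delta=1e-4, max_iter=50):
    """Standard Armijo backtracking: shrink alpha until
    f(x + alpha d) <= f(x) + delta * alpha * g^T d.
    """
    fx = f(x)
    gTd = float(np.dot(g, d))
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + delta * alpha * gTd:
            return alpha
        alpha *= rho          # backtrack
    return alpha

# Example on f(x) = ||x||^2 with the steepest descent direction.
f = lambda x: float(np.dot(x, x))
x = np.array([2.0, -1.0])
g = 2 * x
print(armijo_backtracking(f, x, -g, g))   # 0.5 for these data
```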
Source
#1 Dong-Hui Li (SCNU: South China Normal University) H-Index: 17
#2 Bo-Shi Tian (PolyU: Hong Kong Polytechnic University) H-Index: 1
It is well known that the PRP conjugate gradient method with exact line search is globally and linearly convergent. If a restart strategy is used, the method can achieve n-step superlinear/quadratic convergence. Recently, Zhang et al. [L. Zhang, W. Zhou, D.H. Li, A descent modified Polak–Ribiere–Polyak conjugate gradient method and its global convergence, IMA J. Numer. Anal. 26 (2006) 629–640] developed a modified PRP (MPRP) method that is globally convergent if a...
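The MPRP direction of Zhang, Zhou, and Li cited above is commonly written as a three-term formula that yields g_k^T d_k = -||g_k||^2 by construction. A short sketch verifying that identity, assuming the standard form of the formula:

```python
import numpy as np

def mprp_direction(g_k, g_prev, d_prev):
    """Three-term MPRP direction (standard form as usually cited):

        d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1},
        beta_k  = g_k^T y_{k-1} / ||g_{k-1}||^2,
        theta_k = g_k^T d_{k-1} / ||g_{k-1}||^2,
        y_{k-1} = g_k - g_{k-1}.

    By construction g_k^T d_k = -||g_k||^2, so sufficient descent holds
    independently of the line search.
    """
    y = g_k - g_prev
    denom = float(np.dot(g_prev, g_prev))
    beta = float(np.dot(g_k, y)) / denom
    theta = float(np.dot(g_k, d_prev)) / denom
    return -g_k + beta * d_prev - theta * y

# Sanity check of g_k^T d_k = -||g_k||^2 on random data.
rng = np.random.default_rng(0)
g_prev, g_k, d_prev = (rng.normal(size=3) for _ in range(3))
d_k = mprp_direction(g_k, g_prev, d_prev)
print(np.isclose(np.dot(g_k, d_k), -np.dot(g_k, g_k)))   # True
```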
Source
#1 Xiao-Min An (Hunan University) H-Index: 1
#2 Dong-Hui Li (Hunan University) H-Index: 17
Last. Yunhai Xiao (Henan University) H-Index: 14
view all 3 authors...
Descent property is very important for an iterative method to be globally convergent. In this paper, we propose a way to construct sufficient descent directions for unconstrained optimization. We then apply the technique to derive a PSB (Powell-Symmetric-Broyden) based method. The PSB based method locally reduces to the standard PSB method with unit steplength. Under appropriate conditions, we show that the PSB based method with Armijo line search or Wolfe line search is globally and superlinear...
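For context, the classical PSB (Powell-Symmetric-Broyden) update that this method builds on is sketched below; the sufficient descent construction of the cited paper itself is not reproduced here.

```python
import numpy as np

def psb_update(B, s, y):
    """Standard Powell-Symmetric-Broyden (PSB) update of the Hessian
    approximation B, given the step s = x_{k+1} - x_k and gradient
    difference y = g_{k+1} - g_k:

        B+ = B + ((y - B s) s^T + s (y - B s)^T) / (s^T s)
               - (s^T (y - B s)) (s s^T) / (s^T s)^2
    """
    r = y - B @ s
    sTs = float(np.dot(s, s))
    return (B
            + (np.outer(r, s) + np.outer(s, r)) / sTs
            - (float(np.dot(s, r)) / sTs**2) * np.outer(s, s))

# The update satisfies the secant equation B+ s = y and keeps B symmetric.
B = np.eye(3)
s = np.array([1.0, 0.5, -2.0])
y = np.array([0.3, 1.0, 0.7])
B_new = psb_update(B, s, y)
print(np.allclose(B_new @ s, y), np.allclose(B_new, B_new.T))   # True True
```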
Source
We develop a sufficient descent method for solving large-scale unconstrained optimization problems. At each iteration, the search direction is a linear combination of the gradient at the current and the previous steps. An attractive property of this method is that the generated directions are always descent. Under some appropriate conditions, we show that the proposed method converges globally. Numerical experiments on some unconstrained minimization problems from CUTEr library are reported, whi...
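The abstract does not give the exact coefficients, so the sketch below only illustrates one simple way to combine the current and previous gradients while forcing sufficient descent; it is not the cited paper's actual rule.

```python
import numpy as np

def two_gradient_direction(g_k, g_prev, beta, c=0.5):
    """Illustrative direction d_k = -g_k + beta_k * g_{k-1} in which
    beta_k is clipped so that the sufficient descent condition
    g_k^T d_k <= -c * ||g_k||^2 always holds (0 < c < 1)."""
    gk_norm2 = float(np.dot(g_k, g_k))
    cross = float(np.dot(g_k, g_prev))
    if cross != 0.0:
        # Largest |beta| that preserves the descent inequality.
        bound = (1.0 - c) * gk_norm2 / abs(cross)
        beta = float(np.clip(beta, -bound, bound))
    return -g_k + beta * g_prev

# The returned direction satisfies the descent inequality by construction.
g_k = np.array([1.0, 2.0])
g_prev = np.array([0.5, -1.0])
d = two_gradient_direction(g_k, g_prev, beta=3.0)
print(np.dot(g_k, d) <= -0.5 * np.dot(g_k, g_k))   # True
```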
Source
#1 Zhifeng Dai (CSUST: Changsha University of Science and Technology) H-Index: 1
In this article, we propose a new conjugate gradient type formula for solving unconstrained optimization problems. Its form is similar to the original PRP formula and it inherits all the nice properties of the PRP method. By utilizing the technique of the three-term PRP method in Zhang [20] and the modified PRP method in Yu [17], we propose two modified methods based on the new formula. Both modified methods generate sufficient descent directions independent of the line search used. Unde...
Source
#1 Kenji Ueda (Kyoto University) H-Index: 4
#2 Nobuo Yamashita (Kyoto University) H-Index: 22
The regularized Newton method (RNM) is one of the efficient solution methods for unconstrained convex optimization. It is well known that the RNM has good convergence properties compared with the steepest descent method and the pure Newton's method. For example, Li, Fukushima, Qi and Yamashita showed that the RNM has a quadratic rate of convergence under the local error bound condition. Recently, Polyak showed that the global complexity bound of the RNM, which is the first iteration k such ...
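A regularized Newton direction is obtained by solving (∇²f(x_k) + μ_k I) d = -∇f(x_k) with a regularization parameter μ_k > 0, often tied to the gradient norm. A minimal sketch, with the choice of μ being illustrative rather than the cited paper's rule:

```python
import numpy as np

def regularized_newton_step(hess, grad, mu):
    """One regularized Newton direction: solve (H + mu*I) d = -g.

    mu > 0 keeps the linear system positive definite on convex problems;
    the specific update rule for mu in the cited work is not reproduced.
    """
    H = np.asarray(hess, dtype=float)
    g = np.asarray(grad, dtype=float)
    return np.linalg.solve(H + mu * np.eye(H.shape[0]), -g)

# Example on a convex quadratic with a nearly singular Hessian.
H = np.array([[2.0, 0.0], [0.0, 1e-6]])
g = np.array([1.0, 1.0])
print(regularized_newton_step(H, g, mu=np.linalg.norm(g)))
```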
Source
#1 Yunhai Xiao (Henan University) H-Index: 14
#2 Zengxin Wei (Xida: Guangxi University) H-Index: 21
Last. Zhiguo Wang (Henan University) H-Index: 3
view all 3 authors...
In this paper, a new numerical method for solving large-scale unconstrained optimization problems is presented. It is derived from a modified BFGS-type update formula by Wei, Li, and Qi. It is observed that the update formula can be extended to the framework of limited memory scheme with hardly more storage or arithmetic operations. Under some suitable conditions, the global convergence property is established. The implementations of the method on a set of CUTE problems indicate that this extens...
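For context, the standard limited-memory (L-BFGS) two-loop recursion that such schemes are built on is sketched below; the modified BFGS-type update of Wei, Li, and Qi used in the cited paper is not reproduced.

```python
import numpy as np

def lbfgs_two_loop(g, s_list, y_list):
    """Classical L-BFGS two-loop recursion: returns the search direction
    d = -H_k g built from the stored curvature pairs (s_i, y_i)."""
    q = g.copy()
    alphas = []
    rhos = [1.0 / float(np.dot(y, s)) for s, y in zip(s_list, y_list)]
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * float(np.dot(s, q))
        alphas.append(a)
        q -= a * y
    # Initial Hessian scaling gamma = s^T y / y^T y from the latest pair.
    s, y = s_list[-1], y_list[-1]
    r = (float(np.dot(s, y)) / float(np.dot(y, y))) * q
    # Second loop: oldest pair to newest.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * float(np.dot(y, r))
        r += (a - b) * s
    return -r

# Tiny usage example with two stored curvature pairs.
g = np.array([1.0, -2.0, 0.5])
s_list = [np.array([0.1, 0.0, 0.2]), np.array([0.0, 0.3, 0.1])]
y_list = [np.array([0.2, 0.1, 0.3]), np.array([0.1, 0.4, 0.2])]
print(lbfgs_two_loop(g, s_list, y_list))
```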
Source
#1 Neculai Andrei H-Index: 21
A collection of unconstrained optimization test functions is presented. The purpose of this collection is to give the optimization community a large number of general test functions to be used in testing unconstrained optimization algorithms and in comparison studies. For each function we give its algebraic expression and the standard initial point. Some of the test functions are from the CUTE collection established by Bongartz, Conn, Gould and Toint (1995); others are from More, Garbow and...
In this paper, by the use of the projection of the PRP (Polak–Ribiere–Polyak) conjugate gradient direction, we develop a PRP-based descent method for solving unconstrained optimization problems. The method provides a sufficient descent direction for the objective function. Moreover, if exact line search is used, the method reduces to the standard PRP method. Under suitable conditions, we show that the method with some backtracking line search or the generalized Wolfe-type line search is globally con...
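The standard PRP direction underlying this method is d_k = -g_k + beta_k^PRP d_{k-1}; the projection step that enforces sufficient descent in the cited paper is not reproduced in the sketch below.

```python
import numpy as np

def prp_direction(g_k, g_prev, d_prev):
    """Standard PRP (Polak-Ribiere-Polyak) conjugate gradient direction:

        beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
        d_k    = -g_k + beta_k * d_{k-1}
    """
    beta = float(np.dot(g_k, g_k - g_prev)) / float(np.dot(g_prev, g_prev))
    return -g_k + beta * d_prev

# Note: with exact line search g_k^T d_{k-1} = 0, so d_k is a descent
# direction; otherwise descent must be enforced separately, which is
# what projection-type modifications are for.
g_prev = np.array([1.0, 0.0])
g_k = np.array([0.2, 0.9])
print(prp_direction(g_k, g_prev, -g_prev))
```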
Source
Cited By (2)
#1 Siti Farhana Husin H-Index: 1
Last. Mohd Rivaie H-Index: 8
view all 4 authors...
This study employs exact line search iterative algorithms for solving large-scale unconstrained optimization problems, in which the direction is a three-term modification of an iterative method with two different scaled parameters. The objective of this research is to assess the effectiveness of the new directions both theoretically and numerically. The sufficient descent property and global convergence analysis of the suggested methods are established. For numerical experiment purposes, the methods a...
Source
#1 Yuan Wang (NU: Northeastern University) H-Index: 2
#2 Xiaochuan Luo (NU: Northeastern University) H-Index: 1
Last. Qiqi Xie (NU: Northeastern University) H-Index: 2
view all 4 authors...
Our work is devoted to an inverse problem for three-dimensional parabolic partial differential equations. When the surface temperature data are given, the problem of reconstructing the heat flux and the source term is investigated. There are two main contributions of this paper. First, an adjoint problem approach is used to analyze the Fréchet gradient of the cost functional. Second, an improved conjugate gradient method is proposed to solve this problem. Based on Lipschitz continuity of th...
Source