Quasi-Newton type method via weak secant equations for unconstrained optimization


Saved in:
Bibliographic Details
Main Author: Lim, Keat Hee
Format: Thesis
Language: English
Published: 2021
Subjects:
Online Access: http://psasir.upm.edu.my/id/eprint/112655/1/FS%202021%2069%20-%20IR.pdf
id my-upm-ir.112655
record_format uketd_dc
spelling my-upm-ir.1126552024-09-30T03:25:57Z Quasi-Newton type method via weak secant equations for unconstrained optimization 2021-02 Lim, Keat Hee In this thesis, variants of quasi-Newton methods are developed for solving unconstrained optimization problems. These quasi-Newton type methods differ from the standard quasi-Newton methods in the calculation and storage of the approximate Hessian matrix at every iteration. Using the concept of the least-change updating strategy, two updating formulas are derived by means of a variational problem, via the weak secant equation and some other non-secant equations. The convergence analysis for these methods is presented under an inexact line search with the Armijo condition. Motivated by the idea of a memoryless scheme, the proposed formulas are further modified so that only vector computations and storage are required at every iteration. In other words, these memoryless updating formulas can approximate the quasi-Newton direction in a matrix-free setting. The Armijo condition is implemented in the algorithms to ensure the monotone property at each iteration. The convergence analysis is presented for these memoryless methods under some standard assumptions. Numerical experiments are carried out on standard test problems and show that the proposed methods are superior to some existing conjugate gradient methods in terms of the iterations and function evaluations required to reach the optimal solutions. Possible variants of matrix-free quasi-Newton methods are further explored using the weak secant equation. A diagonal updating formula is derived by minimizing the magnitude of the updating matrix under the Frobenius norm. This formula is then generalized by using a weighted Frobenius norm in the derivation, which gives a weighted diagonal updating formula in which the previous diagonal matrix is chosen as the weighting matrix.
The convergence analysis is established for these formulas under two line search strategies, namely monotone and non-monotone line search. Numerical experiments are conducted to test their effectiveness on a standard test set. The results indicate that the diagonal updating formula is superior to its weighted version and to some conjugate gradient methods. The formula works best when a monotone line search is implemented, often requiring fewer iterations and function evaluations to obtain the solutions. Overall, the numerical results show that the proposed methods are superior in terms of the number of iterations and function evaluations. Furthermore, the diagonal updating formulas with non-monotone line search are more efficient than some classical conjugate gradient methods in all three performance measures, including CPU time. Newton-Raphson method Constrained optimization 2021-02 Thesis http://psasir.upm.edu.my/id/eprint/112655/ http://psasir.upm.edu.my/id/eprint/112655/1/FS%202021%2069%20-%20IR.pdf text en public doctoral Universiti Putra Malaysia Newton-Raphson method Constrained optimization Leong, Wah June English
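The weak secant equation mentioned in the abstract constrains the updated Hessian approximation only along the latest step: s_kᵀ B_{k+1} s_k = s_kᵀ y_k. As a rough illustration (a sketch, not the thesis's exact formula), a least-change diagonal update satisfying this condition under the Frobenius norm can be written in closed form:

```python
import numpy as np

def diagonal_weak_secant_update(b, s, y):
    """Least-change diagonal update satisfying the weak secant equation
    s^T B_{k+1} s = s^T y (an illustrative sketch only).

    b: current diagonal of B_k
    s: step, x_{k+1} - x_k
    y: gradient difference, g_{k+1} - g_k
    """
    # Minimizing ||B_{k+1} - B_k||_F over diagonal matrices subject to
    # the weak secant equation gives b_i <- b_i + mu * s_i^2, where
    # mu = (s^T y - s^T B_k s) / sum_i s_i^4  (via a Lagrange multiplier).
    denom = np.sum(s**4)
    if denom == 0.0:
        return b  # zero step: leave the approximation unchanged
    mu = (s @ y - (b * s) @ s) / denom
    return b + mu * s**2
```

By construction the new diagonal satisfies (b_new * s) · s = s · y, since only a rank-one correction along s² is applied; the update stores and manipulates vectors only, in keeping with the matrix-free theme of the thesis.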
institution Universiti Putra Malaysia
collection PSAS Institutional Repository
language English
English
advisor Leong, Wah June
topic Newton-Raphson method
Constrained optimization

spellingShingle Newton-Raphson method
Constrained optimization

Lim, Keat Hee
Quasi-Newton type method via weak secant equations for unconstrained optimization
description In this thesis, variants of quasi-Newton methods are developed for solving unconstrained optimization problems. These quasi-Newton type methods differ from the standard quasi-Newton methods in the calculation and storage of the approximate Hessian matrix at every iteration. Using the concept of the least-change updating strategy, two updating formulas are derived by means of a variational problem, via the weak secant equation and some other non-secant equations. The convergence analysis for these methods is presented under an inexact line search with the Armijo condition. Motivated by the idea of a memoryless scheme, the proposed formulas are further modified so that only vector computations and storage are required at every iteration. In other words, these memoryless updating formulas can approximate the quasi-Newton direction in a matrix-free setting. The Armijo condition is implemented in the algorithms to ensure the monotone property at each iteration. The convergence analysis is presented for these memoryless methods under some standard assumptions. Numerical experiments are carried out on standard test problems and show that the proposed methods are superior to some existing conjugate gradient methods in terms of the iterations and function evaluations required to reach the optimal solutions. Possible variants of matrix-free quasi-Newton methods are further explored using the weak secant equation. A diagonal updating formula is derived by minimizing the magnitude of the updating matrix under the Frobenius norm. This formula is then generalized by using a weighted Frobenius norm in the derivation, which gives a weighted diagonal updating formula in which the previous diagonal matrix is chosen as the weighting matrix. The convergence analysis is established for these formulas under two line search strategies, namely monotone and non-monotone line search.
Numerical experiments are conducted to test their effectiveness on a standard test set. The results indicate that the diagonal updating formula is superior to its weighted version and to some conjugate gradient methods. The formula works best when a monotone line search is implemented, often requiring fewer iterations and function evaluations to obtain the solutions. Overall, the numerical results show that the proposed methods are superior in terms of the number of iterations and function evaluations. Furthermore, the diagonal updating formulas with non-monotone line search are more efficient than some classical conjugate gradient methods in all three performance measures, including CPU time.
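The Armijo (sufficient decrease) condition used throughout the thesis is commonly enforced by backtracking. A generic sketch, with illustrative parameter names not taken from the thesis:

```python
import numpy as np

def armijo_backtracking(f, g, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Backtracking line search enforcing the Armijo condition
    f(x + a*d) <= f(x) + sigma * a * g(x)^T d  (generic sketch).

    f: objective, g: gradient, x: current point, d: descent direction.
    """
    fx = f(x)
    slope = g(x) @ d  # directional derivative; assumed negative for descent
    a = alpha0
    while f(x + a * d) > fx + sigma * a * slope:
        a *= beta  # shrink the step until sufficient decrease holds
    return a
```

Each accepted step then yields a monotone decrease in f, which is the "monotone property" the abstract refers to; the non-monotone variants relax the reference value fx to the maximum over several recent iterates.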
format Thesis
qualification_level Doctorate
author Lim, Keat Hee
author_facet Lim, Keat Hee
author_sort Lim, Keat Hee
title Quasi-Newton type method via weak secant equations for unconstrained optimization
title_short Quasi-Newton type method via weak secant equations for unconstrained optimization
title_full Quasi-Newton type method via weak secant equations for unconstrained optimization
title_fullStr Quasi-Newton type method via weak secant equations for unconstrained optimization
title_full_unstemmed Quasi-Newton type method via weak secant equations for unconstrained optimization
title_sort quasi-newton type method via weak secant equations for unconstrained optimization
granting_institution Universiti Putra Malaysia
publishDate 2021
url http://psasir.upm.edu.my/id/eprint/112655/1/FS%202021%2069%20-%20IR.pdf
_version_ 1811767790006173696