Optimization Methods In Training Neural Networks

There are a number of extremizing techniques to solve linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, it requires...

Bibliographic Details
Main Author: Sathasivam, Saratha
Format: Thesis
Language:English
Published: 2003
Subjects: QA1 Mathematics (General)
Online Access:http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf
id my-usm-ep.31158
record_format uketd_dc
spelling my-usm-ep.31158 2017-07-27T04:46:59Z Optimization Methods In Training Neural Networks 2003-07 Sathasivam, Saratha QA1 Mathematics (General) There are a number of extremizing techniques to solve linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, it requires calculation and storage of the second derivatives of the quadratic function involved. When the number of parameters, n, is large, it may be impractical to compute all the second derivatives. This is especially true for neural networks, where practical applications can require several hundred to many thousands of weights. For these particular cases, methods that require only first derivatives but still have quadratic termination are preferred. 2003-07 Thesis http://eprints.usm.my/31158/ http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf application/pdf en public masters Universiti Sains Malaysia Pusat Pengajian Sains Matematik
institution Universiti Sains Malaysia
collection USM Institutional Repository
language English
topic QA1 Mathematics (General)
spellingShingle QA1 Mathematics (General)
Sathasivam, Saratha
Optimization Methods In Training Neural Networks
description There are a number of extremizing techniques to solve linear and nonlinear algebraic problems. Newton's method has a property called quadratic termination, which means that it minimizes a quadratic function exactly in a finite number of iterations. Unfortunately, it requires calculation and storage of the second derivatives of the quadratic function involved. When the number of parameters, n, is large, it may be impractical to compute all the second derivatives. This is especially true for neural networks, where practical applications can require several hundred to many thousands of weights. For these particular cases, methods that require only first derivatives but still have quadratic termination are preferred. (See the illustrative sketch following this record.)
format Thesis
qualification_level Master's degree
author Sathasivam, Saratha
author_facet Sathasivam, Saratha
author_sort Sathasivam, Saratha
title Optimization Methods In Training Neural Networks
title_short Optimization Methods In Training Neural Networks
title_full Optimization Methods In Training Neural Networks
title_fullStr Optimization Methods In Training Neural Networks
title_full_unstemmed Optimization Methods In Training Neural Networks
title_sort optimization methods in training neural networks
granting_institution Universiti Sains Malaysia
granting_department Pusat Pengajian Sains Matematik
publishDate 2003
url http://eprints.usm.my/31158/1/SARATHA_SATHASIVAM.pdf
_version_ 1747820400697409536
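
The abstract contrasts Newton's method, which needs the full set of second derivatives, with methods that use only first derivatives yet retain quadratic termination; conjugate gradient is the standard example of the latter. The sketch below is illustrative only and is not drawn from the thesis: it applies Fletcher-Reeves conjugate gradient to a small quadratic, where the matrix A, the vector b, and all function names are assumptions made for this example. With exact line searches it reaches the minimizer in at most n iterations while evaluating nothing but gradients.

    import numpy as np

    def conjugate_gradient(grad, w, n_params, tol=1e-10):
        # Fletcher-Reeves conjugate gradient: uses only first derivatives,
        # yet minimizes a quadratic exactly in at most n_params iterations.
        g = grad(w)
        d = -g                                  # first direction: steepest descent
        for _ in range(n_params):
            eps = 1e-6
            Ad = (grad(w + eps * d) - g) / eps  # Hessian-vector product from gradients only
            alpha = -(g @ d) / (d @ Ad)         # exact line search for a quadratic
            w = w + alpha * d
            g_new = grad(w)
            if np.linalg.norm(g_new) < tol:
                break
            beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        return w

    # Hypothetical 3-parameter quadratic 0.5*w'Aw - b'w with a symmetric
    # positive-definite Hessian A (values chosen only for illustration):
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 2.0]])
    b = np.array([1.0, 2.0, 3.0])
    grad = lambda w: A @ w - b                  # gradient of the quadratic
    w_star = conjugate_gradient(grad, np.zeros(3), n_params=3)
    print(np.allclose(A @ w_star, b))           # True: minimizer found in <= 3 steps

The Hessian-vector product is recovered from a difference of gradients, which is exact for a quadratic objective, so no second derivatives are ever formed or stored.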