Neuroscience / Multilayer perceptron / Perceptron / Backpropagation / Artificial neural network / Recurrent neural network / Neural networks / Cybernetics / Statistics


On dynamic programming-like recursive gradient formula for alleviating MLP hidden-node saturation in the parity problem
Eiji Mizutani

Document Date: 2008-07-16 21:59:51



File Size: 186.48 KB


City

Natick / Hsinchu / Washington, D.C. / Como

Company

Princeton University Press / MathWorks Inc. / Neural Information Processing Systems / Neural Networks

Country

Taiwan / Italy


Facility

Prentice Hall / National Tsing Hua University

IndustryTerm

five algorithms / adaptive networks / direct dogleg algorithms / representative batch-mode algorithms / effective algorithms / derivative-computing process / gradient algorithms / online-mode steepest descent / deterministic algorithm / batch-mode direct dogleg trust-region algorithm / direct dogleg algorithm / dogleg algorithms / learning algorithms / steepest descent-type algorithm

Organization

World Congress / Princeton University / National Tsing Hua University / Univ. of California

Person

James W. Demmel / Jyh-Shing Roger Jang / Stuart E. Dreyfus / Eiji Mizutani / Henry J. Kelley

Position

representative

ProgrammingLanguage

J

ProvinceOrState

South Dakota / California / Massachusetts

Technology

steepest descent-type algorithm / direct dogleg algorithm / existing NN-learning algorithms / Neural Network / MLP-learning algorithms / deterministic algorithm / gradient algorithms / direct dogleg algorithms / Dogleg algorithm / dogleg algorithms / quasi-Newton algorithms / two representative batch-mode algorithms / NN algorithms / five algorithms / batch-mode direct dogleg trust-region algorithm

SocialTag