The Complete SPRLIB & ANNLIB

learn_bfgs_ffnet

- apply the BFGS optimization method to train a feedforward neural network

SYNOPSIS

int learn_bfgs_ffnet (net, dset, steps, options)

ARGUMENTS

NET *net Pointer to a feedforward or radial basis network.
DATASET *dset Pointer to a matching IOSET or LEARNSET.
long steps Number of BFGS steps to perform.
long options If TRUE, the MSE is reported to stdout after each iteration.

RETURNS

TRUE if an error was detected, FALSE otherwise.

FUNCTION

Adapts the weights of the network under investigation using the Broyden-Fletcher-Goldfarb-Shanno variant of the Davidon-Fletcher-Powell minimization algorithm. The weights are copied into a vector p, which is then passed in a call to the dfpminimize function.
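A minimal usage sketch of the calling convention described above. The helper names make_my_net and load_my_dataset are hypothetical stand-ins for application code that constructs the network and data set; the exact SPRLIB/ANNLIB header name may differ per installation.

```c
#include <stdio.h>
/* #include the SPRLIB/ANNLIB headers for NET, DATASET, TRUE, FALSE
 * and learn_bfgs_ffnet; the header name depends on the installation. */

/* Hypothetical helpers, not part of SPRLIB/ANNLIB. */
extern NET *make_my_net(void);
extern DATASET *load_my_dataset(void);

int main(void)
{
    NET *net = make_my_net();
    DATASET *dset = load_my_dataset();

    /* Train for 50 BFGS steps, printing the MSE after each iteration.
     * The function returns TRUE if an error was detected. */
    if (learn_bfgs_ffnet(net, dset, 50L, TRUE)) {
        fprintf(stderr, "learn_bfgs_ffnet reported an error\n");
        return 1;
    }
    return 0;
}
```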

NOTE

The functions bfgs_init and bfgs_free are called by this function; the user program does not have to call them. Note that, unlike with the back-propagation algorithm, the number of steps specified matters: the algorithm builds an approximation of the second-order derivative matrix (the Hessian) during the process, and this information is discarded when the function exits. Each call to this function is therefore essentially a restart of the algorithm; for example, 10 calls to this function with steps = 2 (the minimum) may yield different results than 2 calls with steps = 10. Finally, it is important to note that this function does not currently distinguish between active and inactive weights (i.e., weights with either a VARWEIGHT or FIXWEIGHT weight flag; see WEIGHT-flags): fixed weights are trained just like variable weights.
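The restart behaviour described above can be sketched as follows (net and dset as in the ARGUMENTS section; this is an illustration of the documented semantics, not a recommendation of either variant):

```c
/* One call with 20 steps: the Hessian approximation is built up
 * and reused across all 20 iterations. */
learn_bfgs_ffnet(net, dset, 20L, FALSE);

/* Ten calls with 2 steps each: also 20 iterations in total, but the
 * Hessian approximation is discarded and rebuilt on every call, so
 * the result may differ from the single 20-step call above. */
for (int i = 0; i < 10; i++)
    learn_bfgs_ffnet(net, dset, 2L, FALSE);
```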

SEE ALSO

dfpminimize, erf_bfgs, derf_bfgs, bfgs_free, bfgs_init, BFGS-variables, NET-flags, DATASET-flags, CDES-flags

This document was generated using api2html on Thu Mar 5 09:00:00 MET DST 1998