The Complete SPRLIB & ANNLIB

bp_adapt_net

- perform one learning cycle using the backpropagation rule

SYNOPSIS

int bp_adapt_net (net, eta, alpha, target, options)

ARGUMENTS

NET *net A pointer to a NET.
double eta The coefficient eta (the learning rate) of the backpropagation rule.
double alpha The coefficient alpha (the momentum term) of the backpropagation rule.
double *target The target output of the network.
long options See the note in bp_adapt_unit.

RETURNS

TRUE if an error was detected, FALSE otherwise.

FUNCTION

bp_adapt_unit is called once for each UNIT in net. If the network is a shared weights network (i.e., when NetFlag = SHAREDNET), another algorithm is invoked: starting at the output layer, bp_adapt_unit is called for all units in one layer, followed by one call to bp_adapt_weights; then the units in the next layer are processed, and so on until the first layer is reached.

NOTE

The following flags can be specified in the options parameter:
HISTU (Don't) store history of unit values at each update.
HISTT (Don't) store history of unit thetas at each update.
HISTW (Don't) store history of weights at each update.
BPACCUM (Don't) accumulate the deltas.
BPUPDATE (Don't) update the weights.

See bp_adapt_unit for an explanation of the possible flag combinations. This function is used by bp_learn.

SEE ALSO

bp_adapt_unit, bp_adapt_weights, bp_learn

This document was generated using api2html on Thu Mar 5 09:00:00 MET DST 1998