The Complete SPRLIB & ANNLIB

pn_adapt_unit

- adapt a unit according to the pseudo-Newton variation on the generalized delta rule

SYNOPSIS

int pn_adapt_unit (unit, eta, alpha, mu, target, options, netflag)

ARGUMENTS

UNIT *unit A pointer to a UNIT.
double eta The coefficient eta (the learning rate) of the backpropagation rule.
double alpha The coefficient alpha (the momentum term) of the backpropagation rule.
double mu A small constant used to avoid instability near inflection points in the pseudo-Newton method (it keeps the denominator of the update away from zero where the second order gradient vanishes).
double target The target output for a unit (only required for output units).
long options See the note below.
long netflag Specifies whether a unit from a standard or from a shared weights feedforward network is to be adapted.

RETURNS

TRUE if an error was detected, FALSE if no error was detected.

FUNCTION

Applies the generalized delta rule to the unit under investigation, i.e. it computes the first and second order error gradients w.r.t. its feeding units and changes their weights accordingly (depending on the flags specified in options). In this case the generalized delta rule is slightly modified: the pseudo-Newton rule for updating the weights is applied - see also: Becker, S. and Le Cun, Y.: Improving the convergence of back-propagation learning with second order methods. In: Touretzky, D., Hinton, G., and Sejnowski, T., eds., Proc. 1988 Connectionist Models Summer School, CMU, Morgan Kaufmann, 1989. When netflag is SHAREDNET, the shared weights network variant of the rule is applied. Depending on the activation function of the unit, different computations are performed. The usual case with the generalized delta rule is an inner product activation function (ActInprod). Alternative activation functions are the Euclidean and squared Euclidean distance (e.g. to be used when training a radial basis function (RBF) neural network) - see ActSqEucDist, ActEucDist.
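The per-weight pseudo-Newton update can be sketched as follows. This is an illustrative standalone function, not the actual ANNLIB implementation; the exact form used by pn_adapt_unit may differ.

```c
#include <math.h>

/* Illustrative sketch of the pseudo-Newton per-weight update (not the
 * actual ANNLIB code). grad1 and grad2 are the first and second order
 * error gradients w.r.t. the weight; mu keeps the denominator away
 * from zero near inflection points, where grad2 vanishes; alpha adds
 * the usual momentum contribution of the previous weight change. */
double pn_delta_w(double grad1, double grad2, double eta, double mu,
                  double alpha, double prev_dw)
{
    return -eta * grad1 / (fabs(grad2) + mu) + alpha * prev_dw;
}
```

Note that for grad2 = 0 this reduces to plain gradient descent with an effective learning rate of eta/mu.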

NOTE

In options, the following flags can be specified:
HISTU (Don't) store history of unit values at each update.
HISTT (Don't) store history of unit thetas at each update.
HISTW (Don't) store history of weights at each update.
BPACCUM (Don't) accumulate the deltas.
BPUPDATE (Don't) update the weights.
BPOLDPRP (Don't) backpropagate with old = original weights.

Note that the HIST-flags and BP-flags may be ORed together. The backpropagation process is divided into two phases: first the weight change is computed (only if the BPACCUM flag is set), then the actual weight change is applied (only if the BPUPDATE flag is set). Also note that by default (when BPOLDPRP is not set) the newly calculated weight change takes effect immediately (assuming BPACCUM and BPUPDATE are set), whereas the standard backpropagation rule, in which weight changes become effective only in the next cycle, requires BPOLDPRP to be set. In effect, not setting BPOLDPRP causes a random disturbance of the minimization process, which might speed it up but could cause problems in difficult learning tasks. Furthermore, note the following for (BPACCUM, BPUPDATE) pairs:
0 0 Combination does not make sense (nothing is done).
0 1 Yields no backpropagation, only updates.
1 0 Yields only backpropagation, no updates.
1 1 Specifies normal mode, i.e. backpropagation and update.
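The two-phase semantics of these flag pairs can be illustrated with a toy stand-in. The flag values below are hypothetical, chosen only for this sketch; the real definitions come from the ANNLIB headers.

```c
/* Hypothetical flag values for illustration only; the real
 * definitions live in the ANNLIB headers. */
#define BPACCUM  (1L << 0)
#define BPUPDATE (1L << 1)

/* Toy model of the two-phase (BPACCUM, BPUPDATE) behaviour:
 * phase 1 accumulates the computed weight change, phase 2 applies
 * the accumulated change to the weight and resets the accumulator. */
void adapt(double *w, double *acc, double dw, long options)
{
    if (options & BPACCUM)          /* phase 1: accumulate delta */
        *acc += dw;
    if (options & BPUPDATE) {       /* phase 2: apply and reset  */
        *w += *acc;
        *acc = 0.0;
    }
}
```

Calling adapt with only BPACCUM over a batch of patterns and once with only BPUPDATE at the end mimics batch learning; ORing both flags together gives the normal per-pattern mode.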

SEE ALSO

pn_learn

This document was generated using api2html on Thu Mar 5 09:00:00 MET DST 1998