Feed forward networks

The library offers a number of powerful routines for the standard feed forward network type (see [Rumelhart 1986]).

Network creation and manipulation
Routines are available to create and manipulate a network. A feed forward network with any number of layers and any number of hidden units per layer can be created with a single library function, and another routine adds hidden units to an existing network. An example of a function used in this context is create_ff_net.
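As a minimal sketch, a creation call might look as follows; the NET type name and the argument list are assumptions made for illustration, only the name create_ff_net is taken from this section (the reference part gives the actual signature).

    /* Hypothetical sketch: the NET type and the argument list of
       create_ff_net are assumed, not taken from the reference part. */
    int units[] = { 8, 4, 2 };           /* inputs, hidden units, outputs */
    NET *net = create_ff_net(3, units);  /* three layers */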
Network evaluation and performance estimation
A feed forward network is evaluated for a given input with one function call. The performance of a network on a test set can be estimated per sample and for complete datasets. The following measurements are supported: the mean squared error, the maximum error, the minimum error, the gradient with respect to the weights, the percentage of samples classified correctly, and a Boolean that is true if all samples (or outputs) are correct; see the network error structure (ERROR_STRUCT). An example of a function used in this context is grad_net_mse.
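A performance estimate might be sketched as follows; the argument order and the structure's field names are assumptions, only grad_net_mse and ERROR_STRUCT appear in this section.

    /* Hypothetical sketch: argument order and field names are assumed. */
    #include <stdio.h>

    ERROR_STRUCT err;
    grad_net_mse(net, test_set, &err);   /* fill the error structure */
    printf("MSE %g, %g%% correct\n", err.mse, err.correct);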
Network training
The standard backpropagation algorithm (see [Rumelhart 1986]) is implemented with low level functions. This makes it possible to change parts of the algorithm without rewriting it completely. A user who just wants vanilla backpropagation can use the high level routines. This set-up allows maximum flexibility for experiments. The function learn_mlnet, for instance, is used to train a maximum likelihood network.
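A high level training call might be used as sketched below; the argument list of learn_mlnet is an assumption, only the function name is taken from this section.

    /* Hypothetical sketch: learn_mlnet's argument list is assumed. */
    learn_mlnet(net, train_set, 1000);   /* e.g. train for 1000 iterations */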
Network code generation
A network that is trained and found to be satisfactory can be converted to an equivalent C source file. Using such a network solution within an application then requires neither loading a network nor linking the library; a compact and fast solution is generated. The function make_source can be used to create the source file.
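A sketch of this step; the argument list of make_source is an assumption made for illustration.

    /* Hypothetical sketch: make_source's argument list is assumed. */
    make_source(net, "my_net.c");   /* write a stand-alone C equivalent */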

Besides standard backpropagation, different learning methods can be used to train the network. A conjugate gradient descent method (learn_cgdes_ffnet) is implemented for learning the network weights (see [Press 1988]). The Levenberg-Marquardt method (learn_marquardt_ffnet) and the pseudo-Newton method (pn_learn) are well-known methods in numerical optimization (see [Sydenham 1982]); both can be used for network training. If the weights in the hidden layers are fixed and the output unit is chosen linear, the output weights can be determined with a pseudo-inverse matrix method. This is a common method in regression analysis and is called the Wiener weight solution in [Widrow 1985]. The pseudo-inverse method is supported in the library (wien_output_ff).
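In the textbook least-squares formulation (the exact formulation used by wien_output_ff is not spelled out in this section), with $H$ the matrix whose rows are the hidden layer outputs for the training samples and $\mathbf{t}$ the vector of targets, the linear output weights follow from the pseudo-inverse:

    \[
      \mathbf{w} = (H^{\top} H)^{-1} H^{\top} \mathbf{t} = H^{+} \mathbf{t}
    \]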

The feed forward networks currently supported are:

Multilayer perceptron
This is the classical feed forward network, which is usually trained with backpropagation but can also be trained using the aforementioned algorithms.
Shared weights network
This network type was introduced by Le Cun [le Cun 1989] and uses a receptive fields approach: instead of separate links from unit to unit, units share links. For a detailed description of this network and its applications, the reader is referred to de Ridder [de Ridder 1996].
Maximum likelihood network
This type of network is also known as the radial basis function network and uses special updating and activation routines; the standard radial basis activation is sketched after this list.
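As a sketch of the standard Gaussian radial basis activation (the exact form used by the library is not specified in this section), the network output for an input $\mathbf{x}$ is a weighted sum of basis functions with centres $\boldsymbol{\mu}_j$ and widths $\sigma_j$:

    \[
      y(\mathbf{x}) = \sum_{j=1}^{h} w_j
        \exp\!\left( -\frac{\lVert \mathbf{x} - \boldsymbol{\mu}_j \rVert^{2}}{2\sigma_j^{2}} \right)
    \]

In the textbook formulation the centres and widths are adapted alongside the weights $w_j$, which is what calls for special updating routines.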

Both the shared weights network and the multilayer perceptron can be trained with the learning algorithms implemented by the library (see the reference part). The maximum likelihood network uses its own routines for training.

