The Complete SPRLIB & ANNLIB

train_som

- a generic front-end for SOM-training

SYNOPSIS

void train_som (net, dset, alpha, alpha_c, rad, rad_c, opt, cycles)

ARGUMENTS

NET *net The SOM to be trained
DATASET *dset Dataset to be used for training
double alpha Initial learning rate
double alpha_c Learning rate in convergence phase
double rad Initial neighbourhood width
double rad_c Final neighbourhood width
long opt A flag (see below)
long cycles A parameter (max_cycles) determining the number of cycles the SOM is trained for

FUNCTION

A generic front-end for SOM training. It expects an initial (rad) and final (rad_c) neighbourhood width, an initial (alpha) and final (alpha_c) learning rate, a parameter determining the number of learning cycles (cycles, called max_cycles below), a training set (dset) and an already created and initialised SOM (net).

Three constants are used (and may be customised): CPE = 100, so the SOM is trained in batches of 100 cycles; ORDI = 0.1 and CONV = 9 determine the number of cycles in each phase. Phase I, the unfolding phase, consists of a first stage of ORDI*max_cycles cycles and a second stage of (1-ORDI)*max_cycles cycles; phase II, the tuning phase, lasts CONV*max_cycles cycles. Hence the SOM is trained for (CONV+1)*max_cycles cycles in total.

During the first stage of the unfolding phase, the map is trained with the initial learning rate and the initial neighbourhood width. During the second stage, the learning rate stays at its initial value while the neighbourhood width decreases exponentially from its initial value to 1. During the tuning phase, the final learning rate and final neighbourhood width are used.

RETURNS

Nothing.

NOTE

This function uses the function koh_learn. The opt parameter in the argument list corresponds to the options parameter of that function. Moreover, the SOM must be initialised before this function is called.

SEE ALSO

koh_learn

This document was generated using api2html on Thu Mar 5 09:00:00 MET DST 1998