
Introduction

An obvious starting point for neural network research is the purchase of a commercially available neural network simulator, see, e.g., [Korn 1989], [NeuralWorks 1991], or one of the 66 other simulators in the overview by Murre [Murre 1992]. Such a simulator is generally equipped with an excellent graphical user interface and supports many network paradigms. Building an application with one of the standard built-in methods is easy, and the first results can be obtained within hours rather than days.

Problems arise when the application does not work with the standard solutions and it is not clear why. Generally, the possibility of adapting an algorithm, or of adding a newly developed algorithm to the simulator, is either absent or very hard to exploit. Even when such facilities exist, the documentation is complicated, and the user has to follow strictly prescribed rules to access or add parameters. More time is then spent on meeting the requirements of the simulator than on the creative work of designing algorithms or building the application. This appears to be a problem with many available neural network simulators.

In the environment in which this manual was written, that of pattern recognition research, these limitations proved to be a serious obstacle. In most cases our research goal is not to solve a particular pattern recognition task, but to gain a better understanding of the advantages and limitations of pattern classifiers. We investigate, for example, the influence of various learning rules and strategies, network topologies, network paradigms, and different kinds of training data on classification performance and on the dynamics of learning. For our purposes, the simulation environment should therefore offer much more flexibility in accessing low-level data than commercially available simulation environments generally allow.

On the other hand, the obvious alternative of implementing each algorithm in custom-made software is not very attractive either, since it involves a large amount of tedious and laborious coding. Moreover, it is not certain that code developed today can be reused for new implementations or algorithms tomorrow. This is rather inefficient, since a large part of a simulation program generally consists of code for disk I/O, presentation of results to the user, estimation of performance, and so on.

The intermediate approach advocated here regards a library of subroutines, operating on a set of powerful data structures, as the ideal vehicle for developing and simulating neural network algorithms. The library should contain general-purpose routines, such as those for disk I/O and error handling, as well as the standard set of learning algorithms. The advantage of this solution is that it offers the same functionality as a purchased simulator, but with much more flexibility. The price paid is a less refined user interface, a drawback that is easily accepted in a research environment.
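To make this idea concrete, the fragment below is a minimal sketch in C of a subroutine library built around openly accessible data structures. It is purely illustrative: the structures, routines, and names used (DataSet, Perceptron, perceptron_train, classification_error) are hypothetical and do not correspond to the actual SPRANNLIB interface; a simple perceptron learning rule stands in for the full set of learning algorithms.

/* Hypothetical sketch only -- NOT the SPRANNLIB API. It illustrates a
 * library of subroutines (a learning rule and an error estimator)
 * operating on openly accessible data structures. */
#include <stdio.h>

#define NDIM 2                       /* feature dimension of this toy example */

typedef struct {                     /* a labelled data set */
    int            nsamples;
    const double (*x)[NDIM];         /* feature vectors */
    const int     *label;            /* class labels: +1 or -1 */
} DataSet;

typedef struct {                     /* a linear classifier: sign(w.x + b) */
    double w[NDIM];
    double b;
} Perceptron;

/* one of the "standard learning algorithms" of the library */
static void perceptron_train(Perceptron *p, const DataSet *d,
                             double rate, int epochs)
{
    for (int e = 0; e < epochs; e++)
        for (int i = 0; i < d->nsamples; i++) {
            double s = p->b;
            for (int k = 0; k < NDIM; k++)
                s += p->w[k] * d->x[i][k];
            if (s * d->label[i] <= 0.0) {            /* misclassified: update */
                for (int k = 0; k < NDIM; k++)
                    p->w[k] += rate * d->label[i] * d->x[i][k];
                p->b += rate * d->label[i];
            }
        }
}

/* a general-purpose routine: estimate the classification error */
static double classification_error(const Perceptron *p, const DataSet *d)
{
    int wrong = 0;
    for (int i = 0; i < d->nsamples; i++) {
        double s = p->b;
        for (int k = 0; k < NDIM; k++)
            s += p->w[k] * d->x[i][k];
        if (s * d->label[i] <= 0.0)
            wrong++;
    }
    return (double)wrong / d->nsamples;
}

int main(void)
{
    const double x[4][NDIM] = { {0,0}, {0,1}, {1,0}, {1,1} };
    const int    y[4]       = { -1, -1, -1, +1 };    /* the logical AND problem */
    DataSet    d = { 4, x, y };
    Perceptron p = { {0.0, 0.0}, 0.0 };

    perceptron_train(&p, &d, 0.5, 20);

    /* the data structures stay accessible: weights can be inspected directly */
    printf("w = (%g, %g), b = %g, error = %g\n",
           p.w[0], p.w[1], p.b, classification_error(&p, &d));
    return 0;
}

Because the data structures are ordinary C records, a researcher can inspect or modify weights and samples directly between calls to the library routines; this is exactly the kind of low-level access that a closed simulator rarely offers.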

The subject of this introductory part is the design philosophy of such a simulation environment, and an implementation of it called SPRANNLIB.

