Perceptron Learning

Introduction

This applet demonstrates a simple form of supervised learning called the perceptron learning rule.

Using this applet, you can train the perceptron to act as a binary logic unit. A perceptron can compute any linearly separable Boolean function, which covers 14 of the 16 possible 2-input functions. The two exceptions are XOR and XNOR: because they are not linearly separable, the perceptron learning rule never converges on them. The applet provides a work-around for this problem by introducing an extra input.
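
To make this concrete before you run the applet, here is a minimal sketch of the training loop in Java (the applet's own source is not shown on this page, so the class and method names below are illustrative). Each weight is nudged by the perceptron learning rule, w_i <- w_i + eta * (t - y) * x_i, stated more formally in the Theory section below. The loop learns AND easily but never converges on XOR; the work-around at the end adds a third input equal to x1 AND x2, one standard choice for the extra input (whether the applet uses exactly this one is an assumption), which makes XOR linearly separable.

/** Minimal perceptron trained with the perceptron learning rule (illustrative sketch). */
public class PerceptronDemo {

    static int step(double a) { return a >= 0 ? 1 : 0; }       // unit step activation

    /** Trains w (last entry is the bias) on binary patterns x with targets t;
        returns true if every pattern is classified correctly within maxEpochs. */
    static boolean train(double[] w, int[][] x, int[] t, double eta, int maxEpochs) {
        for (int epoch = 0; epoch < maxEpochs; epoch++) {
            int errors = 0;
            for (int p = 0; p < x.length; p++) {
                double a = w[w.length - 1];                    // bias (its input is fixed at 1)
                for (int i = 0; i < x[p].length; i++) a += w[i] * x[p][i];
                int y = step(a);
                int e = t[p] - y;                              // error drives the update:
                if (e != 0) {                                  // w_i <- w_i + eta * e * x_i
                    errors++;
                    for (int i = 0; i < x[p].length; i++) w[i] += eta * e * x[p][i];
                    w[w.length - 1] += eta * e;
                }
            }
            if (errors == 0) return true;                      // all patterns correct: converged
        }
        return false;                                          // e.g. XOR: never converges
    }

    public static void main(String[] args) {
        int[][] x2 = {{0,0},{0,1},{1,0},{1,1}};
        int[] and = {0,0,0,1}, xor = {0,1,1,0};
        System.out.println("AND learned: " + train(new double[3], x2, and, 0.1, 100)); // true
        System.out.println("XOR learned: " + train(new double[3], x2, xor, 0.1, 100)); // false

        // Work-around: a third input x3 = x1 AND x2 makes XOR linearly separable.
        int[][] x3 = {{0,0,0},{0,1,0},{1,0,0},{1,1,1}};
        System.out.println("XOR + extra input: " + train(new double[4], x3, xor, 0.1, 100)); // true
    }
}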

Credits

The original applet was written by Fred Corbett. These pages were modified by Olivier Michel and Alix Herrmann.


Theory

Click on each topic to learn more, then scroll down to the applet.
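
As a quick recap of the standard formulation (textbook material, not reproduced from the linked theory pages), the perceptron and its learning rule can be written as

$$y = H\Big(\sum_i w_i x_i + b\Big), \qquad \Delta w_i = \eta\,(t - y)\,x_i, \qquad \Delta b = \eta\,(t - y),$$

where $H$ is the unit step (Heaviside) function, $t$ the desired output, $y$ the actual output, and $\eta$ the learning rate. The perceptron convergence theorem guarantees that this procedure terminates in a finite number of updates whenever the training patterns are linearly separable.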


Applet

(You may need to resize your browser window to see the whole applet.)

Like the simple neuron in the first tutorial, the simple perceptron below has just two inputs. The difference is that here the learning rule has been implemented.

Click here to see the instructions. You may find it helpful to open them in a separate browser window, so you can view them at the same time as the applet.

Questions

  1. Find out which patterns can be learned with the unit step activation function (one common definition of each activation function is sketched after this list). How many iterations are needed on average?
  2. As above, for the sigmoid activation function.
  3. As above, for the piecewise linear activation function.
  4. As above, for the Gaussian activation function, but first try to guess what will happen. Can it learn anything at all?
  5. The linear associator has no nonlinearity (its activation function is the identity). Can it learn the same patterns as the unit step, sigmoid, or piecewise linear neuron? What is the role of the nonlinearity?
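
For reference when answering these questions, here is one common textbook definition of each activation function named above, again sketched in Java; the exact slopes, widths, and thresholds the applet uses are assumptions.

/** Common definitions of the activation functions used in the questions (sketch;
    the slope, width, and saturation values the applet actually uses are assumptions). */
public class Activations {
    static double unitStep(double a)        { return a >= 0 ? 1.0 : 0.0; }

    static double sigmoid(double a)         { return 1.0 / (1.0 + Math.exp(-a)); }

    // Linear (slope 1) between the saturation points, clipped to [0, 1].
    static double piecewiseLinear(double a) { return Math.max(0.0, Math.min(1.0, a + 0.5)); }

    // Bell-shaped: maximal at a = 0 and decaying on both sides (unit width assumed).
    static double gaussian(double a)        { return Math.exp(-a * a); }

    // Linear associator: the identity, i.e. no nonlinearity at all.
    static double identity(double a)        { return a; }
}

Note that the Gaussian, unlike the others, is not monotonic in the weighted sum, which is exactly the property question 4 is probing.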


[Neural Java home page]