Neural Networks

From: Pete Turnbull <pete_at_dunnington.u-net.com>
Date: Thu Apr 2 20:25:19 1998

On Apr 2, 17:33, Max Eskin wrote:
> Subject: Neural Networks
> I know we discussed this earlier, so the replies can be private, if
> you wish, but it seems that some people here are familiar with the
> field.
>
> My question is this. My understanding of neural networks is a bunch
> of neurons, all more or less randomly connected, with one output
> and an arbitrary number of inputs; if the sum of the inputs equals
> a certain predetermined level, the neuron sends a pulse on the output,
> to trigger other neurons.
> Could someone please complicate the picture for me?

Sure :-)

What you describe is not quite right; they're not usually totally randomly
interconnected. In a "conventional" multilayer network the neurons are arranged
in layers, typically three: an input layer, a hidden layer, and an output
layer. There are various methods for adjusting the weights on the neurons'
inputs, and propagating changes backwards through the network, in order to
"teach" it. It's a slow process, involving a lot of repetition, large amounts
of training data, and various formulae to do the back-propagation and also to
determine when you've done enough teaching (which basically means deciding when
you've minimised the errors). If you overteach such networks, their performance
on inputs they haven't seen before can actually decline.
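
To make that a bit less abstract, here's a rough sketch of a single teaching
step for a small three-layer network, in Python with NumPy. It's illustrative
code of my own, not from any particular textbook; the layer sizes, the sigmoid
squashing function, and the learning rate are all arbitrary choices.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))    # input -> hidden weights (2 inputs, 3 hidden)
W2 = rng.normal(size=(3, 1))    # hidden -> output weights (1 output)
LR = 0.5                        # learning rate, an arbitrary choice

def train_step(X, y):
    """One forward pass plus one back-propagation pass."""
    global W1, W2
    h = sigmoid(X @ W1)                    # hidden-layer activations
    out = sigmoid(h @ W2)                  # network outputs
    err = y - out
    d_out = err * out * (1 - out)          # error gradient at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)   # propagated back to the hidden layer
    W2 += LR * h.T @ d_out                 # nudge the weights downhill
    W1 += LR * X.T @ d_hid
    return float(np.mean(err ** 2))        # error measure, for the stopping test

You'd call train_step repeatedly with your training examples (inputs as the
rows of X, target outputs in y) and stop teaching once the returned error stops
falling.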

You can also build a single-layer network with just one neuron (they're usually
called perceptrons, BTW). However, single-layer networks are restricted to
distinguishing linearly-separable entities. In other words, if you plotted a
scatter chart with all the possible inputs represented as dots, you could
separate them into two types just by drawing a straight line through the chart.
If you have more than two types, you need more lines. With more than two input
criteria you get more dimensions (and you separate with planes etc. instead of
lines). The problem is, not all of the world is like that. A single-layer
network can't separate types if they aren't arranged in an appropriate way; the
simplest non-linearly-separable example is the XOR problem: two types, but
arranged like the pattern of 1s and 0s in an XOR truth table:

       1 0

       0 1

You can't draw a single line that separates the 0s from the 1s, so by
definition they're not linearly separable. A multilayer network handles this
easily, of course: the hidden layer effectively lets it draw more than one
line.
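
If you'd rather see that than take it on faith, here's a quick sketch (again
illustrative Python of my own) of the classic perceptron learning rule: it
converges on AND, which is linearly separable, and never settles on XOR.

import numpy as np

def train_perceptron(X, y, epochs=100):
    """Perceptron rule: nudge the weights whenever a point is misclassified."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            if pred != target:
                w += (target - pred) * xi
                b += target - pred
                mistakes += 1
        if mistakes == 0:
            return True                 # found a separating line
    return False                        # never settled

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, np.array([0, 0, 0, 1])))   # AND: True, separable
print(train_perceptron(X, np.array([0, 1, 1, 0])))   # XOR: False, no line exists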

Then there are Hopfield networks, in which every neuron is connected to all the
others; the feedback equations get quite interesting.
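
A minimal sketch of what that feedback does in practice, assuming the usual
Hebbian outer-product rule for storing a pattern (illustrative Python again,
not any particular implementation): corrupt a stored pattern and the
fully-connected net settles back to it.

import numpy as np

pattern = np.array([1, -1, 1, -1, 1, 1, -1, -1])   # one stored +/-1 pattern
W = np.outer(pattern, pattern).astype(float)        # Hebbian outer-product rule
np.fill_diagonal(W, 0)                              # no self-connections

state = pattern.copy()
state[0], state[3] = -state[0], -state[3]           # corrupt two bits
for _ in range(10):                                 # iterate until it settles
    prev = state.copy()
    for i in range(len(state)):                     # asynchronous updates
        state[i] = 1 if W[i] @ state >= 0 else -1
    if np.array_equal(state, prev):
        break
print(np.array_equal(state, pattern))               # True: pattern recalled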

And Kohonen networks. You don't teach them; they learn. *What* they learn may
take some figuring out...
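
To show what that learning looks like, here's an illustrative one-dimensional
Kohonen map (again Python of my own devising; the node count, learning rate,
and neighbourhood width are arbitrary). Nobody supplies right answers: each
input just pulls the best-matching node and its neighbours towards itself, and
the nodes organise themselves along the data.

import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 1, size=10)                # ten nodes, scalar weights

for _ in range(2000):
    x = rng.uniform(0, 1)                         # raw input, no right answer
    winner = int(np.argmin(np.abs(nodes - x)))    # best-matching node
    for i in range(len(nodes)):
        pull = np.exp(-((i - winner) ** 2) / 2.0) # neighbours move too, less
        nodes[i] += 0.1 * pull * (x - nodes[i])

print(np.round(nodes, 2))  # typically ends up ordered and spread over [0, 1]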

And binary networks like the one I was working with recently. You should be
able to find some information about that on our Department's web server (and
several other places too).
http://www.cs.york.ac.uk/arch/neural/

Have I confused you yet? :-)



-- 
Pete						Peter Turnbull
						Dept. of Computer Science
						University of York