Having established the basics of neural nets in the previous chapters, let’s now have a look at some practical networks, their applications and how they are trained.
Many hundreds of Neural Network types have been proposed over the years. In fact, because Neural Nets are so widely studied (for example, by Computer Scientists,
Electronic Engineers, Biologists and Psychologists), they are given many different names. You’ll see them referred to as Artificial Neural Networks (ANNs),
Connectionism or Connectionist Models, Multi-layer Perceptrons (MLPs) and
Parallel Distributed Processing (PDP).
However, despite all the different terms and different types, there is a small group of “classic” networks which are widely used and on which many others are based. These are: Back Propagation, Hopfield Networks, Competitive Networks and networks using Spiky Neurons. There are many variations even on these themes. We’ll consider these networks in this and the following chapters, starting with Back Propagation.
3.1 The algorithm
Most people would consider the Back Propagation network to be the quintessential
Neural Net. Actually, Back Propagation [1,2,3] is the training or learning algorithm rather than the network itself. The network used is generally of the simple type shown in figure 1.1 in chapter 1, and in the examples up until now. These are called Feed-Forward Networks (we’ll see why in chapter 7 on Hopfield Networks) or occasionally Multi-Layer Perceptrons (MLPs).
The network operates in exactly the same way as the others we’ve seen (if you need to remind yourself, look at worked example 2.3). Now, let’s consider what Back
Propagation is and how to use it.
A Back Propagation network learns by example. You give the algorithm examples of what you want the network to do, and it changes the network’s weights so that, when training is finished, it will give you the required output for a particular input.
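To see what “learning by example” means in practice, here is a minimal sketch of such a training loop, assuming a two-layer sigmoid network trained by gradient descent on the squared output error (the function names, parameter values and the XOR example are illustrative assumptions, not taken from this chapter):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        # Logistic activation: squashes any value into the range (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    def train(examples, n_hidden=4, rate=0.5, epochs=5000):
        # Train a two-layer sigmoid network on (input, target) pairs.
        # A constant input of 1 is appended at each layer to act as a bias.
        n_in, n_out = len(examples[0][0]), len(examples[0][1])
        w_h = rng.uniform(-1, 1, (n_hidden, n_in + 1))   # hidden weights (+ bias)
        w_o = rng.uniform(-1, 1, (n_out, n_hidden + 1))  # output weights (+ bias)
        for _ in range(epochs):
            for x, t in examples:
                x = np.append(np.asarray(x, float), 1.0)
                t = np.asarray(t, float)
                # Forward pass: compute the network's current answer
                h = np.append(sigmoid(w_h @ x), 1.0)
                y = sigmoid(w_o @ h)
                # Backward pass: propagate the output error back through the net
                d_o = (t - y) * y * (1.0 - y)                       # output error terms
                d_h = (w_o.T @ d_o)[:-1] * h[:-1] * (1.0 - h[:-1])  # hidden error terms
                # Nudge each weight so the error on this example shrinks
                w_o += rate * np.outer(d_o, h)
                w_h += rate * np.outer(d_h, x)
        return w_h, w_o

    # Hypothetical usage: teach the network the XOR function by example
    xor = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
    w_h, w_o = train(xor)

After enough passes through the examples, the trained weights should reproduce each target from its input; how quickly (or whether) training converges depends on the learning rate and the random starting weights.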
Back Propagation