An artificial neural network is an information or signal processing system composed of a large number of simple processing elements, called artificial neurons or simply nodes, which are interconnected by direct links called connections and which cooperate to perform parallel distributed processing in order to solve a desired computational task. The potential benefits of neural networks extend beyond the high computation rates provided by massive parallelism. A neural network model is specified by the net topology, the node characteristics, and the training or learning rules. These rules specify an initial set of weights and indicate how the weights should be adapted during use to improve performance. Roughly speaking, the tasks to which neural networks are applied fall into two categories: natural problems and optimization problems. Natural problems, such as pattern recognition, are typically implemented on a feed-forward neural network.
Optimization problems are typically implemented on a feedback network, in which the neurons are interconnected through feedback paths. A typical feedback neural network is the Hopfield neural network [Hop85]. Figure 4 shows the circuit structure of the neuron and its functional structure. The dynamics of neuron i are described by the differential equation

C_i \frac{du_i}{dt} = \sum_{j=1}^{n} T_{ij} v_j - \frac{u_i}{R_i} + I_i \qquad (1)

where v_j = g(u_j) (j = 1, 2, ..., n) and g(.) is the sigmoid activation function. It is shown in [Hop85] how to choose the values of the synapses T_ij so that (1) represents the dynamics corresponding to a given energy function. If the energy function corresponds to an optimization objective, then initializing the u_i's to some starting configuration results in an equilibration that settles to a local minimum of the objective function. One famous example using neural networks is the Traveling Salesman Problem (TSP) [Wil88], in which a salesman is supposed to tour a number of cities (visiting each exactly once, then returning to where he started) and wishes to minimize the total distance of the tour.
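To make the dynamics in (1) concrete, the following minimal Python sketch (not from the original text) Euler-integrates the equation with g taken as tanh, uniform C_i = C and R_i = R, and random symmetric synapses, then evaluates Hopfield's energy function at the start and end of the run; the decrease of E is what "settles to a local minimum" means in practice. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_hopfield(T, I, u0, dt=0.01, steps=3000, C=1.0, R=1.0):
    """Euler-integrate the Hopfield dynamics of equation (1),
        C du_i/dt = sum_j T_ij v_j - u_i/R + I_i,  v_j = g(u_j) = tanh(u_j),
    with uniform C_i = C and R_i = R.  Returns the trajectory of v."""
    u = u0.copy()
    vs = []
    for _ in range(steps):
        v = np.tanh(u)                       # g(.) taken as tanh here
        u = u + dt * (T @ v - u / R + I) / C
        vs.append(v)
    return np.array(vs)

def energy(T, I, v, R=1.0):
    """Hopfield's Lyapunov function for symmetric T:
        E = -1/2 v'Tv + (1/R) sum_i G(v_i) - I'v,
    with G(v) the integral of g^{-1} from 0 to v; for g = tanh,
    G(v) = v*arctanh(v) + ln(1 - v^2)/2.  Along (1), dE/dt <= 0."""
    G = v * np.arctanh(v) + 0.5 * np.log(1.0 - v ** 2)
    return -0.5 * v @ T @ v + G.sum() / R - I @ v

# Toy run: random symmetric synapses stand in for an "objective";
# a random initial state equilibrates to a local minimum of E.
rng = np.random.default_rng(0)
n = 8
T = rng.standard_normal((n, n))
T = (T + T.T) / 2                            # symmetry is what guarantees descent
np.fill_diagonal(T, 0.0)
I = rng.standard_normal(n)
vs = simulate_hopfield(T, I, 0.1 * rng.standard_normal(n))
print("E(start) =", energy(T, I, vs[0]), " E(end) =", energy(T, I, vs[-1]))
```

For symmetric T, the continuous dynamics can only decrease E, so up to discretization error the final printed energy never exceeds the initial one.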
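The TSP mapping itself can be sketched the same way. In the standard Hopfield-Tank formulation (stated here as an assumption; the exact penalty form used in [Hop85] and [Wil88] may differ in detail), neuron (x, i) encodes "city x occupies tour position i", and the constraint and tour-length penalties of the energy function translate directly into synapses T and biases I; the weights A, B, C, D below are illustrative.

```python
import numpy as np

def tsp_weights(d, A=500.0, B=500.0, C=200.0, D=500.0):
    """Map an n-city TSP with symmetric distance matrix d onto equation (1):
    neuron (x, i) means "city x occupies tour position i".  The constraint
    penalties (A: one position per city, B: one city per position, C: exactly
    n active neurons) and the tour-length term (D) become synapses T and
    biases I.  The penalty weights are illustrative and need tuning."""
    n = d.shape[0]
    N = n * n
    T = np.zeros((N, N))
    I = np.full(N, C * n)                    # linear part of the C-penalty
    idx = lambda x, i: x * n + i
    for x in range(n):
        for i in range(n):
            a = idx(x, i)
            for y in range(n):
                for j in range(n):
                    w = -C                   # global inhibition
                    if x == y and i != j:
                        w -= A               # same city, two positions
                    if i == j and x != y:
                        w -= B               # same position, two cities
                    if j == (i + 1) % n or j == (i - 1) % n:
                        w -= D * d[x, y]     # adjacent positions cost distance
                    T[a, idx(y, j)] = w
    np.fill_diagonal(T, 0.0)                 # no self-connections
    return T, I
```

Passing this T and I to the integrator above and reshaping the equilibrium v into an n-by-n matrix yields a soft assignment of cities to tour positions; as later analyses of the Hopfield-Tank approach showed, obtaining valid, short tours depends heavily on the choice of penalty weights.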