Neural network

Neural networks, or more properly artificial neural networks, are computer systems based on a connectionist approach to computation. Simple nodes (or "neurons", or "units") are connected together to form a network of nodes - hence the term "neural network". The original inspiration for the technique came from examination of the structures of the brain, and particularly of neurons.

Most researchers today would agree that artificial neural networks are quite different from the brain in terms of structure. Like a brain, however, a neural net is a [massively parallel]? collection of small and simple processing units where the interconnections form a large part of the network's intelligence. However, in terms of scale a brain is massively larger than any neural network, the units used in a neural network are typically far simpler than neurons, and the learning algorithms of the brain (whilst unknown) are almost certainly distinct from those of artificial neural networks.

A typical feedforward neural network consists of a set of nodes; some of these are designated input nodes, some output nodes, and those which are neither are referred to as hidden nodes. There are connections between the nodes, and a weight is associated with each connection. When the network is in operation, values are applied to the input nodes; these are then multiplied by the weights and a simple computation is performed in each node - taking the sigmoid of the sum of products of the inputs and the weights is typical. The results are then passed on to the next layer of nodes in the same way, until they reach the output nodes.
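
A minimal sketch of this forward pass, in Python with NumPy, assuming a single hidden layer, the sigmoid activation mentioned above, and purely illustrative layer sizes and weights:

 import numpy as np

 def sigmoid(x):
     return 1.0 / (1.0 + np.exp(-x))

 def forward(inputs, w_hidden, w_output):
     # Each hidden node takes the sigmoid of the sum of products of its
     # incoming values and the connection weights.
     hidden = sigmoid(inputs @ w_hidden)
     # The hidden results are passed on in the same way to the output nodes.
     return sigmoid(hidden @ w_output)

 # Illustrative network: 3 input nodes, 4 hidden nodes, 2 output nodes.
 rng = np.random.default_rng(0)
 w_hidden = rng.normal(scale=0.1, size=(3, 4))
 w_output = rng.normal(scale=0.1, size=(4, 2))
 print(forward(np.array([0.5, -1.0, 2.0]), w_hidden, w_output))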

Alternative calculation models in neural networks include models with loops, where some kind of time-delay process must be used, and "winner takes all" models, where the node with the highest value from the calculation fires and takes the value 1, and all other nodes take the value 0.
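
A sketch of the "winner takes all" rule described above (the activation values here are arbitrary):

 import numpy as np

 def winner_takes_all(activations):
     # The node with the highest calculated value fires (takes the value 1);
     # all other nodes take the value 0.
     out = np.zeros_like(activations)
     out[np.argmax(activations)] = 1.0
     return out

 print(winner_takes_all(np.array([0.2, 0.9, 0.4])))   # -> [0. 1. 0.]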

Typically the weights in a neural network are initially set to small random values; this represents the network knowing nothing. As the training process proceeds, these weights converge to values that allow the network to perform a useful computation. Thus it can be said that the neural network commences knowing nothing and moves on to gain some real knowledge.

Neural networks are particularly useful for dealing with bounded real-valued data, where a real-valued output is desired; in this way neural networks will perform classification by degrees, and are capable of expressing values equivalent to "not sure".

(More here on capabilities) Because the mapping from inputs to outputs is learned from examples rather than programmed, neural networks can produce correct output even for problems that humans do not know how to solve algorithmically.

Types of Neural Networks

The earliest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. In this way it can be considered the simplest kind of feedforward network. The sum of the products of the weights and the inputs is calculated in each node, and if the value is above some threshold (typically 0) the neuron fires and takes the value 1; otherwise it takes the value -1. This is accompanied by a learning algorithm which calculates the error between the calculated output and the sample output data, and uses this error to adjust the weights.
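
A minimal sketch of such a unit and its weight-update rule, again in Python with NumPy; the learning rate, the constant bias input, and the OR training data are illustrative choices rather than part of any particular formulation:

 import numpy as np

 def perceptron_output(weights, inputs):
     # Fire (+1) if the sum of products exceeds the threshold (here 0), else -1.
     return 1 if np.dot(weights, inputs) > 0 else -1

 def train_step(weights, inputs, target, learning_rate=0.1):
     # Adjust the weights in proportion to the error between the calculated
     # output and the sample output.
     error = target - perceptron_output(weights, inputs)
     return weights + learning_rate * error * inputs

 # Illustrative training data: logical OR, with targets coded as -1 / +1 and a
 # constant bias input appended so that a threshold can effectively be learned.
 samples = [([0.0, 0.0], -1), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
 weights = np.zeros(3)
 for _ in range(20):
     for x, t in samples:
         weights = train_step(weights, np.array(x + [1.0]), t)

 for x, t in samples:
     print(x, perceptron_output(weights, np.array(x + [1.0])), t)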

Single-layer perceptrons are only capable of learning linearly separable patterns; in 1969, in a famous monograph entitled "Perceptrons", Minsky and Papert showed that it was impossible for a single-layer perceptron network to learn an XOR function. They conjectured (incorrectly) that a similar result would hold true for a multi-layer perceptron network.

When a neural net is first started, it is nothing but a set of input nodes, hidden nodes, and output nodes. A node is just the term for one of the pseudo-neurons. An outside system (environmental sensors, or perhaps some other program) provides the input by placing values in the input nodes. From these values the network computes the values of the hidden nodes, and then of the output nodes.

Multi-layer perceptron networks use a variety of learning techniques, the most popular being backpropagation. Here the output values are compared with the correct answer, and the error is then fed back through the network, which adjusts the calculation performed by each node to make its output slightly closer to correct. After repeating this process many times, with many sets of input and output data, the network appears to "learn" the correct answers.
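
The following sketch trains a small multi-layer perceptron on the XOR function mentioned above, using backpropagation; the layer sizes, learning rate and number of iterations are illustrative assumptions:

 import numpy as np

 def sigmoid(x):
     return 1.0 / (1.0 + np.exp(-x))

 def add_bias(a):
     # Append a constant 1 to each row so each layer can learn a threshold.
     return np.hstack([a, np.ones((a.shape[0], 1))])

 # XOR: the function a single-layer perceptron cannot learn.
 X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
 T = np.array([[0], [1], [1], [0]], dtype=float)

 # Small random starting weights; at this point the network "knows nothing".
 rng = np.random.default_rng(1)
 w1 = rng.normal(scale=0.5, size=(3, 4))   # 2 inputs + bias -> 4 hidden nodes
 w2 = rng.normal(scale=0.5, size=(5, 1))   # 4 hidden nodes + bias -> 1 output
 learning_rate = 0.5

 for _ in range(10000):
     # Forward pass: sigmoid of the sum of products at each layer.
     hidden = sigmoid(add_bias(X) @ w1)
     output = sigmoid(add_bias(hidden) @ w2)

     # Backward pass: compare with the correct answers and feed the error
     # back through the network, adjusting each weight slightly.
     output_delta = (T - output) * output * (1 - output)
     hidden_delta = (output_delta @ w2[:-1].T) * hidden * (1 - hidden)

     w2 += learning_rate * add_bias(hidden).T @ output_delta
     w1 += learning_rate * add_bias(X).T @ hidden_delta

 print(np.round(output, 2))   # should approach [[0], [1], [1], [0]]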

It is provable that a multi-layer perceptron network (given sufficiently many hidden nodes) can approximate any continuous real-valued function on a bounded domain to arbitrary accuracy; this is known as the universal approximation theorem.
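
A common formal statement of this result, for a single hidden layer with a sigmoidal activation function, can be written roughly (in LaTeX notation) as:

 \text{for every continuous } f : [0,1]^n \to \mathbb{R} \text{ and every } \varepsilon > 0,
 \text{ there exist } N, \; v_i, b_i \in \mathbb{R}, \; w_i \in \mathbb{R}^n \text{ such that}
 \Bigl| \, f(x) - \sum_{i=1}^{N} v_i \, \sigma(w_i \cdot x + b_i) \, \Bigr| < \varepsilon
 \quad \text{for all } x \in [0,1]^n .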

Support Vector Machines

Committee Machines

Jordan and Elman Networks

Self-organizing Maps
Self-organizing maps (SOMs), invented by Professor Teuvo Kohonen, are a data processing technique which reduces the dimensionality of data through the use of self-organizing neural networks.
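
A minimal sketch of one SOM training pass, with an illustrative 10x10 grid, random training data, and assumed values for the learning rate and neighbourhood radius (not Kohonen's specific settings): each unit on the grid holds a weight vector, and the winning unit and its grid neighbours are pulled towards each input sample, so that nearby units end up representing similar data.

 import numpy as np

 rng = np.random.default_rng(0)
 grid_w, grid_h, dim = 10, 10, 3            # 10x10 map of 3-dimensional weights
 weights = rng.random((grid_w, grid_h, dim))
 coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                               indexing="ij"), axis=-1)

 def train_step(x, learning_rate=0.1, radius=2.0):
     global weights
     # Find the best-matching unit: the node whose weights are closest to x.
     dists = np.linalg.norm(weights - x, axis=-1)
     bmu = np.unravel_index(np.argmin(dists), dists.shape)
     # Move the best-matching unit and its neighbours towards x,
     # with a Gaussian falloff over distance on the grid.
     grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
     influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
     weights += learning_rate * influence[..., None] * (x - weights)

 for x in rng.random((500, dim)):           # illustrative random training data
     train_step(x)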

Other Network Types

Data Representation

[PAC Learning]?

