The network begins by finding linear relationships between the inputs and the output. Weight values are assigned to the links between the input and output neurons. Once those relationships are found, neurons are added to the hidden layer so that nonlinear relationships can also be captured. Input values in the first layer are multiplied by the weights and passed to the second (hidden) layer. Neurons in the hidden layer “fire,” or produce outputs, based on the sum of the weighted values passed to them. The hidden layer passes values to the output layer in the same fashion, and the output layer produces the desired results (predictions).
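The forward pass described above can be sketched in a few lines. This is a minimal illustration, not any particular library’s implementation; the weight values and the sigmoid activation are assumptions chosen for the example.

```python
import math

def sigmoid(x):
    # Squashing activation: the neuron "fires" more strongly
    # as the weighted sum of its inputs grows.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    # Each hidden neuron sums its weighted inputs, then applies the activation.
    hidden = [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
              for weights in w_hidden]
    # The output layer treats the hidden values the same way
    # and produces the prediction.
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

# Two inputs, two hidden neurons, one output (illustrative weights).
prediction = forward([0.5, 0.8],
                     w_hidden=[[0.4, -0.2], [0.1, 0.9]],
                     w_output=[0.7, -0.3])
print(prediction)
```

With a sigmoid output, the prediction always falls between 0 and 1, which is why such networks are often used for classification-style outputs.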
The network “learns” by adjusting the interconnection weights between layers. The answers the network is producing are repeatedly compared with the correct answers, and each time the connecting weights are adjusted slightly in the direction of the correct answers. Additional hidden neurons are added as necessary to capture features in the data set.
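The learning loop can be sketched with a single linear neuron and the delta rule, one simple form of “adjusting the weights slightly in the direction of the correct answers.” The toy data set (targets equal to 2·x1 + 1·x2) and the learning rate are assumptions for illustration.

```python
# Toy training set: each target is exactly 2*x1 + 1*x2.
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]
weights = [0.0, 0.0]
rate = 0.1  # how "slightly" each comparison moves the weights

for _ in range(200):  # repeated comparisons with the correct answers
    for inputs, target in samples:
        predicted = sum(w * x for w, x in zip(weights, inputs))
        error = target - predicted
        # Nudge each weight in the direction that reduces the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]

print(weights)  # converges toward [2.0, 1.0]
```

After enough passes the adjustments shrink toward zero and the weights stabilize, which is the “stable set of weights” the next paragraph refers to.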
Eventually, if the problem can be learned, a stable set of weights evolves that produces good answers for all of the sample decisions or predictions. The real power of neural networks becomes evident when the trained network produces good results for data it has never “seen” before.
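Generalization to unseen data can be checked directly: hold an input out of training and compare the network’s answer against the known rule. The sketch below fits a linear neuron (via the same kind of incremental weight adjustment) on a few samples of target = 2·x1 + 1·x2, then queries an input that never appeared in training; all values here are illustrative assumptions.

```python
# Training samples of the rule target = 2*x1 + 1*x2.
train = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]
weights = [0.0, 0.0]

for _ in range(500):
    for x, t in train:
        error = t - sum(w * xi for w, xi in zip(weights, x))
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]

unseen = [2.0, 1.0]  # never appears in the training set
prediction = sum(w * xi for w, xi in zip(weights, unseen))
print(prediction)  # close to the true answer, 2*2 + 1*1 = 5
```

The network was never shown the input [2, 1], yet its stable weights encode the underlying relationship well enough to answer correctly.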