When you compare neural nets with regression (or, for that matter, with any other modeling technique), you have to be careful to compare apples to apples.
In all of our documentation, we emphasize evaluating your neural nets on “out-of-sample” data, not the data with which you train your net.
However, practitioners of regression analysis often do NOT do this. They report results on the regression training set, and sometimes do no out-of-sample testing at all. That is just as misleading as reporting the results of a neural network's training set.
Therefore, when you compare nets with regression, either evaluate both out-of-sample or evaluate both in-sample, on exactly the same data; otherwise the comparison isn't fair.
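For instance, here is a minimal sketch of a fair head-to-head, using scikit-learn's LinearRegression and MLPRegressor as illustrative stand-ins for your regression package and your net (the data and model settings are made up, not NeuroShell's):

```python
# Evaluate a neural net and a regression on the SAME out-of-sample data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=400)

# One split, reused for BOTH models: apples to apples.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

reg = LinearRegression().fit(X_tr, y_tr)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("regression out-of-sample R^2:", reg.score(X_te, y_te))
print("net        out-of-sample R^2:", net.score(X_te, y_te))
```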
NeuroShell 2 users have to be careful about something that NeuroShell Predictor and NeuroShell Classifier users don't have to worry about: calibration. In NeuroShell 2, calibration extracts a test set from the training set. Regression analysis and most other modeling techniques never do this. So to make sure you are comparing apples to apples, turn OFF calibration and train on the whole training set, just as regression does.
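For intuition only, here is a rough analogue in scikit-learn: the MLPRegressor early_stopping option carves a validation set out of the training data, much as calibration does. This is an analogy, not NeuroShell 2's actual mechanism:

```python
# Rough analogue of "calibration": early_stopping holds back a slice of
# the training data internally, so the net trains on fewer rows.
from sklearn.neural_network import MLPRegressor

# Calibration-style: 20% of the training rows are held back internally.
net_calibrated = MLPRegressor(hidden_layer_sizes=(10,), early_stopping=True,
                              validation_fraction=0.20, max_iter=2000)

# Fair head-to-head with regression: train on the WHOLE training set.
net_full = MLPRegressor(hidden_layer_sizes=(10,), early_stopping=False,
                        max_iter=2000)
```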
Here’s another thing NeuroShell 2 users have to worry about: activation functions in the output layer can make quite a difference. When predicting numeric amounts, the linear activation function is usually the best to use in the output layer (you’ll probably need a low learning rate and momentum). The logistic output function is best for classification problems (e.g., when comparing against logistic regression).
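A tiny numpy sketch of why the choice matters, using made-up pre-activation values: a logistic output is squashed into (0, 1), so it cannot reproduce unbounded numeric amounts, while a linear output can:

```python
import numpy as np

def linear(z):            # identity: suits numeric prediction
    return z

def logistic(z):          # sigmoid: suits classification / probabilities
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])   # example pre-activations
print("linear  :", linear(z))      # unbounded, can match any amount
print("logistic:", logistic(z))    # squashed into (0, 1)
```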
Note on the genetic method of the NeuroShell Predictor and Classifier: this unique method always trains everything in an out-of-sample mode; it essentially performs a “one-hold-out” technique, also called “jackknife”, “leave-one-out”, or “cross-validation”. If you train with this method, you are effectively looking at the training set out-of-sample. The same is true if you turn on enhanced generalization when you apply the net to the training set. The method is therefore great when you do not have many patterns to train on. However, the training set error statistics will not look as good as those from a method that does not perform one-hold-out.
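Here is a minimal sketch of the one-hold-out idea, using scikit-learn's LeaveOneOut with a plain linear model standing in for the (proprietary) genetic method: every pattern is predicted by a model trained on all the OTHER patterns.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))                 # small set, where LOO shines
y = X @ np.array([1.5, -2.0]) + rng.normal(scale=0.1, size=30)

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

# These "training set" statistics are really out-of-sample, so expect
# them to look worse than a naive in-sample fit would.
ss_res = np.sum((y - preds) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("one-hold-out R^2:", 1 - ss_res / ss_tot)
```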
And while we’re on statistics, we’ve seen many cases where “R squared” in our products is compared to “r squared” in other products, especially regression packages. They are different measures with different formulas: R squared is the coefficient of determination (one minus the ratio of the residual to the total sum of squares), while r squared is the squared correlation coefficient between actual and predicted values. The tricky thing is that the two coincide for ordinary linear regression! In non-linear models like neural nets, they AREN’T the same thing. Be very careful here!
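A worked example makes the difference concrete. Both measures are computed below from their standard formulas, and the prediction is deliberately biased (shifted by a constant) so the two values diverge sharply:

```python
import numpy as np

def big_R_squared(y, yhat):
    # Coefficient of determination: 1 - SS_residual / SS_total
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def little_r_squared(y, yhat):
    # Squared Pearson correlation between actual and predicted
    return np.corrcoef(y, yhat)[0, 1] ** 2

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
yhat = y + 2.0   # perfectly correlated with y, but shifted upward

print("R^2:", big_R_squared(y, yhat))     # -1.0: heavily penalized
print("r^2:", little_r_squared(y, yhat))  #  1.0: a "perfect" score
```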
Note to NeuroShell Trader users: although the NeuroShell Trader Professional can fire NeuroShell Predictor and Classifier nets, the “one-hold-out” method will not help you. In financial prediction it is not really out-of-sample to predict day X when day X-1 and day X+1 are in the training set; you can’t trade that way. That is why the Trader does not allow any “random extractions” of evaluation data.
Note to NeuroShell 2 financial users: since NeuroShell 2 does allow random extractions, your predictions will look much better than they really are if you use a randomly extracted production set! Don’t use random extractions for either the test set or the production set.
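Here is a sketch of the wrong and right ways to split time-ordered data, using index arrays only (the sizes are hypothetical):

```python
import numpy as np

n = 1000
idx = np.arange(n)                 # bars/days in chronological order

# Random extraction (do NOT do this for financial data): the held-out
# rows are surrounded by training rows and leak information.
rng = np.random.default_rng(0)
bad_test = rng.choice(idx, size=200, replace=False)
print("random test rows scattered through time:", np.sort(bad_test)[:5], "...")

# Chronological split: train on the past, evaluate on the future.
split = int(n * 0.8)
train_idx, test_idx = idx[:split], idx[split:]
print("train:", train_idx[0], "...", train_idx[-1])
print("test :", test_idx[0], "...", test_idx[-1])
```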
Scaling could also leave you comparing apples to oranges. The Predictor, Classifier, and Trader all scale data before building a model, and there is no way to turn that off. So your regression models should be scaled too, with something like the Z-score (that is standard statistical practice). You can turn off scaling in NeuroShell 2, but since regression models are usually scaled, leave it on. NeuroShell 2 also has clipping, which should be off, because regression models usually don’t clip. To make NeuroShell 2 use the Z-score, choose the scaling function that is the mean ± 1 standard deviation, and turn clipping off.
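A minimal Z-score sketch in numpy, with clipping included only so you can see what to turn off:

```python
import numpy as np

def zscore(x, clip_at=None):
    z = (x - x.mean()) / x.std()
    if clip_at is not None:                # clipping: leave this OFF when
        z = np.clip(z, -clip_at, clip_at)  # comparing against regression
    return z

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one outlier
print("z-scores, no clipping :", zscore(x))
print("z-scores, clipped at 1:", zscore(x, clip_at=1.0))
```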
Now here’s the last thing to be wary of. If your data is essentially linear, nothing can beat linear regression. You shouldn’t even be using neural nets on linear data, and no comparison can ever be fair. If you are supposed to be comparing methods, make sure you include non-linear data; neural nets are highly non-linear models that excel on it. If you have sparse linear data, neural nets will try to fit all the noise and build a non-linear model out of it, and then they’ll surely look worse than regression. If you suspect your data might be linear, make sure your nets are linear too, especially if there isn’t much training data (fewer than 300-500 training patterns) or if there are more than 5 inputs. The NeuroShell Predictor, Classifier, and Trader Professional are made linear by setting zero hidden neurons; there is no way to force the genetic method to be linear. In NeuroShell 2, all activation and scaling functions have to be linear to get a linear model.
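As an illustration of the sparse-linear-data trap, here is a synthetic example (the seeds, sizes, and models are illustrative only) where a flexible net typically trails plain linear regression out-of-sample:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))                 # few patterns, 5 inputs
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.0]) + rng.normal(scale=0.5, size=60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)

linear = LinearRegression().fit(X_tr, y_tr)
net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000,
                   random_state=1).fit(X_tr, y_tr)

print("linear regression R^2:", linear.score(X_te, y_te))
print("flexible net      R^2:", net.score(X_te, y_te))  # often lower here
```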
Final note to NeuroShell 2 users: if you have read all of the above and feel overwhelmed by issues like test sets, activation functions, scaling functions, clipping, learning rates, momentum, setting hidden neurons, etc., then you now know why we have been trying to get all of our NeuroShell 2 users, except college professors, to move to the NeuroShell Predictor and Classifier, or the NeuroShell Trader. The latter programs give you better models (when apples are compared to apples) without all the tweaking and knowledge required. (College professors have to teach the classic algorithms everyone else uses, and the tweaking gives them more to teach anyway!)