Prediction Parameters – Optimization (Professional Only)

The prediction parameters optimization tab is used to set up the criteria for the neural network optimization. For more information on optimization parameters, see Neural Network – Network Criteria Discussion (Advanced).

The parameters on this tab may or may not be visible, depending on the currently selected Prediction Wizard Interface Options. Press the Options button to change the Prediction Wizard Interface Options. For more information, see Prediction Wizard Interface Options.

  1. Set up the Optimization parameters.

Maximum number of inputs (Input Selection and Full Optimization Only) – Limits the number of inputs considered by the optimization at any one time. The more inputs you allow the network to optimize over, the more combinations the optimization process will need to try, so the length of the optimization process grows exponentially. In general, the neural network needs fewer than 10 good inputs to successfully model a problem.
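
To see why the search grows so quickly, here is a rough illustration in Python (the pool of 50 candidate inputs is a made-up assumption, and nothing here reflects NeuroShell Trader internals):

```python
from math import comb

# Hypothetical pool of 50 candidate inputs (illustrative only).
candidates = 50

for max_inputs in (3, 5, 10, 15):
    # Count every input subset of size 1 through max_inputs.
    subsets = sum(comb(candidates, k) for k in range(1, max_inputs + 1))
    print(f"max inputs = {max_inputs:2d}: {subsets:,} possible input subsets")
```

Even at 10 allowed inputs the subset count runs into the billions, which is why each additional allowed input lengthens the optimization so dramatically.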

Optimize for exactly – Forces the optimization to take a specified amount of time. Because of the nature of genetic optimization, it is impossible to determine the exact optimization time or even when the best neural network has been found. Without this parameter selected, optimization will automatically stop once it decides that a better neural network is unlikely to be found. Set this parameter to a longer time to be more certain that the best neural network has been found, or to a shorter time to stop optimization early, even though the best neural network may not yet have been found.

Optimize across all chart pages – Forces the optimization to choose the same inputs and parameters for each chart page. The optimization will find the inputs and parameters that perform best across all chart pages by optimizing the average result across all chart pages instead of the individual result for each chart page. Using this parameter will generally produce worse results than optimizing each chart page individually, but will provide more consistent and generalized inputs and parameters across the chart pages.

Shortest Average Trade Span – Causes the optimization to give preference to neural networks whose average trade span is greater than or equal to the Shortest Average Trade Span over the optimization period (the range of data used during the optimization process, set in the Dates tab). Use this option to decrease the number of trades if you find that optimization produces too many trades over the optimization period. It is recommended that you choose this option only if you are unable to achieve your goals using other methods.

Longest Average Trade Span – Causes the optimization to give preference to neural networks whose average trade span is less than or equal to the Longest Average Trade Span over the optimization period (the range of data used during the optimization process, set in the Dates tab). Use this option to increase the number of trades if you find that optimization produces too few trades over the optimization period. It is recommended that you choose this option only if you are unable to achieve your goals using other methods.

  2. Select the Optimization algorithm.
  • Gene Hunter Optimization – The classic genetic algorithm used in previous NeuroShell Trader versions. For more information about genetic algorithms, see the help topic What are Genetic Algorithms?

  • Evolution Strategy Optimization – Evolution Strategies are variants of genetic algorithms that use real numbers instead of integers in chromosomes, and therefore do not cross segments of a chromosome, but instead cross whole chromosomes. The individuals represent potential solutions to a problem. The individuals are tested by a fitness function, and the results determine whether an individual is included in the next generation of potential solutions (see the first sketch after this list). For more information, refer to the following book: Michalewicz, Z., “Genetic Algorithms + Data Structures = Evolution Programs”, Second, Extended Edition, Springer-Verlag, New York, NY, 1992, chapter 8, Evolution Strategies and Other Methods.

  • Swarm Optimization – Like genetic algorithms, Particle Swarm Optimization begins with a random population of solutions in the form of individuals. (Individuals represent a set of problem values that are being optimized.) As time progresses, the individuals “swarm” generally toward the best individuals, though not directly, as some randomness is involved. The best individuals are judged by a fitness function relevant to the problem, e.g., maximize the number of correct classifications or minimize the number of false negatives (see the second sketch after this list). For more information, refer to the following paper: Eberhart, R. C. and Kennedy, J., “A New Optimizer Using Particle Swarm Theory”, Proceedings of the Sixth International Symposium on Micromachine and Human Science, Nagoya, Japan, pp. 39–43, 1995.

  • Brute Optimization – This is an exhaustive brute force search of all possible parameter combinations, as used in most other technical analysis packages. Note that if a parameter range is a floating point range (e.g., 1.1 to 1.5), the brute force algorithm splits the range into 20 increments instead of searching the unlimited floating point precision possibilities. Note also that if you have a large number of parameters or a very wide parameter search space, brute force becomes effectively useless, as it could take weeks, months, or years to search every parameter combination of a large parameter space (see the third sketch after this list).
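
The three algorithms above lend themselves to short illustrative sketches. First, a minimal (mu + lambda) evolution strategy: chromosomes are vectors of real numbers, recombination combines whole chromosomes rather than segments, and a fitness function decides which individuals enter the next generation. The objective function and all constants below are illustrative stand-ins, not NeuroShell Trader internals:

```python
import random

def fitness(x):
    # Toy objective (higher is better): the negative sphere function,
    # maximized at x = (0, ..., 0).
    return -sum(v * v for v in x)

def evolve(dim=4, mu=5, lam=20, sigma=0.5, generations=100):
    # Each chromosome is a vector of real numbers, per the description above.
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            # Recombine two whole parent chromosomes (no segment crossover),
            # then apply Gaussian mutation to every real-valued gene.
            a, b = random.sample(population, 2)
            offspring.append([(x + y) / 2 + random.gauss(0, sigma)
                              for x, y in zip(a, b)])
        # (mu + lambda) selection: the fittest individuals survive.
        population = sorted(population + offspring, key=fitness, reverse=True)[:mu]
    return population[0]

best = evolve()
print(best, fitness(best))
```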
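
Second, a minimal particle swarm sketch in the spirit of Eberhart and Kennedy (1995): each particle drifts toward its own best-known position and the swarm's best-known position, with random perturbation along the way. The objective and the inertia/acceleration constants are illustrative choices, not the values NeuroShell Trader uses:

```python
import random

def objective(x):
    # Toy objective (lower is better): the sphere function.
    return sum(v * v for v in x)

def pso(dim=3, particles=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]           # each particle's best-known position
    gbest = min(pbest, key=objective)[:]  # the swarm's best-known position
    for _ in range(iterations):
        for i in range(particles):
            for d in range(dim):
                # Accelerate toward the personal and global bests,
                # scaled by random factors (the "not directly" randomness).
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

print(pso())
```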
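
Third, a brute-force sketch of the grid behavior described above: each floating point range is split into 20 increments and every combination is evaluated exhaustively. The two-parameter objective is a stand-in:

```python
from itertools import product

def grid(lo, hi, increments=20):
    # Split a floating point range into a fixed number of increments.
    step = (hi - lo) / increments
    return [lo + i * step for i in range(increments + 1)]

def objective(a, b):
    # Toy objective (lower is better): best at a = 1.3, b = 7.
    return abs(a - 1.3) + abs(b - 7)

# Exhaustively evaluate every combination of the two parameter grids.
best = min(product(grid(1.1, 1.5), grid(1, 10)), key=lambda p: objective(*p))
print(best)
```

With two parameters the grid has only 21 × 21 = 441 points, but each additional parameter multiplies the count by another 21, which is why brute force stalls on large parameter spaces.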

When you are satisfied with the prediction parameters to be used when training and optimizing, press the OK button to return to the Prediction Wizard.
Note:

  • During optimization, the objective function is calculated at exactly the Number of hidden nodes during training. During training, the objective function is calculated for every count from 0 up to the Number of hidden nodes during training, and the number of hidden nodes with the best results is used. This means that even though the number of hidden nodes during optimization is exactly the same as the maximum number during training, the training results may be better than the optimization results on the same training set (the sketch below illustrates why).
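
A small hypothetical sketch of this behavior (the evaluate function and the objective values are made up for illustration, not a NeuroShell Trader API):

```python
def optimization_result(evaluate, n_hidden):
    # Optimization: objective calculated at exactly n_hidden nodes.
    return evaluate(n_hidden)

def training_result(evaluate, n_hidden):
    # Training: objective calculated for 0 through n_hidden nodes,
    # and the best result is kept.
    return max(evaluate(h) for h in range(n_hidden + 1))

# Made-up objective values by hidden node count (higher is better).
scores = {0: 0.42, 1: 0.55, 2: 0.61, 3: 0.58}

print(optimization_result(scores.get, 3))  # 0.58
print(training_result(scores.get, 3))      # 0.61 (never worse than above)
```

Because training takes the maximum over a set that includes the single point optimization evaluates, the training result can never be worse than the optimization result on the same training set.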

Topics of Interest:
What are Neural Networks?
Neural Network – Network Criteria Discussion (General)
Neural Network – Network Criteria Discussion (Objective)
Neural Network – Network Criteria Discussion (Advanced)
Troubleshooting Your Model – What to Do if You Feel You Haven’t Been Successful
Using Predictions
