I. “April in Paris” sale
We’re having a sale, but you need to act fast because the sale ends on April Fools’ Day! You get 15% off NeuroShell Trader, Trader Professional, DayTrader Professional, or upgrades to any of those. The sale also applies to Trader add-ons and the AI Trilogy. A discounted AI Trilogy is an especially good deal because it is already a deeply discounted bundle of our most powerful and popular business and scientific AI products. You can read more about the sale on our websites if you wish (www.neuroshell.com and www.wardsystems.com).
Note: You must mention “April in Paris” when you order to receive the sale price.
********************************************************
II. Editorial Opinion from Steve Ward, CEO
Recently a user of NeuroShell 2 pointed out that we have released many new offerings for traders but no upgrades to NeuroShell 2 in quite some time. We responded that our “upgrades” to NeuroShell 2 had come in a new chassis, in the form of the NeuroShell Predictor and NeuroShell Classifier, which he had never purchased. It surprised me when the user said he thought NeuroShell 2 contained all of the algorithms in the Predictor and Classifier. In particular, he believed that the Turboprop algorithm in NeuroShell 2 is the same as Turboprop 2 in the Predictor, Classifier, and Trader. With that question in mind, I thought I should retell a long story we probably haven’t told often enough.
NeuroShell 2 was born in 1993 as a successor to our most popular product of all time: NeuroShell 1, a DOS product then called simply NeuroShell. Through several years and four major upgrades, NeuroShell 2 became an extremely popular general purpose neural net system. It contained a powerful enhanced backprop version which we named Turboprop. If you are a NeuroShell 2 user and you’ve never noticed how to turn on Turboprop, we aren’t a bit surprised. That’s because NeuroShell 2 has a multitude of knobs and switches to keep the neural net scientist of the 1990s very happy “tweaking” them way into the night.
NeuroShell 2 had so many options it was perfect for experimenters and college professors who wanted to teach with it. It is still the product of choice for those people, but it became a technical support nightmare as users attempted to use it to actually solve problems rather than experiment. We needed new algorithms and a new interface that was not only easier to use but more powerful at the same time.
Frankly, I grew tired of reading technical papers written with NeuroShell 2 in which the writer really didn’t understand the system! Fortunately, the ones that made it to print got good results in spite of the occasional misuse of the system. We imagined how much better their work would have been if they could concentrate on the basics of modeling instead of the basics of neural nets. We imagined a system which got great results with very little to tweak. We imagined a system with two parts: one optimized for prediction and one optimized for classification. We imagined the NeuroShell Predictor and the NeuroShell Classifier.
Then we spent about four years building a new neural net algorithm to go with the new system. I personally worked on this paradigm every chance I got, even taking my laptop to the beach so I could work on it there! I set out just to improve on backprop, but I wound up building a neural net that was one or two orders of magnitude faster than backprop, while being more accurate and more reliable. Best of all, it didn’t have all of those features that required endless guesstimates to get good results.
For example, the days of trying to figure out how to set learning rates, momentum, initial weight ranges, etc. were suddenly history. Suddenly it wasn’t even necessary to guess at how many layers of hidden neurons to use, or how many neurons to put in each layer. Suddenly, it didn’t make big differences when you changed activation functions or even scaling functions. You just got the best net almost every time on the first try! That meant that users could spend their time not on tweaking paradigms, but on the real issue – how to select and present inputs and training sets.
It was a major milestone, and we wanted a meaningful name. This really was a turbo-driven algorithm, and we really liked the name Turboprop, so we decided on Turboprop 2, trying to be careful in those early years to distinguish our new paradigm from the older one in NeuroShell 2. The two are not related in any way.
Then we renovated the GRNN and PNN algorithms from NeuroShell 2 and put the newer, more capable versions in the NeuroShell Predictor and Classifier as the “genetic” method. No more extracting test sets to optimize on: these reincarnations use a “one hold out” jackknife, so you can use all of your data for training while the genetic algorithm figures out the importance of each input. Best of all, for the first time it was possible to use training sets with fewer than 100 exemplars without serious risk of overfitting. We were careful not even to call PNN and GRNN neural nets, since they originally evolved from an old, little known statistical method. Users who were afraid to tell their boss they were using neural nets could now say they were using a non-linear statistical modeling technique.
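For the technically curious, here is a rough Python sketch of the general idea. It is not Ward Systems code: the function names, population sizes, mutation rate, and demo data are all illustrative assumptions. A GRNN-style kernel regression is scored with a leave-one-out (“one hold out”) error so every row stays in the training set, while a toy genetic search evolves one smoothing factor per input; the factors that survive indicate how important each input is.

import numpy as np

rng = np.random.default_rng(0)

def grnn_loo_error(X, y, smoothing):
    # Leave-one-out ("one hold out") error of a GRNN with one smoothing
    # factor per input; each training row is predicted by all the others.
    n = len(y)
    err = 0.0
    for i in range(n):
        diff = (X - X[i]) * smoothing        # scale each input by its factor
        d2 = np.sum(diff * diff, axis=1)
        w = np.exp(-d2)
        w[i] = 0.0                           # exclude the held-out row itself
        pred = np.dot(w, y) / (w.sum() + 1e-12)
        err += (pred - y[i]) ** 2
    return err / n

def evolve_smoothing(X, y, pop=30, gens=40, mut=0.3):
    # Toy genetic search over per-input smoothing factors: keep the fitter
    # half, mutate it, repeat. Larger surviving factors mark important inputs.
    population = rng.uniform(0.0, 3.0, size=(pop, X.shape[1]))
    for _ in range(gens):
        fitness = np.array([grnn_loo_error(X, y, s) for s in population])
        parents = population[np.argsort(fitness)[: pop // 2]]   # lower error = fitter
        children = np.clip(parents + rng.normal(0.0, mut, parents.shape), 0.0, None)
        population = np.vstack([parents, children])
    return min(population, key=lambda s: grnn_loo_error(X, y, s))

# Tiny demo: the target depends only on the first of three inputs.
X = rng.uniform(-1, 1, size=(60, 3))
y = np.sin(3 * X[:, 0])
print("evolved input importances:", np.round(evolve_smoothing(X, y), 2))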
But as we celebrated what was probably the most important neural network breakthrough since 1988, we made a whopper of a mistake, the biggest one since we started in business in 1982! We introduced our new software as the NeuroShell Easy Predictor and Easy Classifier, and we renamed NeuroShell 2 to NeuroShell 2 Professional. People erroneously assumed that “Easy” meant simple-minded, and we couldn’t even convince new and novice users to purchase the new software instead of NeuroShell 2. The old salts among our neural net users couldn’t bring themselves to believe that you can build great nets without endless tweaking. So after about a year we stopped misleading people by taking “Easy” and “Professional” out of the names.
Later we put Turboprop 2 in the NeuroShell Trader, since it was the only algorithm fast enough to stand up to the multitude of testing required in a financial environment. Of course, we reintroduced GRNN and PNN in yet a new formulation in our Adaptive Net Indicators add-on.
Today, we still sell lots of NeuroShell 2s to colleges and universities, especially in the form of “lab packs” so the whole class can use the software. Why do the professors like it so much? One of them put it best: “If I used the NeuroShell Predictor and Classifier in my class, I couldn’t make it last a whole semester!”
Now I happen to disagree with that, because I think that instead of being taught so much about net tweaking, neurons, weights, activation functions, etc., students need to be taught how to select, present, and preprocess inputs, training sets, and evaluation sets. There is also much to learn about measures of success, over-fitting, under-fitting, prediction vs. classification, outliers, and other more important aspects of modeling. I’m afraid that what might be happening is analogous to a driving course that concentrates on the theory of the internal combustion engine, while driving laws and practical road experience are only briefly touched upon.
Too many people are second-guessing educators today, so I’ll refrain from doing so further. However, I may create my own course for American Web Intelligence this summer. Send us an email if you think you’d enroll in such an online course later in the fall, built around the NeuroShell Predictor, Classifier, and Runtime Server. I’ll see what kind of response I get.
*******************************************************
III. March’s featured product – Neural Indicators Add-on
The Neural Indicators add-on for the NeuroShell Trader Professional and DayTrader Professional is our own favorite neural net for financial modeling. In many ways we actually like it more than Turboprop 2, which comes standard. The reason is that you do not have to predict anything, such as tomorrow’s close. It just gives you buy and sell signals based directly on the inputs.
So how can it give you buy and sell signals if it doesn’t make any kind of prediction? Think about how the genetic algorithm optimizer finds indicators, rules, etc. that make the most profit when they’re used in a trading strategy. All we do is tell the optimizer to build the neural net that makes the most money when its buy and sell signals are used in a trading strategy.
The neural net itself is pretty dumb and isn’t learning a thing. All it does at first is put out random signals. Then the optimizer changes the weights in the neural net until those signals are no longer random and start making money. The optimizer continues improving the net until the signals make more and more profit. The net never really learns anything internally; instead, the optimizer evolves a net that works better in its environment, where the fittest nets are the ones that make the most money.
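If you like to see things in code, here is a toy Python sketch of that evolutionary loop. It is not the NeuroShell Trader’s actual optimizer: the net size, population size, and random-walk price data are made-up assumptions, and for brevity it evolves a single long/flat net rather than the separate entry and exit nets described below. The point is simply that the fitness the genetic algorithm maximizes is the simulated profit of the net’s own signals, not a prediction error.

import numpy as np

rng = np.random.default_rng(1)

def signals(weights, inputs):
    # One hidden layer of 4 tanh units; output > 0 means "be long this bar".
    n_in = inputs.shape[1]
    w1 = weights[: n_in * 4].reshape(n_in, 4)
    w2 = weights[n_in * 4 :]
    return (np.tanh(inputs @ w1) @ w2 > 0).astype(float)   # 1 = long, 0 = flat

def profit(weights, inputs, next_returns):
    # Fitness: total return earned over the bars where the net says "long".
    return float(np.sum(signals(weights, inputs) * next_returns))

def evolve(inputs, next_returns, pop=40, gens=60, mut=0.2):
    # Toy genetic algorithm: keep the most profitable half, mutate it, repeat.
    n_weights = inputs.shape[1] * 4 + 4
    population = rng.normal(0.0, 1.0, size=(pop, n_weights))
    for _ in range(gens):
        fitness = np.array([profit(w, inputs, next_returns) for w in population])
        parents = population[np.argsort(fitness)[-(pop // 2):]]   # fittest half
        children = parents + rng.normal(0.0, mut, parents.shape)
        population = np.vstack([parents, children])
    return max(population, key=lambda w: profit(w, inputs, next_returns))

# Toy data: inputs are the last three bar-to-bar changes; fitness uses the
# change on the following bar. Random-walk prices, purely illustrative.
prices = np.cumsum(rng.normal(0.1, 1.0, 300)) + 100
rets = np.diff(prices)
inputs = np.column_stack([rets[2:-1], rets[1:-2], rets[:-3]])
next_returns = rets[3:]
best = evolve(inputs, next_returns)
print("simulated profit of the evolved net:", round(profit(best, inputs, next_returns), 2))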
Actually, in a trading strategy the optimizer evolves separate nets for enter long, enter short, exit long, and exit short. So you’ll be inserting Neural Indicators as conditions for those entries and exits, which means you’ll be using at least two at a time and maybe four. Each net evolves to handle only its specialty entry or exit, so we don’t need to evolve one net that knows how to do everything. It’s like an evolved panel of experts helping you make money.
You put inputs into these nets as usual, and the optimizer works on them at the same time it is evolving the net. Your training set is just the optimization period, and your evaluation set is just the backtest period.
There are several neural net architectures you can choose from, based on the old backprop nets we used in NeuroShell 2, except that the training is done by a genetic algorithm instead of backprop.
For more details on Neural Indicators, see www.neuroshell.com in the add-ons section. Paste the following link into your browser:
http://www.neuroshell.com/addons.asp?ne
*******************************************************
IV. Traders’ Tips
The NeuroShell Trader places a stop on the same bar as the entry only if the stop is not based upon the Trading Strategy itself. Any stop that is based on the Trading Strategy (e.g., a percent trailing stop, or the entry price plus or minus some amount) will not be placed until the next bar after the entry. However, a stop price that is not based on the Trading Strategy (e.g., the low price minus X points, or a moving average) can be placed on the same bar as the entry.
Stop prices for the next bar are computed at the end of the previous bar. In the first case above, the stop value can’t be computed before the bar because the entry occurs inside the bar (and the NeuroShell Trader only performs calculations at the end of each bar). In the second case, however, the stop price can be calculated before the bar because no information about the entry price, which hasn’t occurred yet, is required.
The same is true of limit and stop limit orders.
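A tiny made-up example may help make the timing clear. This is not NeuroShell Trader code; the bar values and the X offset below are invented, and the two cases simply show which stop price can already be known when the entry bar begins.

# Illustration only: all order prices are computed at the end of the previous
# bar, so a stop can be active on the entry bar only if its formula needs
# nothing from that entry.
prev_bar = {"high": 102.0, "low": 99.5, "close": 101.0}
X = 0.75

# Stop based only on completed data (prior bar's low minus X): computable
# before the entry bar opens, so it can protect the position on that same bar.
same_bar_stop = prev_bar["low"] - X
print("stop active on the entry bar:", same_bar_stop)

# Stop based on the Trading Strategy (entry price minus X): the entry price is
# only known after the fill occurs inside the bar, so the earliest this stop
# can take effect is the next bar, once the entry bar has closed.
entry_price = 101.6                      # becomes known inside the entry bar
next_bar_stop = entry_price - X
print("stop active starting the next bar:", next_bar_stop)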
*******************************************************
V. No more newsletters in the future?
No, we aren’t ending this newsletter. But you will think we did if you change your email address and don’t send us your new one!
*******************************************************