Please Note: Blog posts are not selected, edited or screened by Seeking Alpha editors.

# Predicting probable outcomes: how it works within the F-Shift Forecaster

What's 18% of 60? The answer is 10.8. How about 21% of 60? The answer is 12.6. The problem with both results is that they are NOT whole numbers. That is not an issue in itself, but within the F-Shift Forecaster© it produces nonsensical results, because the Forecaster treats these values as counts of data points. To address this challenge, I took the 60 historical data points that I populate the Forecaster with and multiplied each number by 10, thereby producing a data "pool" of 600 from which the Forecaster randomly draws to forecast where a particular stock, future or index has the highest probability of ending up over the next twelve periods. Those periods can be daily, weekly or monthly, depending on your preferred time frame.

As you know by now, the Forecaster uses a function within the Excel platform called RANDBETWEEN, which takes "X" number of data points (in this case 60) and randomly selects any of those 60 data points, in any order, each time you tap the F9 key on your keyboard. Unlike its cousin RAND, which generates ANY random number, RANDBETWEEN selects only from the "pool" of 60.

This still does NOT explain why I multiplied the 60 data points by 10. The simple reason is the weighting component that I built into the Forecaster's capabilities. Weighting is nothing more than an "importance" factor, which I have written about extensively in my previous blogs. Without the weighting component, each time I tap the F9 key to generate the RANDBETWEEN outcome, each of the 60 data points has an EQUAL probability of being selected. I want to change that. I want the more RECENT data points, a.k.a. the percentage changes in closing prices, to carry more importance. By more importance, I mean I want to INCREASE the likelihood, or probability, of the most recent closing prices being selected relative to the oldest closing prices in that group of 60 historical data points. Again, refer to my previous blogs concerning weighting.
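The unweighted draw described above can be sketched in Python. This is not the spreadsheet itself; the variable names and sample values are illustrative stand-ins for the Forecaster's data.

```python
import random

# Illustrative stand-in for the 60 historical percentage changes in
# closing price that populate the Forecaster (values are made up).
data_points = [round(random.uniform(-3.0, 3.0), 1) for _ in range(60)]

# Rough equivalent of indexing the data with RANDBETWEEN(1, 60):
# every one of the 60 points has an equal 1-in-60 chance per draw,
# just as each F9 tap gives every point the same probability.
def draw(points):
    return points[random.randrange(len(points))]

# One simulated twelve-period forecast path.
forecast = [draw(data_points) for _ in range(12)]
print(forecast)
```

Each call to `draw` is one "F9 tap"; twelve draws give one possible twelve-period path.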
So, with that objective in mind, I added weighting spinners to the Forecaster platform that define how much greater a chance the more recent data points have of being selected. Here's how I achieved this objective.

First, I took the core 60 percentage changes in closing prices, or data points, and divided them into 10 equal "groupings" of 6 each. Then, as previously mentioned, I multiplied each of those groupings by 10. So now the most recent grouping - let's call it "grouping #1" - consists of 60 numbers but really only 6 different data points, each appearing ten times (6 x 10 = 60). For example, if the most recent grouping of 6 data points within the original 60 were the following (1.2%, 1.5%, -2.1%, 1%, -1.4%, -1.1%), then, as already mentioned, each would have an equal chance of being selected.

Now, if I wanted to take those same 6 results and make them more important because they are the most recent percentage changes in closing prices, I could take that grouping of 6 (which represents one tenth of the 60) and weight it at 18%. This increases the likelihood of any of those 6 results being selected by 8 percentage points (10 groupings of 6 equals 60, or 10% each). This is how I began this post, by asking "What is 18% of 60?" Telling the Forecaster to weight the first grouping of 6 data points to 10.8 made no sense at all, which led me to apply a multiple of 10 to all 60 data points. Everything else remains the same, except that the groupings are all equal to 60 unless they are weighted.

So the new question becomes "What's 18% of 600?" The answer is 108. Now the Forecaster can adjust the 600 data points so that the first 108 data points are grouping #1 - which, if you remember, would normally carry only an equal share of 60 alongside its counterparts. When the RANDBETWEEN function is now engaged, we have "weighted" the 600 numbers from which it has to choose, so that there are MORE of the 6 most recent data points (listed above) within the probable-outcome forecast! The great thing about this platform is the user's ability to define his or her own preference as to the size of the weighting across all 10 groupings of 6!
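The pool construction above can be sketched in Python. The grouping #1 values come from the example in the text; the placeholder values for the other nine groupings, and the simple fill of the remaining slots, are illustrative assumptions rather than the spreadsheet's exact layout.

```python
# Build the 600-entry pool with grouping #1 weighted at 18%.
recent_six = [1.2, 1.5, -2.1, 1.0, -1.4, -1.1]   # grouping #1 (most recent)
other_points = [10.0 + i for i in range(54)]      # groupings #2-#10 (placeholders)

POOL_SIZE = 600
WEIGHT_PCT = 18                                   # spinner setting for grouping #1

slots_group1 = POOL_SIZE * WEIGHT_PCT // 100      # 18% of 600 = 108 slots
slots_rest = POOL_SIZE - slots_group1             # 492 slots for the other 54 points

# Grouping #1 fills 108 slots, so each of its 6 points appears 18 times
# instead of the unweighted 10; the remaining points share the rest.
pool = [recent_six[i % 6] for i in range(slots_group1)]
pool += [other_points[i % 54] for i in range(slots_rest)]

assert len(pool) == POOL_SIZE
```

A uniform random draw from `pool` now picks one of the 6 most recent data points 18% of the time instead of 10%, which is exactly the effect of the spinner setting.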

Along with the weighting component, I have added a "Biasing" spinner as well, which allows you, the trader, to override the results with an opinion or bias. Personally, I will only use this tool when things reach extreme levels. For example, if a market is continually rallying, as it is now, we reach extremely overbought conditions. My experience has taught me that nothing goes up (or sells off) forever, and that profit taking, bad news or plain technical levels would suggest a pullback is at hand over the next "X" number of periods. With that opinion in mind, I can use the BIAS spinner and adjust the Forecaster results downward by, say, -5% in anticipation of any of those events coming into play.

The F-Shift Forecaster© does NOT have the ability to form an opinion - that's the beauty of the software. It populates with actual historical data and then randomly generates probable outcomes, with or without the weighting component. When the end user chooses to use the BIASING feature, he or she is projecting an opinion into the analysis. That is fine, but remember: you are skewing actual results based on what you think has an increased probability of happening, and you should trade accordingly. This feature actually came about through a number of requests from professionals who stated that there should always be some degree of human override should one wish it, and I agree.

My approach is to first analyze the core results without weighting. Next, I compare those numbers against results with a weighting component. I prefer to weight the first 2 groupings of 6 - the 12 most recent data points, which, when multiplied by 10, become the most recent 120 data points of the 600. Remember: each time the F9 key is tapped, any one of the 60 core numbers can be selected, and by multiplying each of those 60 by 10 we have created a "pool" that is conducive to weighting.
By weighting the most recent grouping of 6 data points at, say, 15% and the next grouping of 6 at, say, 12%, I have effectively increased the importance of the 2 most recent groupings to 27% of the 600 total data points (15% weighting = 90 data points, 12% weighting = 72, total 162, or 27%). Finally, I will review those weighted results against any bias I may wish to impose on the overall outlook.
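The two-grouping arithmetic, and one possible reading of the bias override, can be sketched as follows. The variable names are illustrative, and the additive shift for the bias is an assumption about how the spinner adjusts the result, not a confirmed detail of the software.

```python
POOL_SIZE = 600
W1, W2 = 15, 12                      # spinner weights for groupings #1 and #2

slots_g1 = POOL_SIZE * W1 // 100     # 15% of 600 = 90 slots
slots_g2 = POOL_SIZE * W2 // 100     # 12% of 600 = 72 slots
combined = slots_g1 + slots_g2       # 162 of the 600 slots
share = combined / POOL_SIZE         # 0.27, i.e. 27% of the pool

# A BIAS spinner setting of -5% shifts the forecast outcome downward;
# the additive shift below is an illustrative reading of that override.
bias_pct = -5.0
raw_forecast_pct = 2.3               # example unbiased forecast result
biased_forecast_pct = raw_forecast_pct + bias_pct

print(slots_g1, slots_g2, combined, share, biased_forecast_pct)
```

This makes the earlier correction easy to verify: 90 + 72 = 162 slots out of 600, or 27% of the pool, for the 12 most recent data points.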

As always - thanks for reading along - I've tried to be as thorough yet straightforward as possible (if there can be such a thing)! If you click the link below, you can watch me visually demonstrate the use of this tool to broaden your understanding of the power within this application. If you are reading this before viewing the previous web tutorial, I strongly suggest you click this link http://www.screencast.... to bring yourself up to speed on the use of the F-Shift Forecaster©. Look for 2 more web tutorials later this week on the dynamic use of the F-Shift Forecaster AND the results of actual forecasts that I made using the F-Shift Forecaster© on IBM, RIMM and BA.

http://www.screencast....