So this is really killing me at the moment.
In a 6/45 game I've worked out:
- How to consistently get 5 numbers from a selection pool of between 18 and 24 numbers. The 5-number hit comes up on either the next game or the one that follows, consistently!
- How to intermittently get 6 numbers from a selection pool of between 18 and 24 numbers. The 6-number hit comes up randomly; I cannot get a fix on what triggers it.
What this shows me is that I have found a way to reduce the number pool with good consistency, but it does not bring the pool down small enough to be able to play ALL the numbers at once. So I'm trying to figure out a next step in my selection process to cull further numbers from the pool.
Firstly, I need to get a consistent 6/xx number combination; after all, if I start with all 45 numbers I can hit 6 numbers every time ;-)
I need to get down to, say, 24 numbers containing a 6-number winning combination consistently within the next 4 games. Beyond that, the "dynamic links" break down.
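For a sense of how hard that target is, the baseline odds that a randomly chosen pool of that size happens to cover all 6 winning numbers can be worked out with binomial coefficients. This is just a back-of-envelope sketch, not part of my selection process:

```python
from math import comb

# Baseline: chance that a random pool of k numbers (out of n) happens to
# contain all 6 winning numbers of a single 6/n draw.
def pool_hit_prob(n: int, k: int, hits: int = 6) -> float:
    # pools that contain a fixed 6-number set, divided by all possible pools
    return comb(n - hits, k - hits) / comb(n, k)

p24 = pool_hit_prob(45, 24)  # about 0.0165, roughly 1 draw in 60
p18 = pool_hit_prob(45, 18)  # about 0.0023, roughly 1 draw in 440
# Chance of at least one full cover somewhere in the next 4 games, pool of 24:
p4 = 1 - (1 - p24) ** 4      # about 0.064
```

So any method that covers all 6 winners in a 24-number pool much more often than about 1 game in 15 (over a 4-game window) is beating the random baseline.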
My selection process is based on non-dimensional dynamic data sets. It's a reasonably complex process to figure out, but once you have the logic and the probabilities laid out correctly, you look at it and think: wow, that's actually not that complex in hindsight.
My findings are showing me that there is a link between past results and future outcomes. I have found this not just for my target game but also for other sample games I have run the same process on. It seems to be independent of the selection or drop process used for the actual draws. I know this sounds daft and illogical, but I cannot ignore what I have found.
If my results had been inconsistent I would say "yeah, good fluke", but they are testing out with good consistency.
I have back-traced all my equations to double-check that I have not inadvertently forward-contaminated my results, that is, included future results somehow in my historical data analysis. All good, no contamination. In the past I have suddenly gotten some excellent results, only to find a future contamination giving an unfair weight to the algorithms used.
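That kind of check is worth building into the harness itself. A minimal walk-forward loop (the names and structure below are a placeholder sketch, not my actual code) guarantees by construction that each prediction only ever sees draws strictly before its target:

```python
# Walk-forward sketch: for each target draw, build the pool ONLY from
# strictly earlier draws, so no future result can leak into the prediction.
from typing import Callable

def walk_forward(history: list[set[int]],
                 build_pool: Callable[[list[set[int]]], set[int]],
                 warmup: int = 50) -> list[int]:
    """Return, per target draw, how many winning numbers the pool caught."""
    hits = []
    for t in range(warmup, len(history)):
        pool = build_pool(history[:t])       # past draws only: indices 0..t-1
        hits.append(len(pool & history[t]))  # score against draw t afterwards
    return hits
```

Any pool-building rule plugged in here is contamination-free by construction, because it is never handed `history[t]` or anything later.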
Essentially there seem to be two main contributors to my outcomes, which I combine into a final selection data set of numbers. One contributor is a well-studied approach using Bayesian reduction on the actual numbers, essentially a hot/cold analysis that pretty much anyone can do. The other is a completely different kind of analysis, driven not by the actual numbers themselves but by a seemingly illogical interpretation of the numbers' behaviour patterns.
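The hot/cold half is simple to illustrate. A bare-bones frequency count over a recent window, with the pool taken as the top-k numbers (the window size and cut-off here are placeholder choices, not my tuned values):

```python
from collections import Counter

def hot_cold_scores(history: list[set[int]], n: int = 45,
                    window: int = 30) -> dict[int, int]:
    """Count how often each of the n numbers appeared in the last `window` draws."""
    counts = Counter()
    for draw in history[-window:]:
        counts.update(draw)
    return {num: counts.get(num, 0) for num in range(1, n + 1)}

# "Hot" numbers are those with the highest counts; a candidate pool is the top-k.
def top_pool(scores: dict[int, int], k: int = 24) -> set[int]:
    return set(sorted(scores, key=scores.get, reverse=True)[:k])
```

On its own this is the part "most people can do"; the second, pattern-based contributor is what I combine with it.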
Independently, neither approach gains much of an advantage, but combined they seem to work very well.
I have recently been applying a weighting ratio to each method and testing the outcomes; one of the methods has a stronger influence than the other in maintaining the best final results. I am trying to find the ratio that gives the optimum outcome, but I suspect it will be a dynamic variable that I need to track through the historical outcomes to see if it has a pattern as well; that way I can optimise the ratio based on historical outcomes.
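The ratio blend, and the scan for whichever ratio did best historically, can be sketched like this (the two score inputs and the evaluation function are stand-ins, not my actual methods):

```python
def blend(score_a: dict[int, float], score_b: dict[int, float],
          w: float) -> dict[int, float]:
    """Weighted combination: weight w toward method A, (1 - w) toward method B.
    Keys are taken from score_a for brevity."""
    return {n: w * score_a.get(n, 0.0) + (1 - w) * score_b.get(n, 0.0)
            for n in score_a}

def best_ratio(eval_hits, ratios=None) -> float:
    """Scan candidate ratios and return the one with the best historical score.
    eval_hits(w) should replay past draws with weight w and return total hits."""
    ratios = ratios or [i / 10 for i in range(11)]
    return max(ratios, key=eval_hits)
```

Tracking `best_ratio` over a sliding historical window would show whether the optimum weight drifts, which is the "dynamic variable" idea above.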
So whilst what I'm doing will not win me a grand prize, since playing such large number sets is very impractical, it does encourage me to keep chipping away at the process. It is showing me that I have found a way to improve the odds significantly, but not sufficiently to make the sets playable.
It could be that what I have implemented hits a wall, and that beyond what I have done there is no way to gain further improvements. I am unsure of this at the moment, though. I have the same gut instinct churning around in me that I had when I first attempted to find a pattern: that sense that there is something there, just beyond the horizon of my vision.
Anyway, just an update on my current progress.
It's all very large...