Thanks for all the praise Vince, a bit over the top, but nice to read.

lkaufman wrote:
At least, if this is not so, then Rybka, Ippo, Ivanhoe, Houdini, and Critter are all doing something silly. I don't believe this.

BubbaTough wrote:
Your conclusion may (or may not) be right, but I abhor your reasoning. You give far too much respect to the techniques used in the top couple of engines, in my opinion. I think it is more useful, and more accurate, to assume almost everything in the top engines can be improved until you prove to your satisfaction otherwise. If you look at the best programs of 10 years ago and identify the things they got perfect, you will find very few. There is no reason the same won't be true 10 years from now.

-Sam

diep wrote:
The reason why today's top engines basically do trivial things, like toying with how much to reduce (which is trivial to experiment with, and as I understand it many of those experiments never get anywhere near a mathematical optimum), is a matter of time.
It's easy and braindead stuff to experiment with. Of any other algorithm that is promising yet that nobody really got to work well, such as multi-cut, you hear nothing. I'm not sure I saw it correctly, but Stockfish could be using it.

If you have a new algorithm that no one has ever tried before, and I really mean an ALGORITHM, not merely tweaking a few parameters (which is what happens here and what the discussion was about), such research eats massive amounts of time. Full-time work, I'd argue.
As for the engines of 10 years ago, you should give them a tad more credit. It was difficult to test anything back then, as very few people had enough hardware to do so. There was a split between the engines of that era: a few focused on searching deeper by cutting the branching factor, while a number combined all of this with trying to find the maximum amount of tactics.

If one thing was really well optimized in most engines, it was finding the maximum amount of tactics. Today's engines need 2 plies to find what some engines back then saw within 1 ply, especially near the leaves.

An engine really well optimized for finding tactics back then was Rebel. Its entire pruning system was based upon not missing tactics; maybe Ed wants to show some gems there, showing how efficiently Rebel searched for tactics back then.
What most here don't realize is that Ed invented his own search algorithm for this. Maybe Ed wants to comment on that, as his homepage of a few years ago showed the correct tables yet didn't clearly describe the algorithm, at least the last time I checked (which is well over a year ago).

When I analyzed his algorithm a decade ago, it soon showed the brilliance of Ed in the late 80s and early 90s: it prunes far more than any of today's reduction systems while not missing tactics. No reduction system of today can do that for you. Today's reduction systems are very easy to experiment with; no high IQ needed.

This is in contrast to what Ed has been doing. It's not a trivial algorithm to invent. The fact that we are over 25 years further now and no one has ever posted anything similar to it is self-explanatory.
However, it has two Achilles' heels: it doesn't work very well together with a hash table, and though nullmove is possible, it isn't as efficient as in a normal depth-limited search. Basically, today's search systems are totally based upon hash-table storage; without an efficient hash table you won't get anywhere close to 30 plies.
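To make the hash-table point concrete, here is a minimal sketch of why a transposition table matters so much: different move orders reach the same position, and the table lets the search reuse the earlier result instead of searching the subtree again. This is nobody's actual engine code; the string "positions", the fake evaluation, and the two-move game are purely illustrative.

```python
TT = {}

def child(pos, move):
    # Toy "board": a sorted string of the moves played. Sorting makes
    # move order irrelevant, so different move orders transpose into
    # the same position, as on a real board.
    return "".join(sorted(pos + move))

def search(pos, depth, use_tt, computed):
    key = (pos, depth)            # real engines use Zobrist hashes instead
    if use_tt and key in TT:
        return TT[key]            # transposition hit: no re-search needed
    computed[0] += 1
    if depth == 0:
        value = len(pos)          # stand-in for a static evaluation
    else:
        value = max(-search(child(pos, m), depth - 1, use_tt, computed)
                    for m in "ab")
    if use_tt:
        TT[key] = value
    return value

no_tt, with_tt = [0], [0]
v1 = search("", 4, False, no_tt)
TT.clear()
v2 = search("", 4, True, with_tt)
print(v1, no_tt[0])    # 4 31
print(v2, with_tt[0])  # 4 15
```

On this toy tree the table cuts the searched nodes from 31 to 15; in a real engine the same effect compounds over dozens of plies, which is what makes depths like 30 reachable at all. Real engines also store depth, bound type, and a best move in each entry, not just a value.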
We are now 20 years past the time that Ed's algorithm dominated computer chess, however. Maybe Ed should ask someone who is good at scientific publications to write the algorithm down carefully, as Rebel dominated computer chess from the end of the 80s well up to 1998.

In Diep, the hash table has always played a crucial role, so it can't do without one.
You cannot rival Ed's algorithm tactically in the first 100k nodes you search.

If you implemented it in Stockfish and played superbullet games on a single core, it would probably win everything.

Only with a bunch of cores and a somewhat more serious time control would it lose, of course.

That tells you more about superbullet testing than anything else.
Vincent
What was in Rebel back then is what is now called "static nullmove reductions", exploited to the extreme. Then it became known that dynamic recursive nullmove was more powerful. Frans was the first with it, immediately took the lead with Fritz 5, and stayed on top for a couple of years. Ever since then (although only in retrospect) I have considered my engine outdated.
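For readers who don't know the technique by that name: below is a minimal sketch of static nullmove pruning (today often called reverse futility pruning) on a toy game tree. This is not Rebel's actual, unpublished algorithm; the margin of 120 per ply, the depth <= 3 guard, and the hand-built tree are illustrative assumptions. Evals are from the side to move's point of view.

```python
INF = 10_000

def leaf(v):
    return {"eval": v}

def node(v, kids):
    return {"eval": v, "children": kids}

def negamax(n, depth, alpha, beta, use_static_null, counter):
    counter[0] += 1
    kids = n.get("children")
    if depth == 0 or not kids:
        return n["eval"]
    # Static nullmove pruning: if the static eval already exceeds beta
    # by a safety margin, fail high without searching a single move.
    if use_static_null and depth <= 3 and n["eval"] - 120 * depth >= beta:
        return n["eval"]
    best = -INF
    for child in kids:
        best = max(best, -negamax(child, depth - 1, -beta, -alpha,
                                  use_static_null, counter))
        alpha = max(alpha, best)
        if alpha >= beta:
            break  # ordinary beta cutoff
    return best

tree = node(0, [
    node(-295, [leaf(310), leaf(295)]),  # the root side's best line
    node(100, [leaf(-50), leaf(-60)]),   # statically fine for the opponent,
                                         # but far below the root's best
])

full, pruned = [0], [0]
v_full = negamax(tree, 2, -INF, INF, False, full)
v_pruned = negamax(tree, 2, -INF, INF, True, pruned)
print(v_full, full[0])      # 295 6
print(v_pruned, pruned[0])  # 295 5
```

Here the pruned search reaches the same root value while never entering the second subtree. The risk, of course, is that the margin is wrong and a tactic hides below the pruned node, which is exactly the trade-off the margin tunes.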
Surely you can mix static nullmove with modern LMR. In December last year the old virus hit me again, while I thought I was immune to it by now, and in one month I was able to add 30-40 Elo doing exactly that: mixing old stuff with new. Obviously the new stuff is much more powerful. A normal evolutionary process.
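A sketch of that kind of mix, static nullmove (old) and late move reductions (new) living in the same move loop, again on a toy tree. The margins, the "reduce from the third move on by one ply, re-search at full depth if it beats alpha" rule, and the tree are illustrative assumptions, not Ed's actual implementation.

```python
INF = 10_000

def leaf(v):
    return {"eval": v}

def node(v, kids):
    return {"eval": v, "children": kids}

def search(n, depth, alpha, beta):
    kids = n.get("children")
    if depth <= 0 or not kids:
        return n["eval"]
    # Old: static nullmove pruning, with an assumed 120-per-ply margin.
    if depth <= 3 and n["eval"] - 120 * depth >= beta:
        return n["eval"]
    best = -INF
    for i, child in enumerate(kids):
        # New: LMR. Moves late in the (assumed already sorted) move
        # list are first searched one ply shallower.
        reduction = 1 if (i >= 2 and depth >= 2) else 0
        score = -search(child, depth - 1 - reduction, -beta, -alpha)
        if reduction and score > alpha:
            # The reduced search beat alpha: re-search at full depth.
            score = -search(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

tree = node(0, [
    node(-90, [leaf(100)]),              # searched at full depth
    node(50, [leaf(-40)]),               # static nullmove fails high here
    node(-150, [leaf(190), leaf(200)]),  # reduced, then re-searched
])

print(search(tree, 2, -INF, INF))  # 190
```

The third move is reduced as a "late" move, beats alpha at the reduced depth, and is then re-searched at full depth, where it turns out best; meanwhile the static nullmove cut still fires inside the second subtree. That is the mixing: the two tricks do not exclude each other.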