Hart wrote: I don't know if the question is taboo or even appropriate, but I ask anyway out of sheer inquisitive curiosity. With the sources of Ippolit/RobboLito, so-called "clones" of Rybka 3, are we any closer to answering such questions, and if we are, is it acceptable to discuss them in public, or are these discussions to remain in dark corners or behind closed doors?
I don't believe there is anything special in Rybka, it's just the program as a whole that is special. I think you will be disappointed if you believe there is some secret that Vas is keeping in the closet somewhere.
I looked at the sources of IPP and the only thing that stood out to me is that it is probably more selective than most or all programs that are really strong, but I did not see any special mechanism for doing this, just the same type of stuff we are doing anyway. Vas is just doing it better.
Another thing that stands out to me is that it has an excellent evaluation function. It may be the best evaluation function of any of the programs, yet it is still probably weaker in some areas than some other good programs. I have heard that king safety is one of the weak points, although it is improved over the previous versions.
There is no secret in Rybka that you can just transfer over to your program in a few minutes and then expect to compete with Rybka.
- Don
Someone has pointed out (privately) that there is a subtle singular extension algorithm in the thing that is probably worth something. I've not looked at it myself yet, so I can't comment on how well that feature works. A personal email asked questions about specific code and gave the code along with an explanation, and it was pretty obvious what it was doing; it is just another point in favor of moving an entry on my to-do list up a bit. I plan on re-visiting this again.
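For readers unfamiliar with the idea: singular extensions test whether the best move is the *only* good move by re-searching the position with that move excluded, and extend it if everything else fails low. The sketch below is a toy illustration of the generic exclusion-search formulation over a dict-based tree; it is not Rybka's actual algorithm, and the node layout, `MARGIN`, and all names are invented for illustration.

```python
INF = 10 ** 9
MARGIN = 50  # hypothetical singularity margin, in centipawns

def search(node, depth, alpha, beta, skip=None):
    """Plain fail-soft negamax over a toy tree; `skip` excludes one move index."""
    if depth == 0 or not node["c"]:
        return node["v"]              # static eval from the side to move
    best = -INF
    for i, child in enumerate(node["c"]):
        if i == skip:
            continue                  # exclusion search skips the candidate move
        score = -search(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                     # beta cutoff
    return best

def is_singular(node, best_index, best_score, depth):
    """The best move is 'singular' if every other move fails low against
    a zero window just below best_score - MARGIN."""
    target = best_score - MARGIN
    score = search(node, depth - 1, target - 1, target, skip=best_index)
    return score < target

# one move is clearly best here, so it would be flagged for extension
pos = {"v": 0, "c": [{"v": -100, "c": []},   # candidate move, scores 100
                     {"v": -30,  "c": []},   # alternatives fail low
                     {"v": -10,  "c": []}]}
```

In a real engine the candidate would be the hash move and the exclusion search would run at a reduced depth inside the main search; this just shows the shape of the test.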
I'm not claiming that Rybka doesn't have any little goodies in it, I'm just saying that I don't believe anything in particular is a breakthrough. It is essentially just a very well engineered program.
You have to know I agree with you there. I've said hundreds of times in the past that we have not seen anything "revolutionary" since alpha/beta. It has been a steady path of evolutionary progress, with some jumps that might be a bit bigger than others, but nothing _major_...
Here is what I consider the big "breakthroughs" if you want to call it that:
1. Shannon's laying out of the basic algorithm
2. stand-pat quiescence search of captures only.
3. check extensions
4. hash tables
5. null move pruning
6. reductions of various sorts.
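Item 2 can be sketched in a few lines. This is a toy model, not any particular engine's code: positions are dicts with a static eval `v` (from the side to move) and a list `t` of positions reachable by tactical moves (captures) only; the names are hypothetical.

```python
INF = 10 ** 9

def qsearch(node, alpha, beta):
    """Captures-only search with a stand-pat cutoff."""
    stand = node["v"]              # static eval, side to move
    if stand >= beta:
        return stand               # stand pat: already good enough, stop
    alpha = max(alpha, stand)
    best = stand                   # we may always decline all captures
    for child in node["t"]:        # only tactical moves are generated
        score = -qsearch(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                  # beta cutoff
    return best

pos = {"v": 0, "t": [
    {"v": -3, "t": []},                     # capture leaving us +3
    {"v": -1, "t": [{"v": 2, "t": []}]},    # capture with a recapture
]}
```

The stand-pat value is what makes this a quiescence search rather than an endless capture chase: the side to move can always claim the static score instead of capturing.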
I'm not sure that #2 was ever not done. I had a q-search as far back as I can remember. It was in chess 4.x in 1974 or so also. Null-move pruning is certainly more than 50 Elo, but reductions really are not if you already have null-move included. Or reductions are more than 50, but then null-move is not, assuming you add reductions first. They are somewhat complementary since they overlap with the same underlying idea of reducing the depth.
I would likely replace #2 above with alpha/beta, which wasn't in Shannon's paper at all; it came from Newell, Shaw and Simon a few years later.
Iterative deepening is also a pretty good idea, coming from Slate/Atkin of course.
You could easily include other things such as MVV/LVA, bitboards, square of the pawn, futility pruning and so on, but I don't think they are quite as big as the above. Perhaps check extensions is the most controversial, but it was a huge deal when the idea suddenly burst onto the scene.
When you use the word "breakthrough" I'm looking for at least 50 ELO or more.
I'm sure someone else would produce a somewhat different list - but this is what stands out to me as the major jumps in computer chess.
- Don
When I turn on/off LMR (even after having null move pruning on) I get at least 100 ELO. LMR is powerful for my program.
- Don
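For reference, here is a minimal LMR sketch over a toy game tree, using the common reduce-then-re-search formulation: late moves get a reduced zero-window search first, and a full-depth re-search only if the reduced search unexpectedly beats alpha. The node layout and the reduction condition are assumptions for illustration, not Don's actual code.

```python
INF = 10 ** 9

def negamax(node, depth, alpha, beta, lmr, stats):
    stats["nodes"] += 1
    if depth == 0 or not node["c"]:
        return node["v"]                  # static eval, side to move
    best = -INF
    for i, child in enumerate(node["c"]):
        if lmr and i >= 1 and depth >= 2:
            # late move: reduced zero-window search first (R = 1 here)
            score = -negamax(child, depth - 2, -alpha - 1, -alpha, lmr, stats)
            if score > alpha:             # surprisingly good: verify at full depth
                score = -negamax(child, depth - 1, -beta, -alpha, lmr, stats)
        else:
            score = -negamax(child, depth - 1, -beta, -alpha, lmr, stats)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                         # beta cutoff
    return best

def leaf(v):
    return {"v": v, "c": []}

# first move is best; the late moves fail low cheaply at reduced depth
pos = {"v": 0, "c": [
    leaf(-5),                             # best move, always full depth
    {"v": -1, "c": [leaf(0)]},            # late move, reduced
    {"v": 3,  "c": [leaf(-4)]},           # late move, reduced
]}
```

Because the best move comes first, the reduced searches of the late moves fail low and the tree shrinks while the root score stays the same; good move ordering is what makes LMR cheap.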
It is weird that it is so powerful.
What I did in 1999 is similar to Ippolit/Rybka, and the reductions I used back then are not that far off from what you call LMR now.
Each ply I treated back then as 2 units. With a chess-specific function I selected which moves not to reduce and tried to limit it. Bad captures also got reduced, for example.
Back then search depths were not so big, so the actual ply-depth win was far less than the average risk of using it.
Note I'm not using LMR in today's Diep; the technique in itself is not so great positionally.
I wouldn't call LMR, history pruning, or other forms of reductions a big breakthrough. I was already using the reduction concept back in 1999, and it was obvious that others had already been using it in the 80s.
Ed Schroder is a good example.
If you look at today's chess software you'll soon realize they all use reductions in some form or other, so there is not one generic idea there that really works well, and nearly all of them 'invented' the concept of reductions themselves.
Null move is a real breakthrough however, especially recursive null move. AFAIK the first one to really apply it well was Frans Morsch. Only after Morsch won the 1995 world championship and said out loud that this was because he had invented recursive null move did everyone really start using it big time.
In the first tournament after the 1995 world championship I can't remember many opposing the idea.
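A minimal sketch of recursive null-move pruning, over a made-up toy game in which passing is harmless (no zugzwang), which is exactly the assumption null move makes about most chess positions. The game, the reduction R = 1, and all names are invented for illustration; real engines use larger R, avoid null moves in check, and verify in zugzwang-prone endgames.

```python
INF = 10 ** 9
MOVES = (1, 2)   # toy game: add 1 or 2 to the running score,
                 # then hand the negated score to the opponent

def negamax(s, depth, alpha, beta, use_null):
    if depth == 0:
        return s                          # static eval = running score
    if use_null and depth >= 2 and beta < INF:
        # null move: pass, let the opponent move from the same position,
        # searched at reduced depth (R = 1 here) with a zero window.
        # use_null is passed along, so this is *recursive* null move.
        null_score = -negamax(-s, depth - 2, -beta, -beta + 1, use_null)
        if null_score >= beta:
            return null_score             # doing nothing already fails high
    best = -INF
    for d in MOVES:
        score = -negamax(-(s + d), depth - 1, -beta, -alpha, use_null)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                         # beta cutoff
    return best
```

In this toy the null-move version returns the same root score as the plain search; in chess the trade-off is the same idea with real zugzwang risk attached.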
There are other real inventions in search that simply do not work for computer chess but might work in other areas of game-tree search.
Good examples are multi-cut (Yngvi Bjornsson) and MTD(f). Neither works for computer chess, but they are real inventions.
And yes, I have also invented several algorithms/enhancements that I intend to use in future Dieps. A big problem is that no one pays for investing half a year full-time investigating a new feature, which is definitely needed to get a lot out of it. Meanwhile others simply take over what you invent by looking at your assembler code, or manage to steal your source code in yet another new manner. That has happened a lot at the world top of computer chess, and it isn't different in other fields.
A second problem with search is that it's not the most important thing.
Most people underestimate the importance of the evaluation function.
If I were to put today's Diep evaluation into the Diep 1998 search (so not the reduction-type 1999 search), it would beat any of today's engines hands down, provided they were equipped with an evaluation function from 1998 (really *any* program's).
The search-depth difference between Diep 1998 and a present-day engine's search on today's hardware would probably be huge. Back in 1998 Diep already did null move with R=3, R=2 and R=1, a subset of which, as I had posted on the internet (to RGCC, already back in 1996 or so), was later claimed by Ernst A. Heinz as adaptive null move, which is weird. Really, a lot of what I did back then is far more optimized in today's searches. Sometimes things also come back for a short while. For example, back then I used to try more moves in qsearch than just giving check, escaping check, and captures. Not the Said Koudache idea of trying killer/history moves in qsearch, but a more limited version of that. Amazingly, that seems to be popular again today in some engines. You probably don't suffer too much from this if you do hard forward pruning in the last 3.5 plies when your eval is a pawn or so above beta, but the effect is basically that of checking the mainline deeper; the rest I consider randomness.
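The "hard forward pruning in the last plies when eval is above beta" mentioned above can be sketched as below. This is a toy, not Diep's actual rule: dict nodes, a 2-ply horizon instead of 3.5, and a one-pawn margin are all assumptions.

```python
INF = 10 ** 9
PAWN = 100   # one pawn, in centipawns (assumed scale)

def search(node, depth, alpha, beta, prune, stats):
    stats["nodes"] += 1
    if depth == 0 or not node["c"]:
        return node["v"]                  # static eval, side to move
    # hard forward pruning near the horizon: if the static eval is already
    # a pawn above beta, trust it and do not search this subtree at all
    if prune and depth <= 2 and node["v"] >= beta + PAWN:
        return node["v"]
    best = -INF
    for child in node["c"]:
        score = -search(child, depth - 1, -beta, -alpha, prune, stats)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                         # beta cutoff
    return best

def leaf(v):
    return {"v": v, "c": []}

pos = {"v": 0, "c": [
    leaf(-50),                            # best line for the side to move
    {"v": 60, "c": [leaf(-70)]},          # already far above beta here:
                                          # pruned instead of searched
]}
```

The pruned subtree is exactly a "mainline checker" trade-off: nodes far above beta near the horizon get their static eval trusted so the main line can be checked deeper.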
In any case don't ever forget that evaluation functions have to keep up.
The biggest progress of all engines is the evaluation function.
Today's evaluation functions are so bug-free compared to what was there in the 90s, it's just not normal.
I don't really see most of today's software as interesting from a search viewpoint. Most are just mainline checkers that prune the rest.
Changes of lemma and vision are always the most important thing in history, even more than the actual latest state of the tricks.
Sure, Diep has the biggest evaluation function, and once bug-fixed it will of course kick more butt than tiny evaluation functions. Sure, I did invent something in search as well, and when I put it to work it will give another 50+ Elo. And yes, it might result in the strongest engine on the planet, who knows. Yet if I just debug the EXISTING evaluation function a lot and improve it a tad, is that big progress for the field?
I don't think so.
New algorithms might get seen as progress, yet they don't give Elo by themselves.
The best ideas and lemma changes usually come from guys who just didn't make it to world champion or best engine: artificial-intelligence folks who were good enough to understand what currently mattered in game-tree search, but who, by losing time to investigating new algorithms/enhancements, already made sure they couldn't produce the best engine.
Considering that, I'd argue one of the more important figures in game-tree search of the past 5 years is Fabien Letouzey, as his ideas, directly or indirectly, in one form or another boosted computer chess and are still busy boosting it. If you just look at his engines' source code, you won't find all of that, however.
In that sense open source overshoots its target completely.
Vincent