EGTB value

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Mangar
Posts: 65
Joined: Thu Jul 08, 2010 9:16 am

Re: EGTB value

Post by Mangar »

Hi Bob,

Any possibility of using an SSD for the test? Current SSDs provide about 10K random reads per second of 4K blocks. Small SSDs, large enough for all chess stuff including EGTBs, are not that expensive any more.

Greetings Volker
Mangar Spike Chess
rbarreira
Posts: 900
Joined: Tue Apr 27, 2010 3:48 pm

Re: EGTB value

Post by rbarreira »

zamar wrote:
rbarreira wrote: I realize that if the tablebases are used during search that would slow down the search, but as someone else said earlier it should be good to at least use them in the root (i.e. if current game position is in the tablebase, play the move suggested there).
The modern chess engines are such monsters with their 20-30 ply searches that if there is a win in a 5-piece position they will find it with >99% probability, so using EGTBs might give 1-2 Elo at maximum. Linking the Gaviota probing code into SF made it around 1% slower when compiling with GCC (I really don't know why, but it is likely somehow related to the increased size of the executable), which means -1 Elo.

So it's very close to +-0 as Bob has already said.
Well if that's true that's all that really needed to be said, I wasn't aware of it.

But is that true for short time controls? Would an engine still find the win with 5 pieces 99% of the time, even playing against an engine that uses EGTBs?
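Joona's "1% slower, so -1 Elo" conversion matches the common rule of thumb that each doubling of search speed is worth on the order of 50-70 Elo. A quick sketch; the 70-Elo-per-doubling constant is an assumption, not a figure from this thread:

```python
import math

def elo_from_speed_ratio(ratio, elo_per_doubling=70):
    """Approximate Elo change from a search-speed ratio.

    Rule of thumb (assumption): each doubling of speed is worth
    roughly 50-70 Elo; Elo change scales with log2 of the ratio.
    """
    return elo_per_doubling * math.log2(ratio)

# A 1% slowdown (ratio 0.99) costs about 1 Elo:
print(round(elo_from_speed_ratio(0.99), 1))  # -1.0
```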
Last edited by rbarreira on Tue Sep 28, 2010 2:14 pm, edited 1 time in total.
Gerard Taille

Re: EGTB value

Post by Gerard Taille »

Hi,
Why not put the complete 5-piece DB in memory and ignore the 6-piece DB, in order to avoid disk I/O altogether?
On a "monster" it seems almost negligible to reserve, say, 10 GB or more for the 5-piece databases, isn't it?
On a smaller computer, why not keep only the most useful databases in memory?
Jouni
Posts: 3293
Joined: Wed Mar 08, 2006 8:15 pm

Re: EGTB value

Post by Jouni »

There was another test in Rybka forum showing a small benefit:

Final Report
----------------

I have finished the EGTB test of 10,000 games. Rybka 3 with EGTBs 3-4-5 and a 180 GB 6-man set has defeated Rybka 3 without EGTBs in a single-CPU, fixed-depth 8-ply match:

Rybka 3 EGTB +2064/=6237/-1699 51.83% 5182.5/10000
Rybka 3 no EGTB +1699/=6237/-2064 48.18% 4817.5/10000

This match favored the EGTB engine by 13 ELO.

Setup
--------
Time control - fixed depth 8 ply
System - Intel i7-920 @2.67ghz , hyperthreading disabled
RAM - 6gb
CPUs - 1 thread per engine
Engine - Rybka 3
Engine hash - 512mb
EGTB hash - 32mb
Engine tablebases - 3-4-5-6 man (180gb 6man)
GUI tablebases - none
Nalimov usage - "Never" for one engine and "Normal" for the other
Opening book - large private book, learning functions disabled, variety at 75%
Engine match options - alternating colors
Game options - Resign : never and Draw : never
OS - Vista x64
GUI - Chessbase Rybka 3 (version May 4 2009)

Two dubious things: R3 vs. R3, and fixed depth (isn't EGTB access just making the engine slower?)...

Jouni
Last edited by Jouni on Tue Sep 28, 2010 9:24 pm, edited 1 time in total.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: EGTB value

Post by bob »

michiguel wrote:
bob wrote:
Uri Blass wrote:
bob wrote:
rbarreira wrote:I haven't looked at this in detail, but it looks very weird to me that we have databases of perfect moves and we can't make use of them to improve play.

I realize that if the tablebases are used during search that would slow down the search, but as someone else said earlier it should be good to at least use them in the root (i.e. if current game position is in the tablebase, play the move suggested there).
There are several issues involved.

(1) probing in the search slows things down. 10 years ago, this was not so noticeable. Today, with an effective branching factor well below 2.0 in endgames, it can be a huge loss.
I do not understand why probing in the search has to slow things down (assuming that you do not probe at every node, but only when the remaining depth is big enough that searching it would take longer than probing the tablebases).

I think that the first test before playing games should be if you get the same depth faster in endgame analysis.

If you get the same depth faster but still do not gain rating points in games, then that is interesting (and maybe the reason is that you are slightly slower in opening and middlegame positions when not probing, because checking whether you need to probe after a capture costs time).
Simple math. On my 8-core box, Crafty searches about 20M nodes per second, up to 30M in the endgame. On a good disk, you can do a read every 5 ms, or about 200 reads per second. Compare the speeds: during the time it takes to do a single read, I can search 150K nodes. That is huge. For every 200 I/O accesses, I could search another 30M nodes. The cost of checking whether to probe is roughly zero, since in the opening and middlegame that branch gets predicted 100% correctly. But when you get down to 12-16 total pieces, you begin to see EGTB probes. Each successful probe costs about 5 ms if a disk access is required; actually quite a bit more, since you read fairly large compressed blocks and then have to spend time uncompressing a block before the probe can be completed...

Either you rarely probe, which doesn't help much, or you probe a lot, which starts to cost multiple plies.
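Bob's arithmetic can be written out directly; a sketch using only the figures quoted in his post (20-30M NPS, ~5 ms per random disk read):

```python
def nodes_forgone_per_read(nps, read_latency_s):
    """Nodes the search could have visited during one blocking disk read."""
    return int(nps * read_latency_s)

DISK_LATENCY = 0.005  # ~5 ms per random read -> about 200 reads/second

# Opening/middlegame speed (20M NPS) and endgame speed (30M NPS):
print(nodes_forgone_per_read(20_000_000, DISK_LATENCY))  # 100000
print(nodes_forgone_per_read(30_000_000, DISK_LATENCY))  # 150000
# Per second of pure I/O: 200 reads trade away ~30M nodes of search.
```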
Your 5 ms figure assumes no cache. But with a cache that makes you read from disk only 5% of the time, and a fast decompression scheme that is effectively faster than reading uncompressed data, you end up with an average figure of 0.3 ms (and we are not even talking about SSDs). That means that if you probe only at nodes whose remaining search would cost at least 10K nodes, the effect on speed will be negligible. So you can probe "relatively" close to the leaves with no time-to-depth cost (in fact it should drop, because of the pruning performed at nodes closer to the root). What depth can you reach with 10K nodes? That is "approximately" the distance from the leaves at which you can _safely_ afford to probe.

Of course, these numbers may not apply to Nalimov, but they do apply to the Gaviota TBs.
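Miguel's expected-cost argument, put in numbers. The 95% hit rate and 5 ms miss cost are his figures; treating a cache hit as free is an extra simplifying assumption (his 0.3 ms average also folds in decompression overhead):

```python
def expected_probe_cost(hit_rate, hit_cost_s, miss_cost_s):
    """Average probe latency given a tablebase cache."""
    return hit_rate * hit_cost_s + (1 - hit_rate) * miss_cost_s

# 95% of probes served from cache (assume ~0 cost), 5% hit the disk at 5 ms:
avg = expected_probe_cost(0.95, 0.0, 0.005)
print(round(avg * 1000, 3))  # 0.25 (ms)

def breakeven_subtree_nodes(nps, avg_probe_cost_s):
    """Probe only where the subtree would cost more nodes than the probe."""
    return round(nps * avg_probe_cost_s)

print(breakeven_subtree_nodes(20_000_000, avg))  # 5000
```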

Miguel
The cache is much less significant when you factor in the 6-piece files that many are using. Suddenly the cache becomes almost useless due to the 2 GB file sizes, and many files have to be broken into 2 GB chunks to work on 32-bit systems.


My observations are exactly that: observations. Here is one example, using just one CPU to avoid any SMP issues. This is Fine #70, not a particularly good EGTB position since it takes a while to reach a position with just 5 pieces left (I am only using the 3-4-5 piece files for this test). Both versions find the correct move, Kb1, at depth 24 in the same amount of time. But as the search progresses... First normal Crafty with no EGTBs, then the same depth using EGTBs:

Code: Select all

               37->   0.16   4.27   1. Kb1 Kb7 2. Kc1 Kc7 3. Kd1 Kd7 4.
                                    Kc2 Kc7 5. Kd3 Kb7 6. Ke3 Kc7 7. Kf3
                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf7 10. Kg5 Kg7
                                    11. Kxf5 Kf7 12. Kg5 Kg7 13. f5 Kf7
                                    14. f6 Kf8 15. Kg4 Kg8 16. Kf4 Kf8
                                    17. Kg5 Ke8 18. Kg4 Kf8 19. Kf5

               37->   3.20   4.27   1. Kb1 Kb7 2. Kc1 Kc7 3. Kd1 Kd7 4.
                                    Kc2 Kc7 5. Kd3 Kb7 6. Ke3 Kc7 7. Kf3
                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf7 10. Kg5 Kg7
                                    11. Kxf5 Kf7 12. Kg5 Kg7 13. f5 Kf7
                                    14. f6 Kf8 15. Kg4 Kg8 16. Kf4 Kf8
                                    17. Kg5 Ke8 18. Kg4 Kf8 19. Kf5
20x _slower_ with EGTBs.

Code: Select all

               46->   2.06   7.11   1. Kb1 Kb7 2. Kc1 Kc7 3. Kd1 Kd7 4.
                                    Kc2 Kc7 5. Kd3 Kb7 6. Ke3 Kc7 7. Kf3
                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf7 10. Kg5 Kg7
                                    11. Kxf5 Kf7 12. Kg5 Kg7 13. f5 Kf7
                                    14. f6 Kf8 15. Kg4 Kg8 16. Kf4 Kf8
                                    17. Kg5 Kf7 18. Kf5 Kf8 19. Ke6 Ke8
                                    20. Kxd6 Kf7 21. Ke5 Kf8 22. Ke6 Kg8
                                    23. d6 Kf8

               46->   1:00  16.73   1. Kb1 Kb7 2. Kc1 Kc7 3. Kd1 Kd7 4.
                                    Kc2 Kc7 5. Kd3 Kb7 6. Ke3 Kc7 7. Kf3
                                    Kd7 8. Kg3 Ke7 9. Kh4 Kf7 10. Kg5 Kg7
                                    11. Kxf5 Kf7 12. Ke4 Kf6 13. f5 Kg5
                                    14. Kd3 Kh5 15. Kc4 Kg5 16. f6 Kg6
                                    17. Kb5 Kf7 18. Kxa5 Ke8 19. f7+ Kf8
                                    20. Kb6 Kg7 21. f8=Q+ Kg6 22. Qxd6+
                                    Kf7 23. Qc7+ Ke8 24. Qc8+ Ke7 25. Qc5+
                                    Kd7 26. Qc7+ Ke8
30x slower there. Current Crafty probes only up to 1/2 the nominal iteration depth; for the 46-ply search, it probes only at plies <= 23. This is quite a ways from probing only at the root, and in fact:

predicted=0 evals=1.6M 50move=0 EGTBprobes=53K hits=53K

That is the search statistics line: 53K probes, 30x slower. NPS?

Code: Select all

              time=2.06  mat=1  n=8711502  fh=94%  nps=4.2M
              time=1:00  mat=1  n=21506138  fh=92%  nps=356K
NPS dropped by 10x. That is why this shows no improvement in cluster testing, whether I use very fast games or very slow ones. The overall loss in NPS _really_ hurts search depth when the middlegame branching factor is 2.0 or less, enough so that the loss in depth offsets the gain in perfect knowledge...
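The probe gate described above (probe only at plies up to half the iteration depth, and only once the piece count is low enough for a table to exist) reduces to a one-line condition in the search. A sketch with hypothetical names, not Crafty's actual code:

```python
EGTB_PIECE_LIMIT = 5  # only the 3-4-5 piece files are loaded in this test

def should_probe(ply, iteration_depth, piece_count):
    """Probe only near the root: at plies <= half the iteration depth,
    and only when few enough pieces remain for a tablebase to apply."""
    return piece_count <= EGTB_PIECE_LIMIT and ply <= iteration_depth // 2

# For the 46-ply search in the post, probes happen only at plies <= 23:
print(should_probe(23, 46, 5))  # True
print(should_probe(24, 46, 5))  # False
```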

Even fast disks don't save the day. And when you add the 6-piece files, boom.

For fun, on the fastest 64gb SSD I have:

Code: Select all

time=1.64  mat=1  n=8711502  fh=94%  nps=5.3M
time=12.99  mat=1  n=21506138  fh=92%  nps=1.7M
6x slower there. With a time limit of exactly 1 second, Crafty reaches 44 plies without EGTBs and only 38 with them before time runs out: 6 plies lost. This is with all 3-4-5 piece files, again on Fine #70 for comparison. I can test any position you want, but EGTBs really have a negative impact, even on a laptop with 4 GB of RAM for buffering and a 64 GB SSD for storage.

....
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: EGTB value

Post by bob »

Mangar wrote:Hi Bob,

Any possibility of using an SSD for the test? Current SSDs provide about 10K random reads per second of 4K blocks. Small SSDs, large enough for all chess stuff including EGTBs, are not that expensive any more.

Greetings Volker
See my other post. I used the fastest 64 GB SSD I am aware of. With the probe limit set at 1/2 the iteration depth, it still slows things down by a factor of 6x, with 128 MB of EGTB cache, on a 4 GB machine that can cache a lot of "stuff".
User avatar
michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: EGTB value

Post by michiguel »

Jouni wrote:There was another test in Rybka forum showing a small benefit:

Final Report
----------------

I have finished the EGTB test of 10,000 games. Rybka 3 with EGTBs 3-4-5 and a 180 GB 6-man set has defeated Rybka 3 without EGTBs in a single-CPU, fixed-depth 8-ply match:

Rybka 3 EGTB +2064/=6237/-1699 51.83% 5182.5/10000
Rybka 3 no EGTB +1699/=6237/-2064 48.18% 4817.5/10000

This match favored the EGTB engine by 13 ELO.

Setup
--------
Time control - fixed depth 8 ply
System - Intel i7-920 @2.67ghz , hyperthreading disabled
RAM - 6gb
CPUs - 1 thread per engine
Engine - Rybka 3
Engine hash - 512mb
EGTB hash - 32mb
Engine tablebases - 3-4-5-6 man (180gb 6man)
GUI tablebases - none
Nalimov usage - "Never" for one engine and "Normal" for the other
Opening book - large private book, learning functions disabled, variety at 75%
Engine match options - alternating colors
Game options - Resign : never and Draw : never
OS - Vista x64
GUI - Chessbase Rybka 3 (version May 4 2009)

But maybe playing against the same program isn't a good test at all. BTW Bob, what are your exact test conditions?

Jouni
The problem is that it is fixed depth. That ignores 1) changes in NPS and 2) the pruning effect of EGTBs (one goes in one direction and the other in the opposite).

Anyway, this certainly shows the "potential" of 6-piece EGTBs.

Miguel
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: EGTB value

Post by bob »

rbarreira wrote:
zamar wrote:
rbarreira wrote: I realize that if the tablebases are used during search that would slow down the search, but as someone else said earlier it should be good to at least use them in the root (i.e. if current game position is in the tablebase, play the move suggested there).
The modern chess engines are such monsters with their 20-30 ply searches that if there is a win in a 5-piece position they will find it with >99% probability, so using EGTBs might give 1-2 Elo at maximum. Linking the Gaviota probing code into SF made it around 1% slower when compiling with GCC (I really don't know why, but it is likely somehow related to the increased size of the executable), which means -1 Elo.

So it's very close to +-0 as Bob has already said.
Well if that's true that's all that really needed to be said, I wasn't aware of it.

But is that true for short time controls? Would an engine still find the win with 5 pieces 99% of the time, even playing against an engine that uses EGTBs?
The issue might be a bit more easily explained than even what Joona mentioned.

You have two classes of positions: (a) those your search can win by itself; and (b) those it can only win with EGTBs. In my testing, almost all 3-4-5 piece endings fit into (a). A few tricky ones (KNNKP perhaps, KQPKQ) may need some help, not because a 40+ ply search can't win them, but because if we encounter one of those positions near the horizon we will mis-evaluate it.

Adding EGTBs hurts case (a) because it slows you down significantly. It helps (b) because you now have perfect knowledge at parts of the tree where you can afford to access it. You help a very small part of the total positions you search, you hurt a huge part of them. Overall, things seem to wash out, regardless of the time control used.

I have not yet tried bitbases, so can't address that issue yet.

The most important part of EGTBs is not "winning" won positions; it is correctly classifying as won/lost/drawn those positions in the search where the normal evaluation produces an incorrect result, so that you don't stumble from a winning position into one you would rather avoid, etc...
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: EGTB value

Post by bob »

Jouni wrote:There was another test in Rybka forum showing a small benefit:

Final Report
----------------

I have finished the EGTB test of 10,000 games. Rybka 3 with EGTBs 3-4-5 and a 180 GB 6-man set has defeated Rybka 3 without EGTBs in a single-CPU, fixed-depth 8-ply match:

Rybka 3 EGTB +2064/=6237/-1699 51.83% 5182.5/10000
Rybka 3 no EGTB +1699/=6237/-2064 48.18% 4817.5/10000

This match favored the EGTB engine by 13 ELO.

Setup
--------
Time control - fixed depth 8 ply
System - Intel i7-920 @2.67ghz , hyperthreading disabled
RAM - 6gb
CPUs - 1 thread per engine
Engine - Rybka 3
Engine hash - 512mb
EGTB hash - 32mb
Engine tablebases - 3-4-5-6 man (180gb 6man)
GUI tablebases - none
Nalimov usage - "Never" for one engine and "Normal" for the other
Opening book - large private book, learning functions disabled, variety at 75%
Engine match options - alternating colors
Game options - Resign : never and Draw : never
OS - Vista x64
GUI - Chessbase Rybka 3 (version May 4 2009)

Two dubious things: R3 vs. R3, and fixed depth (isn't EGTB access just making the engine slower?)...

Jouni
Fixed depth is simply bogus. The EGTBs will normally cost plies, particularly in endgames. This is yet another worthless result. Yes, if I could use EGTBs with _no_ performance penalty, they would help. But that is not reality.
rbarreira
Posts: 900
Joined: Tue Apr 27, 2010 3:48 pm

Re: EGTB value

Post by rbarreira »

bob wrote:
rbarreira wrote:
zamar wrote:
rbarreira wrote: I realize that if the tablebases are used during search that would slow down the search, but as someone else said earlier it should be good to at least use them in the root (i.e. if current game position is in the tablebase, play the move suggested there).
The modern chess engines are such monsters with their 20-30 ply searches that if there is a win in a 5-piece position they will find it with >99% probability, so using EGTBs might give 1-2 Elo at maximum. Linking the Gaviota probing code into SF made it around 1% slower when compiling with GCC (I really don't know why, but it is likely somehow related to the increased size of the executable), which means -1 Elo.

So it's very close to +-0 as Bob has already said.
Well if that's true that's all that really needed to be said, I wasn't aware of it.

But is that true for short time controls? Would an engine still find the win with 5 pieces 99% of the time, even playing against an engine that uses EGTBs?
The issue might be a bit more easily explained, than even what Joona mentioned.

You have two classes of positions: (a) those your search can win by itself; and (b) those it can only win with EGTBs. In my testing, almost all 3-4-5 piece endings fit into (a). A few tricky ones (KNNKP perhaps, KQPKQ) may need some help, not because a 40+ ply search can't win them, but because if we encounter one of those positions near the horizon we will mis-evaluate it.

Adding EGTBs hurts case (a) because it slows you down significantly. It helps (b) because you now have perfect knowledge at parts of the tree where you can afford to access it. You help a very small part of the total positions you search, you hurt a huge part of them. Overall, things seem to wash out, regardless of the time control used.

I have not yet tried bitbases, so can't address that issue yet.

The most important part of EGTBs is not "winning" won positions; it is correctly classifying as won/lost/drawn those positions in the search where the normal evaluation produces an incorrect result, so that you don't stumble from a winning position into one you would rather avoid, etc...
But adding EGTB probes only at the root node does not slow you down at all. So the case (a) positions will work as they do now, and the case (b) positions will always be won no matter the time control.

Unless the (b) cases are really, really rare, I don't see how the strength gain from probing EGTBs at the root could be negligible.
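Root-only probing, as discussed throughout this thread, amounts to: if the root position and all its successors are in the tablebase, play a move with the best tablebase outcome and skip the search. A toy sketch; positions are just labels, the dict stands in for a real tablebase, and `probe`/`root_tb_move` are hypothetical names, not any real probing library's API:

```python
# Toy "tablebase": DTM in plies from the side to move's point of view
# (negative = that side is losing, None = position not in the tablebase).
TB = {"root": 5, "a": -4, "b": -6, "c": 2}

def probe(pos):
    return TB.get(pos)

def root_tb_move(pos, moves):
    """moves maps a move name to its successor position. If the root and
    every successor are covered by the TB, return the move that leaves the
    opponent losing fastest (most negative DTM for them); otherwise return
    None to fall back to the normal search."""
    if probe(pos) is None:
        return None
    best_move, best_dtm = None, None
    for move, child in moves.items():
        dtm = probe(child)
        if dtm is None:
            return None  # incomplete coverage: search normally
        if best_dtm is None or dtm < best_dtm:
            best_move, best_dtm = move, dtm
    return best_move

print(root_tb_move("root", {"Kb1": "a", "Kb2": "b", "Kc1": "c"}))  # Kb2
```

This is the cheap variant rbarreira argues for: it costs one probe per root move and nothing inside the search, at the price of helping only when the game actually reaches a tablebase position at the root.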