Reducing tablebase size with search

Discussion of chess software programming and technical issues.

Edmund
Posts: 668
Joined: Mon Dec 03, 2007 2:01 pm
Location: Barcelona, Spain
Contact:

Re: Reducing tablebase size with search

Post by Edmund » Tue Apr 08, 2014 12:59 am

rreagan wrote:
Edmund wrote:
syzygy wrote:This idea works, but it is already being used at least in my tables. N=1 and just for captures, otherwise it will get too expensive.
Expensive, but depending on the use case still relevant.
Yes, I was actually curious how this might affect chess analysis from the human perspective. If such a method were viable, we could know optimal play in more endgames. In that case who cares if it takes a week or a month or a year to handle all of the probes and searches if we learn something we didn't know before?
When I was talking about particular use cases of this compression scheme I was not aiming for general chess analysis. For that you care about speed so that you can analyze as many leaf nodes as possible.

Compression might make sense, for example, for endgame tablebase distribution, or for GUIs that only need to assess individual positions one at a time.

syzygy
Posts: 4503
Joined: Tue Feb 28, 2012 10:56 pm

Re: Reducing tablebase size with search

Post by syzygy » Tue Apr 08, 2014 5:54 am

rreagan wrote:
Edmund wrote:
syzygy wrote:This idea works, but it is already being used at least in my tables. N=1 and just for captures, otherwise it will get too expensive.
Expensive, but depending on the use case still relevant.
Yes, I was actually curious how this might affect chess analysis from the human perspective. If such a method were viable, we could know optimal play in more endgames. In that case who cares if it takes a week or a month or a year to handle all of the probes and searches if we learn something we didn't know before?
If you don't care about speed, just run a simple minimax algorithm on a Commodore 64 from the initial position and it will tell you the game-theoretic outcome of chess. (Do not use hash tables, as they introduce inaccuracies.)

And again, N-men tablebases for any N can be distributed by distributing the tablebase generator.

The game of chess does not contain more information than necessary to encode its rules.

If you don't care about speed, chess is directly comparable to tic-tac-toe.
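The point about plain minimax sufficing (given unlimited time) can be illustrated on tic-tac-toe, which the post itself invokes. A minimal sketch in Python, with no transposition tables, in the spirit of the remark about hash tables:

```python
# Plain negamax solver for tic-tac-toe: returns the game-theoretic value
# from the side to move's perspective (1 = win, 0 = draw, -1 = loss).
# No hash tables, exactly as the post (tongue in cheek) prescribes.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return +1 or -1 if that player has three in a row, else 0."""
    for i, j, k in LINES:
        if board[i] != 0 and board[i] == board[j] == board[k]:
            return board[i]
    return 0

def solve(board, player):
    """Game-theoretic value for `player` to move on `board` (a 9-tuple)."""
    w = winner(board)
    if w != 0:
        return w * player          # only the previous mover can have won: -1
    if 0 not in board:
        return 0                   # board full, no winner: draw
    best = -1
    for sq in range(9):
        if board[sq] == 0:
            child = board[:sq] + (player,) + board[sq + 1:]
            best = max(best, -solve(child, -player))
            if best == 1:          # cannot do better than a win
                break
    return best

print(solve((0,) * 9, 1))  # prints 0: tic-tac-toe is a draw with perfect play
```

The same structure, with unbounded time and memory, would determine the value of chess; the tables are only a speed/space trade-off.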

Rein Halbersma
Posts: 686
Joined: Tue May 22, 2007 9:13 am

Re: Reducing tablebase size with search

Post by Rein Halbersma » Tue Apr 08, 2014 6:38 am

syzygy wrote: I compress out "threat positions" in my suicide tablebases (suicide chess has compulsory captures), but I am not sure that I do the same thing. Or actually, I am pretty sure that I don't do the same thing.

What I do is this:
- if the position has one or more captures, play out the captures (resulting in probes into smaller tables). The value stored can be any value (whichever compresses best).
- if the position does not have a capture, try out all "threat" moves that result in a position where the opponent can (and therefore must) capture. If one of the threat moves is winning (i.e. leads to a position in which all the captures by the opponent lose), then any value can be stored (whichever compresses best). If the best threat move draws and the position is a draw, then we can store either a draw or a loss (whichever compresses best).

Finding the values that lead to best compression in my compression scheme is quite likely NP-hard, but I found some heuristics that do quite well.

I only compress out "threat positions" in my 6-piece (suicide) tablebases, as it does slow down accesses (leading to a small search explosion if it needs to be done recursively). It saves about 50%. (Many tables are reduced to zero: all positions that do not have a capture or a winning or drawing threat move are losses.)
One heuristic used in draughts databases is to continue the current run in order to optimize the run-length compression. I think it's NP-hard otherwise, as you noted.
I probably got the idea from Building the Checkers 10-piece Endgame Databases, but what this paper describes seems a bit different:
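The run-continuation heuristic can be sketched as follows. Assume we already know which positions are "don't cares" (captures and threatened captures that a probe-time search will resolve); each such slot is then assigned whatever value keeps the current run going, which a run-length encoder rewards. The function names and data layout here are illustrative, not anyone's actual tablebase format:

```python
# Heuristic from the thread: give "don't care" slots (capture / threatened-
# capture positions that are resolved by search at probe time) the value
# that extends the current run, so run-length encoding compresses better.
# `vals` and `dc` below are toy inputs, not a real database layout.

def fill_dont_cares(values, dont_care):
    """Overwrite each don't-care entry with the preceding value."""
    out = list(values)
    for i in range(len(out)):
        if dont_care[i] and i > 0:
            out[i] = out[i - 1]    # continue the current run
    return out

def rle_runs(seq):
    """Number of runs a run-length encoder would emit for seq."""
    return sum(1 for i in range(len(seq)) if i == 0 or seq[i] != seq[i - 1])

vals = ["W", "L", "W", "W", "D", "W", "W"]
dc   = [False, True, False, False, True, False, False]
filled = fill_dont_cares(vals, dc)
print(rle_runs(vals), "->", rle_runs(filled))  # prints: 5 -> 1
```

Greedily copying the previous value is only one heuristic; as noted above, finding the globally optimal assignment of don't-care values is believed to be NP-hard.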
syzygy wrote:
Rein Halbersma wrote: Whenever you probe a capture or threatened capture position, you then do a search until you reach a quiet position for which the actual WDL or DTC value is stored in the db.
What is a "threatened capture position"? A position where the side not to move would have a capture?

If that's the case, then this trick seems similar to the removal of in-check positions (which I don't do, but which most likely is done by Shredder and I believe by some versions of Robbobases).
Yes, a threatened capture is one where the side not to move can capture.
syzygy wrote:
Rein Halbersma wrote: Because in 8x8 checkers captures are compulsory, kings only capture short-ranged, and men only capture forward, the search tree is rather small and this tradeoff is good for performance. For 10x10 draughts with long-range king captures and backward-capturing men, this optimization does not work as well because the search trees become too large.
Do I understand correctly that those search trees are full of probes into the same database?
That depends: for capture positions, the side to move captures, and so you get a position with fewer pieces. These are typically stored in separate databases. For threatened captures, you play out all the moves for the side to move. Some of those moves will block the pending capture, and you will then look up the resulting position in the same database. Other moves will keep the capture opportunity for the opponent intact, or create new captures for either side, which then also have to be resolved, ultimately in other databases.

BTW, this is what causes the search explosion. It's actually very similar to quiescence search: you just resolve all moves (using alpha-beta of course) until you have a quiet position with no captures pending for either side. Then you apply the evaluation function, or in this case, do the table lookup. For endgames with kings on both sides, you also need to have some kind of repetition detection in the qsearch in order to weed out "perpetual checks".
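The capture-resolving probe described here is essentially a tiny quiescence search. A toy sketch, with the game reduced to a hand-made graph (`capture_moves` and `wdl` are invented illustrative data; a real implementation would generate the compulsory captures, use alpha-beta, probe the database at quiet nodes, and, for king endings, add repetition detection):

```python
# Toy quiescence-style probe: resolve compulsory captures with negamax
# until reaching a quiet position, then "look up" its stored WDL value.
# The game graph and WDL table below are invented for illustration only.

capture_moves = {0: [1, 2], 1: [], 2: [3], 3: []}  # pos -> capture successors
wdl = {1: 1, 3: 0}  # stored WDL at quiet positions (side-to-move view)

def probe(pos):
    caps = capture_moves[pos]
    if not caps:                 # quiet: the database stores a real value
        return wdl[pos]
    # Captures are compulsory: the side to move picks the best one (negamax).
    # A real probe would use alpha-beta and repetition detection here.
    return max(-probe(nxt) for nxt in caps)

print(probe(0))  # prints 0: best capture leads to a drawn quiet position
```

The cost of these recursive lookups is exactly the search explosion mentioned above; whether it pays off depends on how much table space the omitted positions save.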

syzygy
Posts: 4503
Joined: Tue Feb 28, 2012 10:56 pm

Re: Reducing tablebase size with search

Post by syzygy » Tue Apr 08, 2014 8:18 pm

Rein Halbersma wrote:
syzygy wrote:Do I understand correctly that those search trees are full of probes into the same database?
That depends: for capture positions, the side to move captures, and so you get a position with fewer pieces. These are typically stored in separate databases. For threatened captures, you play out all the moves for the side to move. Some of those moves will block the pending capture, and you will then look up the resulting position in the same database. Other moves will keep the capture opportunity for the opponent intact, or create new captures for either side, which then also have to be resolved, ultimately in other databases.
Yes, I meant the search trees for threatened captures. The captures go right into smaller databases, so they don't cause problems. But staying in the same database is going to be costly. Of course, if it saves enough space it may still be worth it.

Cardoso
Posts: 301
Joined: Thu Mar 16, 2006 6:39 pm
Location: Portugal
Full name: Alvaro Cardoso
Contact:

Re: Reducing tablebase size with search

Post by Cardoso » Thu Apr 10, 2014 3:28 pm

Rein Halbersma wrote:
syzygy wrote:
Edmund wrote:
syzygy wrote:This idea works, but it is already being used at least in my tables. N=1 and just for captures, otherwise it will get too expensive.
Expensive, but depending on the use case still relevant. If you only need to store the values for positions where an engine predicts a wrong value, you will get decreasing size for increasing search depth (better compression with increasingly better predictors).
If someone only wants to probe tablebases at the root, then why not?
True, there is a trade-off to be made and where the right balance lies depends on the circumstances.
In 8x8 checkers, it is common that both capture and threatened-capture positions are not probed. They are still stored in the uncompressed db, but are encoded with a bogus value chosen to maximize the current run of their adjacent values. This can significantly improve the overall compression rate. The Chinook papers have all the details.

Whenever you probe a capture or threatened capture position, you then do a search until you reach a quiet position for which the actual WDL or DTC value is stored in the db. Because in 8x8 checkers captures are compulsory, kings only capture short-ranged, and men only capture forward, the search tree is rather small and this tradeoff is good for performance. For 10x10 draughts with long-range king captures and backward-capturing men, this optimization does not work as well because the search trees become too large.
Hi, long time no post ;)

Strictly speaking about 8x8 checkers, I wonder what the correct thing to do is:
1 - do a short alpha-beta search until no captures exist for either side, and then return the value of the TB lookup function.
2 - simply skip probing the TBs in positions with captures for either side.
What's your opinion?
Maybe there's not much of a difference between the two methods.

best regards,
Alvaro

Rein Halbersma
Posts: 686
Joined: Tue May 22, 2007 9:13 am

Re: Reducing tablebase size with search

Post by Rein Halbersma » Thu Apr 10, 2014 8:42 pm

Cardoso wrote: Strictly speaking about 8x8 checkers, I wonder what the correct thing to do is:
1 - do a short alpha-beta search until no captures exist for either side, and then return the value of the TB lookup function.
2 - simply skip probing the TBs in positions with captures for either side.
What's your opinion?
Maybe there's not much of a difference between the two methods.

best regards,
Alvaro
I think a combination of the two strategies you mention makes sense, maybe encoded in a priority function that takes into account expected lookup costs based on current search depth, alpha/beta, capture/threatened capture etc. Getting this tuned is the hard part!
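One way such a priority function might be encoded, purely as an illustration (the categories, thresholds, and function name are invented, and the real tuning is, as said above, the hard part):

```python
# Illustrative probe policy combining the two options from the question:
# near the root, pay for a capture-resolving search before probing; deep in
# the tree, skip the probe and let the normal search sort it out.
# The threshold of 2 plies is made up, not a recommendation.

def probe_policy(ply_from_root, has_capture, has_threatened_capture):
    if not has_capture and not has_threatened_capture:
        return "probe"               # quiet position: value is stored directly
    if ply_from_root <= 2:
        return "resolve-then-probe"  # lookup cost amortized over a big subtree
    return "skip"                    # deep node: search + probe too expensive

print(probe_policy(0, True, False))   # prints: resolve-then-probe
print(probe_policy(8, True, False))   # prints: skip
print(probe_policy(8, False, False))  # prints: probe
```

A tuned version would fold in the alpha/beta window, remaining depth, and measured lookup latency rather than a fixed ply threshold.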
