Tablebase Computation in Real Time

I play chess at the club level and have been fascinated recently by the new Thoresen Chess Engines Competition.
In watching some of the very long, very blocked endgames, it occurred to me that much of the computation performed by the engines was highly repetitious; that is to say, not only would positions transpose between different move orders, but the engines would also be repeating analyses performed for earlier moves.
So, I ask, why not a custom tablebase built on the fly?
To illustrate, it is my understanding that 7-man tablebases do not exist due to their immense size, but that the 6-man tablebase, KRPkrp, does exist. An upper limit to the size of KRPkrp is 64*63*48*61*60*48*2 = 6.8E10 (there will be fewer than this due to illegal positions).
With that number in mind, it is understandable that a complete KRPPkrp tablebase does not exist; but at game-time you don't need the entire tablebase, because you know the initial positions of the pawns. For instance, in the (trivial) position where white's pawns are at a7 and h7 and black's is at d2, there are only three possible squares for each of white's pawns and four for black's (counting the promotion squares and, of course, allowing for the possibility of capture), cutting down the size of the tablebase considerably.
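To make the arithmetic concrete, a rough sketch in C++: the restricted counts of 3, 3 and 4 squares come from the a7/h7/d2 example above, and both pawn-slice figures ignore collisions between pawns, so they are loose upper bounds only.

Code:
#include <cstdint>
#include <iostream>

int main() {
    // Upper bound on KRPkrp positions, as computed above:
    // WK * BK * WP * WR * BR * BP * side-to-move.
    std::uint64_t krpkrp = 64ULL * 63 * 48 * 61 * 60 * 48 * 2;
    std::cout << "KRPkrp upper bound:        " << krpkrp << '\n';   // ~6.8e10

    // With the pawn structure fixed at game time (a7, h7 vs. d2),
    // the pawn "slice" factor collapses from 48*48*48 to 3*3*4.
    std::uint64_t unrestricted_slices = 48ULL * 48 * 48;
    std::uint64_t restricted_slices   = 3ULL * 3 * 4;
    std::cout << "pawn slices, unrestricted: " << unrestricted_slices << '\n';
    std::cout << "pawn slices, restricted:   " << restricted_slices << '\n';
    return 0;
}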
Additionally, it should not be necessary for the computation to be complete in order to be useful. If any of the pawns promotes and is not instantly recaptured (or there isn't some kind of silly mate), it may be taken for granted that promotion equals victory; the engine's normal evaluation functions could take over given such a condition.
Thus, the tablebase I have envisaged wouldn't necessarily return only "Mate in X" or "Conversion in Y" values; it would also return evaluations: "+Z in A".
Then, while playing, the engine would not be recomputing prior analysis from scratch; it would be extending the tablebase, going to each evaluated position and creating new entries from that starting point.
This could be extended to large numbers of pawns in locked positions, simply because the initial pawn position limits the number of pawn positions to be examined.
So my question is: do any engines do anything like this? If not, what are the hardware/software constraints that prevent such a thing?
And please excuse my ignorance! I'm new to computer chess!
-
rbarreira
Re: Tablebase Computation in Real Time
You have just reinvented transposition tables
http://chessprogramming.wikispaces.com/ ... tion+Table
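For readers following the link: a minimal sketch of what a transposition table is, assuming Zobrist hashing and an always-replace scheme; the field layout is illustrative rather than any particular engine's.

Code:
#include <cstdint>
#include <vector>

// A minimal transposition-table sketch: one entry per slot, power-of-two
// table size, "always replace" on store.  Real engines also keep a bound
// type (exact / lower / upper), the best move, and an age field.
struct TTEntry {
    std::uint64_t key   = 0;    // Zobrist hash of the position
    int           depth = -1;   // depth the stored score is valid for
    int           score = 0;    // score backed up by the forward search
};

class TranspositionTable {
public:
    explicit TranspositionTable(std::size_t sizePow2)
        : table_(sizePow2), mask_(sizePow2 - 1) {}

    void store(std::uint64_t key, int depth, int score) {
        TTEntry& e = table_[key & mask_];
        e.key = key; e.depth = depth; e.score = score;
    }

    // Returns the stored entry if this exact position was searched before.
    const TTEntry* probe(std::uint64_t key) const {
        const TTEntry& e = table_[key & mask_];
        return (e.key == key && e.depth >= 0) ? &e : nullptr;
    }

private:
    std::vector<TTEntry> table_;
    std::size_t          mask_;
};

The engine probes before searching a node and stores the result afterwards; that is the sense in which earlier analysis is reused rather than recomputed.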
-
jiHymas
Re: Tablebase Computation in Real Time
rbarreira wrote: You have just reinvented transposition tables :wink:
http://chessprogramming.wikispaces.com/ ... tion+Table

Thank you! It seems I'm fifty years behind ... but moving fast!
Now I have to ponder what happened in the Vitruvius - Quazar game in round 2 of stage 1 of the 2013 nTCEC, which featured 50 moves of basically plan-less play (as far as I could tell), but which seemed to me at the time to be tailor-made for my brilliant and novel idea.
-
hgm
Re: Tablebase Computation in Real Time
Actually, transposition tables store results of normal (forward) search, not tablebase results (tablebases are created by retrograde analysis).
The idea to build tablebases 'on the fly', based on the limited number of 'P-slices' that can still be used from the current position, is quite old. I posted several times about it in the past. Although it is still on my to-do list to equip my engine with this, it never moved up to a position where I actually would work on it. There is little incentive to do this as long as my engine is still crushed long before it ever reaches an end-game. You won't gain many points by building tablebases for positions where you are a Rook and two Queens behind...
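To unpack "retrograde analysis" for readers who haven't seen it: a minimal sketch of the backward sweep on a tiny, hypothetical game graph. Nodes stand in for positions, and all the real work of a tablebase generator (move generation, per-pawn-slice indexing, symmetry, stalemate handling) is deliberately left out.

Code:
#include <cstdio>
#include <queue>
#include <vector>

// Retrograde analysis in the abstract: positions are nodes of a game
// graph, edges are moves, and values are backed up from terminal losses.
// In this toy, a node with no moves counts as a loss for the side to
// move (think: checkmated); draws by stalemate/repetition are ignored.
enum class Value { Unknown, Win, Loss };   // Unknown == draw at the end

int main() {
    // Hypothetical graph: node 3 has no moves, i.e. a terminal loss.
    std::vector<std::vector<int>> succ = {
        {1, 2},   // from 0 you can move to 1 or 2
        {3},      // from 1 only to 3
        {0},      // from 2 back to 0
        {}        // 3: terminal loss for the side to move
    };
    const int n = static_cast<int>(succ.size());

    std::vector<std::vector<int>> pred(n);
    std::vector<int> unresolved(n);        // moves not yet proven won-for-opponent
    for (int u = 0; u < n; ++u) {
        unresolved[u] = static_cast<int>(succ[u].size());
        for (int v : succ[u]) pred[v].push_back(u);
    }

    std::vector<Value> value(n, Value::Unknown);
    std::vector<int>   dist(n, 0);         // plies to the end
    std::queue<int>    q;

    for (int v = 0; v < n; ++v)
        if (succ[v].empty()) { value[v] = Value::Loss; q.push(v); }

    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : pred[v]) {
            if (value[u] != Value::Unknown) continue;
            if (value[v] == Value::Loss) {
                // u can move into a position lost for the opponent: win.
                value[u] = Value::Win;  dist[u] = dist[v] + 1;  q.push(u);
            } else if (--unresolved[u] == 0) {
                // every move from u reaches a position won for the opponent.
                value[u] = Value::Loss; dist[u] = dist[v] + 1;  q.push(u);
            }
        }
    }

    for (int v = 0; v < n; ++v) {
        if (value[v] == Value::Win)
            std::printf("node %d: win in %d\n", v, dist[v]);
        else if (value[v] == Value::Loss)
            std::printf("node %d: loss in %d\n", v, dist[v]);
        else
            std::printf("node %d: draw\n", v);
    }
    return 0;
}

Building such a table "on the fly" would amount to running this backward sweep only over the pawn slices still reachable from the current position.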
-
jiHymas
Re: Tablebase Computation in Real Time
hgm wrote: There is little incentive to do this as long as my engine is still crushed long before it ever reaches an end-game. You won't gain many points by building tablebases for positions where you are a Rook and two Queens behind...

Not enough expected ELO per programming hour sounds like a very good explanation to me!
My own career has focussed on programming quantitative investment strategies, which of course is all about comparing evaluation vectors, with none of the deterministic element there is in chess. A chess engine, though, could be quite a worthy retirement project!
-
hgm
Re: Tablebase Computation in Real Time
Be careful! It is very addictive... 
-
Volker Annuss
Re: Tablebase Computation in Real Time
jiHymas wrote: My own career has focussed on programming quantitative investment strategies, which of course is all about comparing evaluation vectors, with none of the deterministic element there is in chess.

There are some problems in chess programming that can be seen as investment strategies:
- Which move should be extended / reduced / pruned in search.
- How much time should be used for a position. (The more general approach to the obvious/easy move problem currently discussed here.)
-
jiHymas
Re: Tablebase Computation in Real Time
Volker Annuss wrote: There are some problems in chess programming that can be seen as investment strategies:

I will be most interested to learn just what techniques cross over. One thing that might is the concept of risk - and perhaps I will be only 49 years behind the state of the art on this one!
For instance, when comparing Security A (which I own) to Security B (which I might swap into), I might determine that the "Reward" of B is higher than A, e.g., that B is expected to outperform A.
In a great many cases, however, the programme will not generate a swap signal, because the two instruments are too dissimilar: my programme works best at comparing apples to apples and it knows this. The programme wants more expected Reward as the dissimilarity increases.
I suppose there's the potential for this concept to cross over into chess programming: a programme could reject the "best" move on the grounds that there is a material imbalance (e.g., exchanging R for BPP might look good, but the programme is evaluating apples vs. oranges); or perhaps because the evaluation vector of the position is too unbalanced (e.g., it's less risky and therefore better to have created a backward pawn (+0.25) than to have doubled rooks (+1.00) at the expense of a blocked bishop (-0.75)).
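Purely to make that "apples vs. oranges" rule concrete, a toy sketch: a candidate move must beat the incumbent by a margin that grows with how dissimilar the two evaluation vectors are. The features, numbers and margin formula are all invented for the illustration; no engine is claimed to work this way.

Code:
#include <cmath>
#include <cstdio>

// Toy "risk margin" idea: the more dissimilar two evaluation vectors
// are, the bigger the scalar edge required before swapping.
struct EvalVector {
    double material;        // in pawns
    double pawnStructure;
    double pieceActivity;
    double kingSafety;
};

double collapse(const EvalVector& e) {            // scalar evaluation
    return e.material + e.pawnStructure + e.pieceActivity + e.kingSafety;
}

double dissimilarity(const EvalVector& a, const EvalVector& b) {
    // How differently the two positions get their value (L1 distance).
    return std::fabs(a.material - b.material) +
           std::fabs(a.pawnStructure - b.pawnStructure) +
           std::fabs(a.pieceActivity - b.pieceActivity) +
           std::fabs(a.kingSafety - b.kingSafety);
}

// Accept the "better" move only if its edge exceeds a risk margin
// proportional to the dissimilarity of the resulting positions.
bool acceptSwap(const EvalVector& current, const EvalVector& candidate,
                double riskAversion = 0.25) {
    double edge = collapse(candidate) - collapse(current);
    return edge > riskAversion * dissimilarity(current, candidate);
}

int main() {
    // "Exchange a rook for bishop + two pawns": roughly equal on the
    // scalar, but the vectors differ a lot, so the swap is rejected.
    EvalVector keepRook {0.0, 0.3, 0.4, 0.2};
    EvalVector takeBPP  {0.1, 0.5, 0.1, 0.3};
    std::printf("swap accepted: %s\n",
                acceptSwap(keepRook, takeBPP) ? "yes" : "no");
    return 0;
}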
-
Edmund
Re: Tablebase Computation in Real Time
jiHymas wrote: I will be most interested to learn just what techniques cross over. One thing that might is the concept of risk - and perhaps I will be only 49 years behind the state of the art on this one!

The problem with your idea in computer chess is that only single values are propagated. When you receive a value for a certain position it is just a prediction, but there is no way of giving a distribution over the probability of possible outcomes. Alpha-beta just doesn't work that way.
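To spell out what "only single values are propagated" means: a bare-bones negamax with alpha-beta over a hypothetical, hand-built toy tree. Real engines generate moves and call an evaluation function; the point here is just that a single scalar per node is all that flows back up the tree.

Code:
#include <algorithm>
#include <cstdio>
#include <vector>

// Bare-bones negamax with alpha-beta over an explicit toy tree.
struct Node {
    int leafScore;                  // meaningful only when children is empty
    std::vector<Node> children;     // empty => leaf
};

int negamax(const Node& n, int alpha, int beta) {
    if (n.children.empty())
        return n.leafScore;         // one scalar, nothing else
    int best = -1000000;
    for (const Node& child : n.children) {
        int score = -negamax(child, -beta, -alpha);   // flip point of view
        best  = std::max(best, score);
        alpha = std::max(alpha, score);
        if (alpha >= beta) break;   // cutoff: remaining moves cannot matter
    }
    return best;
}

int main() {
    // Root with two moves; each reply position has two leaf replies,
    // scored from the point of view of the side to move at the leaf.
    Node root{0, {
        Node{0, {Node{+3, {}}, Node{-1, {}}}},
        Node{0, {Node{+5, {}}, Node{-7, {}}}},
    }};
    std::printf("root score: %d\n", negamax(root, -1000000, 1000000));
    return 0;
}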
Some links to papers you might find interesting:
www.ru.is/faculty/yngvi/pdf/BjornssonM00.pdf
http://citeseerx.ist.psu.edu/viewdoc/su ... 1.1.32.176
http://citeseerx.ist.psu.edu/viewdoc/su ... 1.102.6108
http://www.top-5000.nl/ps/B-Star%20Prob ... Search.pdf
-
jiHymas
Re: Tablebase Computation in Real Time
Edmund wrote: The problem with your idea in computer chess is that only single values are propagated.

Thank you very much for the links!
The one I found most interesting was Junghanns' Fuzzy Numbers in Search to Express Uncertainty, which expresses a version of the "risk" idea.
You are obviously far more familiar with the chess engine literature than I am, and it may be that I won't find out why my idea won't work until I've gotten my hands dirty and tried it. All I can say is that I have found it best in quantitative investment programming to propagate the vectors, using differences between these vectors as a measure of the risk of the potential trade, and not to collapse the vectors into a scalar until the last possible moment in the programming - when the vectors are collapsed into a binary 'execute trade or not' decision.
So it's the propagation of scalars in computer chess that I find of interest. Why only scalars? Why not the entire vector (or perhaps some reasonably condensed version)?
A nuance that occurred to me after reading your post and scanning the papers you very kindly linked was that the risk aversion I'm describing could - potentially - apply only in superior positions. It could turn out that in an inferior position, it would be optimal to prefer evaluation vectors that have a lot of contradictory elements; in human terms this might (might!) be referred to as 'seeking complications'.