learning from tablebases during search idea

Uri Blass
My idea for using tablebases is the following.
1) You do not probe the tablebases at every node, but only when the remaining depth is bigger than some constant, and you store all tablebase positions that you probe in the hash table.
2) For every new position for which you probe the tablebases, you also calculate the normal static evaluation in centipawns without probing the tablebases (I denote it c) and update one of the following arrays:
1) num_of_wins_of_white[c] if White wins based on the tablebases
2) num_of_draws_of_white[c] if the position is a draw based on the tablebases
3) num_of_losses_of_white[c] if White loses based on the tablebases.
Later, when you reach a position where you do not probe the tablebases, you use the arrays to get an estimate of the tablebase score and use it to adjust your evaluation; a sketch of the bookkeeping follows.
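A minimal C++ sketch of this bookkeeping, assuming a hypothetical TBResult type standing in for whatever the engine's probing code returns; the names record_tb_probe, bucket and MAX_CP, and the clamping range, are illustrative and not part of the proposal. (The num_of_positions total used in the example below is simply the sum of the three counters.)

Code:
#include <array>

// Hypothetical tablebase outcome from White's point of view,
// as returned by whatever probing code the engine already has.
enum class TBResult { WhiteWin, Draw, WhiteLoss };

constexpr int MAX_CP  = 1000;            // clamp |eval| to +/-10 pawns
constexpr int BUCKETS = 2 * MAX_CP + 1;  // indices 0..2000 for -1000..+1000

// One counter per tablebase outcome, indexed by the static eval bucket.
std::array<long, BUCKETS> num_of_wins_of_white{};
std::array<long, BUCKETS> num_of_draws_of_white{};
std::array<long, BUCKETS> num_of_losses_of_white{};

// Map a centipawn score (White's point of view) to an array index.
int bucket(int cp) {
    if (cp > MAX_CP)  cp = MAX_CP;
    if (cp < -MAX_CP) cp = -MAX_CP;
    return cp + MAX_CP;
}

// Call this whenever a position is actually probed in the tablebases;
// c is the normal static evaluation of the same position, computed
// without any tablebase knowledge.
void record_tb_probe(TBResult result, int c) {
    switch (result) {
        case TBResult::WhiteWin:  ++num_of_wins_of_white[bucket(c)];   break;
        case TBResult::Draw:      ++num_of_draws_of_white[bucket(c)];  break;
        case TBResult::WhiteLoss: ++num_of_losses_of_white[bucket(c)]; break;
    }
}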
For example, you can be almost sure that 200 centipawns for White means a draw if you have something like
num_of_positions_of_white[200]=101
num_of_wins_of_white[200]=1
num_of_draws_of_white[200]=100
num_of_losses_of_white[200]=0
It means that you can translate the 200 centipawns for White into something close to the draw score, like 5 centipawns for White.
A possible problem is that num_of_positions_of_white[200] may not be big enough to get a good estimate; in this case you can count the evaluation not in 1/100 pawns but in 1/10 pawns and round it for the array.
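At nodes where the probe is skipped, the translation could look like the following sketch, reusing the counters and bucket() from the sketch above. The sample threshold and the linear mapping from expected score back to centipawns are my own assumptions for illustration; the idea only needs some monotonic translation. The last function shows the coarser 1/10-pawn indexing suggested above for sparse data.

Code:
// Expected score for White (0.0..1.0) among probed positions whose
// static eval fell into the same bucket as c; returns a negative
// value when the sample is too small to trust (threshold is arbitrary).
double expected_score(int c, long min_samples = 50) {
    int i = bucket(c);
    long w = num_of_wins_of_white[i];
    long d = num_of_draws_of_white[i];
    long l = num_of_losses_of_white[i];
    long n = w + d + l;
    if (n < min_samples) return -1.0;
    return (w + 0.5 * d) / n;
}

// Translate a static eval through the statistics. The linear map
// (p = 0.5 -> 0 cp, p = 1.0 -> +200 cp) is one arbitrary choice; with
// the 1/100/0 example above it turns +200 into roughly +2 centipawns,
// close to the draw score.
int adjusted_eval(int c) {
    double p = expected_score(c);
    if (p < 0.0) return c;                     // not enough data: keep eval
    return static_cast<int>((p - 0.5) * 400.0);
}

// Sparse-data variant: round the eval to the nearest 1/10 pawn first,
// so ten neighbouring centipawn values share one set of counters.
int bucket_decipawn(int cp) {
    int rounded = ((cp >= 0 ? cp + 5 : cp - 5) / 10) * 10;
    return bucket(rounded);
}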
CRoberson
Re: learning from tablebases during search idea
Interesting idea, but I think I see a flaw.
If the tree root has many pieces (more than the tablebases can handle), then I see many draws down certain branches. I want to avoid those branches. Your way would have me assuming that all the other branches (the ones I now want to consider better than a draw) are drawn when they may not be.
Uri Blass
Re: learning from tablebases during search idea
If you are afraid of draw evaluations that are based on too small a number of cases, you can put initial numbers in the arrays.
If 200 translates to an expected result of 75% based on statistics over all the tablebases (and not only the tablebases in the search space), then you can start with 100 faked probes as follows (maybe it is better to start with fewer faked probes; I only show the idea):
num_of_positions_of_white[200]=100
num_of_wins_of_white[200]=50
num_of_draws_of_white[200]=50
num_of_losses_of_white[200]=0
Later you update the tables after every probe, so if you then get 100 draws and one win you have
num_of_positions_of_white[200]=201
num_of_wins_of_white[200]=51
num_of_draws_of_white[200]=150
num_of_losses_of_white[200]=0
It means that the expected score for White is now (51+0.5*150)/201 ≈ 0.63, which may be translated into something smaller than +2 (maybe +1).
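A sketch of such seeding, again on top of the counters from the first sketch; splitting the faked probes into wins and draws (or losses and draws) so that they reproduce the prior expectation is my own choice of how to realize the "faked probes":

Code:
#include <cmath>

// Seed one bucket with 'weight' faked probes whose expected score
// equals a prior taken from statistics over all tablebase positions
// (not only those met in the current search). With prior = 0.75 and
// weight = 100 this yields exactly the 50 wins / 50 draws above.
void seed_prior(int c, double prior, long weight = 100) {
    int i = bucket(c);
    if (prior >= 0.5) {
        long wins = std::lround((2.0 * prior - 1.0) * weight);
        num_of_wins_of_white[i]  += wins;
        num_of_draws_of_white[i] += weight - wins;
    } else {
        long losses = std::lround((1.0 - 2.0 * prior) * weight);
        num_of_losses_of_white[i] += losses;
        num_of_draws_of_white[i]  += weight - losses;
    }
}

After seed_prior(200, 0.75) and then 100 real draws plus one real win recorded through record_tb_probe, expected_score(200) returns (51+0.5*150)/201 ≈ 0.63, matching the arithmetic above.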