Three ideas for performance improvements

Discussion of chess software programming and technical issues.

Moderator: Ras

sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Three ideas for performance improvements

Post by sje »

Three ideas for performance improvements:

1) Preloading the pawn structure evaluation transposition table. Let's say that your program has a very computationally expensive pawn structure evaluation routine and that you also have a large transposition table to store the results. Why not pre-calculate the most common N million opening pawn structures and read them into the table before a game starts? Maybe there are other tables that could be preloaded.

2) Transposition table vacuuming. Mark each transposition table entry with a ratchet signature like piece distribution or pawn advancement; when a move is played on the board, a routine (maybe running on a separate thread on a spare core) would clear out all transposition entries that corresponded to positions no longer reachable.

3) Have a program scan through a few million high-level games to build a model that uses material distribution to predict the number of moves remaining in a game. This could be used to better manage time allocation.
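A minimal sketch of idea 3, with all model constants invented purely for illustration; a real engine would fit the model by regression over the game database, as described above:

```c
/* Sketch of idea 3 (constants invented for illustration): predict how
 * many moves remain from the material left on the board, then divide
 * the remaining clock time by that prediction. */

/* Predicted full moves remaining, from the number of non-king men
 * (pieces and pawns, 0..30) still on the board. */
static double predicted_moves_left(int men_on_board)
{
    /* Illustrative linear fit: more material => longer game ahead. */
    return 10.0 + 1.5 * men_on_board;
}

/* Seconds to allocate for the next move. */
static double allocate_time(double clock_seconds, int men_on_board)
{
    return clock_seconds / predicted_moves_left(men_on_board);
}

/* Example: with 5 minutes left in the opening (30 men on the board),
 * allocate_time(300.0, 30) gives roughly 5.45 seconds per move. */
```

In a real implementation the predictor would likely be a table indexed by a richer material signature rather than a single linear term, but the division of remaining time by predicted remaining moves is the core of the idea.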
Allard Siemelink
Posts: 297
Joined: Fri Jun 30, 2006 9:30 pm
Location: Netherlands

Re: Three ideas for performance improvements

Post by Allard Siemelink »

For 2) you may be better off implementing this in your replacement policy: always replace a hash entry if it is unreachable.
For the unreachability test, you could compare:

if (entry.fiftyply < searchroot.fiftyply) --> unreachable

Here, 'fiftyply' is the ply, counted from the beginning of the game, at which the most recent pawn push or capture was made. Since pawn pushes and captures are irreversible, a position whose last irreversible move predates the root's can never occur again.
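A minimal sketch of that replacement rule; the entry layout and function names are invented for illustration:

```c
#include <stdint.h>

/* Sketch of folding the reachability test into the replacement policy
 * (field and function names invented for illustration). */

typedef struct {
    uint64_t key;       /* Zobrist key of the stored position */
    int16_t  score;
    uint16_t fiftyply;  /* game ply of the last pawn push or capture
                           on the path to this position */
} HashEntry;

/* An entry whose last irreversible move predates the root's can never
 * recur in the game, so it may always be overwritten. */
static int unreachable(const HashEntry *e, uint16_t root_fiftyply)
{
    return e->fiftyply < root_fiftyply;
}

/* Store with "always replace unreachable" as the first criterion; an
 * ordinary depth-preferred fallback would follow (omitted here). */
static int try_store(HashEntry *slot, const HashEntry *fresh,
                     uint16_t root_fiftyply)
{
    if (slot->key == 0 || unreachable(slot, root_fiftyply)) {
        *slot = *fresh;
        return 1;       /* replaced */
    }
    return 0;           /* keep old entry (other criteria would apply) */
}
```

The appeal of this approach is that dead entries cost nothing until their slot is actually needed, so no separate sweeping pass over the table is required.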
hgm
Posts: 28353
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Three ideas for performance improvements

Post by hgm »

In Joker I mark all entries used in the most recent search, and then delete those that were not used afterwards. In principle, I would like to do the deletion as part of the replacement algorithm during the next search. But to achieve that I would need more than a single bit to record the state of an entry (not accessed in the last search; accessed in the last search but not yet in this one; accessed in the current search), and I don't have those bits easily available. (I squeeze 7 entries into a 64-byte cache line.)

In practice the overhead is so small that with a 128MB hash you would only notice it in the fastest bullet games, and then your hash size is unrealistically large anyway. So getting rid of the overhead is not very high on my list of priorities. The only thing I did is that when the remaining time drops below 10 seconds, the purging is done only once every 10 searches.
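For illustration, here is one way the multi-state bookkeeping could look if the extra bits were available. This is a sketch under my own assumptions (a mod-4 search counter and invented names), not Joker's actual code:

```c
#include <stdint.h>

/* Sketch of two-bit entry aging (names and scheme invented for
 * illustration).  Each entry records the low two bits of the search
 * counter at its last access, which distinguishes "touched this
 * search", "touched last search", and "stale", so purging can be
 * folded into replacement instead of being a separate pass. */

typedef struct {
    uint64_t key;
    uint8_t  age;   /* low 2 bits of the search counter at last access */
} AgedEntry;

static uint8_t search_id;   /* incremented before every root search */

/* Called whenever an entry is probed or stored during a search. */
static void touch(AgedEntry *e)
{
    e->age = search_id & 3;
}

/* Stale = not touched in this search or the previous one; such entries
 * become the first candidates for replacement.  Note the mod-4 wrap
 * means an untouched entry looks fresh again after four searches, an
 * inherent limitation of a two-bit age field. */
static int is_stale(const AgedEntry *e)
{
    return ((uint8_t)(search_id - e->age) & 3) >= 2;
}
```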