Crafty 23.0 has been released

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

Peter Skinner
Posts: 1763
Joined: Sun Feb 26, 2006 1:49 pm
Location: Edmonton, Alberta, Canada
Full name: Peter Skinner

Crafty 23.0 has been released

Post by Peter Skinner »

Hello everyone,

We have released Crafty v23.0 to the FTP site, and I have built Win32/64 builds for those who cannot compile it themselves.

Please also download the books from my site, as this version uses a new book scheme that is not compatible with previous versions.

My Crafty site

On behalf of the Crafty team,

Peter
I was kicked out of Chapters because I moved all the Bibles to the fiction section.
Dr.Wael Deeb
Posts: 9773
Joined: Wed Mar 08, 2006 8:44 pm
Location: Amman,Jordan

Re: Crafty 23.0 has been released

Post by Dr.Wael Deeb »

Thanks a lot Peter, you kept your promise about the new opening book....
Dr.D
No one can hit as hard as life. But it ain’t about how hard you can hit. It’s about how hard you can get hit and keep moving forward. How much you can take and keep moving forward….
Denis P. Mendoza
Posts: 415
Joined: Fri Dec 15, 2006 9:46 pm
Location: Philippines

Re: Crafty 23.0 has been released

Post by Denis P. Mendoza »

Peter Skinner wrote:Hello everyone,

We have released Crafty v23.0 to the FTP site, and I have built Win32/64 builds for those who cannot compile it themselves.

Please also download the books from my site, as this version uses a new book scheme that is not compatible with previous versions.

My Crafty site

On behalf of the Crafty team,

Peter
Thanks, I was about to ask why I couldn't use the old book format; while profiling the binary here I even got some crashes. I forgot to read this, as I was in a hurry:

Code:

 *                                                                             *
 *    23.0   Essentially a cleaned up 22.9 version.  Comments have been        *
 *           reviewed to make sure they are consistent with what is actually   *
 *           done in the program.  Major change is that the random numbers     *
 *           used to produce the Zobrist hash signature are now statically     *
 *           initialized which eliminates a source of compatibility issues     *
 *           where a different stream of random numbers is produced if an      *
 *           architecture has some feature that changes the generator, such    *
 *           as a case on an older 30/36 bit word machine.  The issue with     *
 *           this change is that the old binary books are not compatible and   *
 *           need to be re-created with the current random numbers.  The       *
 *           "lockless hash table" idea is back in.  It was removed because    *
 *           the move from the hash table is recognized as illegal when this   *
 *           is appropriate, and no longer causes crashes.  However, the above *
 *           pawn hash issue showed that this happens enough that it is better *
 *           to avoid any error at all, including the score, for safety.  We   *
 *           made a significant change to the parallel search split logic in   *
 *           this version.  We now use a different test to limit how near the  *
 *           tips we split.  This test measures how large the sub-tree is for  *
 *           the first move at any possible split point, and requires that     *
 *           this be at least some minimum number of nodes before a split can  *
 *           be considered at this node.  The older approach, which based this *
 *           decision on remaining search depth at a node led to some cases    *
 *           where the parallel search overhead was excessively high, or even  *
 *           excessively low (when we chose to not split frequently enough).   *
 *           This version appears to work well on a variety of platforms, even *
 *           though NUMA architectures may well need additional tuning of this *
 *           parameter (smpsn) as well as (smpgroup) to try to contain most    *
 *           splits on a single NUMA node where memory is local.  I attempted  *
 *           to automate this entire process, and tune for all sorts of plat-  *
 *           forms, but nothing worked for the general case, which leaves the  *
 *           current approach.  When I converted back to threads from          *
 *           processes, I forgot to restore the alignment for the hash/pawn-   *
 *           hash tables.  The normal hash table needs to be on a 16 byte      *
 *           boundary, which normally happens automatically, but pawn hash     *
 *           entries should be on a 32 byte boundary to align them properly in *
 *           cache to avoid splitting an entry across two cache blocks and     *
 *           hurting performance.  New rook/bishop cache introduced to limit   *
 *           overhead caused by mobility calculations.  If the ranks/files the *
 *           rook is on are the same as the last time we evaluated a rook on   *
 *           this specific square, we can reuse the mobility score with no     *
 *           calculation required.  The code for "rook behind passed pawns"    *
 *           was moved to EvaluatePassedPawns() so that it is only done when   *
 *           there is a passed pawn on the board, not just when rooks are      *
 *           present.  Book learning has been greatly cleaned up and           *
 *           simplified.  The old "result learning" (which used the game       *
 *           result to modify the book) and "book learning" (which used the    *
 *           first N search scores to modify the book) were redundant, since   *
 *           result learning would overwrite whatever book learning did.  The  *
 *           new approach uses the game result (if available) to update the    *
 *           book database when the program exits or starts a new game.  If    *
 *           a result is not available, it will then rely on the previous      *
 *           search results so that it has some idea of whether this was a     *
 *           good opening or not, even if the game was not completed.  Minor   *
 *           LMR bug in SearchRoot() could do a PVS fail-high research using   *
 *           the wrong depth (off by -1) because of the LMR reduction that had *
 *           been set.  The normal search module had this correct, but the     *
 *           SearchRoot() module did not.  EvaluateDevelopment() was turned    *
 *           off for a side after castling.  This caused a problem in that we  *
 *           do things like checking for a knight blocking the C-pawn in queen *
 *           pawn openings.  Unfortunately, the program could either block the *
 *           pawn and then castle, which removed the blocked pawn penalty, or  *
 *           it could castle first and then block the pawn, making it more     *
 *           difficult to develop the queen-side.  We now enable this code     *
 *           always and never disable it.  This disable was done years ago     *
 *           when the castling evaluation of Crafty would scan the move list   *
 *           to see if crafty had castled, when it noticed castling was no     *
 *           longer possible.  Once it had castled, we disabled this to avoid  *
 *           the lengthy loop and the overhead it caused.  Today the test is   *
 *           quite cheap anyway, and I measured no speed difference with it on *
 *           or off, to speak of.  Now we realize that the knight in front of  *
 *           the C-pawn is bad and needs to either not go there, or else move  *
 *           out of the way.                                                   *
 *                                                                             *
 *******************************************************************************
Well-documented!
gerold
Posts: 10121
Joined: Thu Mar 09, 2006 12:57 am
Location: van buren,missouri

Re: Crafty 23.0 has been released

Post by gerold »

Peter Skinner wrote:Hello everyone,

We have released Crafty v23.0 to the FTP site, and I have built Win32/64 builds for those who cannot compile it themselves.

Please also download the books from my site, as this version uses a new book scheme that is not compatible with previous versions.

My Crafty site

On behalf of the Crafty team,

Peter
Thanks Peter, win32 working fine in Arena.

Best to you,

Gerold.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Crafty 23.0 has been released

Post by bob »

Denis P. Mendoza wrote:
Peter Skinner wrote:Hello everyone,

We have released Crafty v23.0 to the FTP site, and I have built Win32/64 builds for those who cannot compile it themselves.

Please also download the books from my site, as this version uses a new book scheme that is not compatible with previous versions.

My Crafty site

On behalf of the Crafty team,

Peter
Thanks, I was about to ask why I couldn't use the old book format; while profiling the binary here I even got some crashes. I forgot to read this, as I was in a hurry:

Code:

 *                                                                             *
 *    23.0   Essentially a cleaned up 22.9 version.  Comments have been        *
 *           reviewed to make sure they are consistent with what is actually   *
 *           done in the program.  Major change is that the random numbers     *
 *           used to produce the Zobrist hash signature are now statically     *
 *           initialized which eliminates a source of compatibility issues     *
 *           where a different stream of random numbers is produced if an      *
 *           architecture has some feature that changes the generator, such    *
 *           as a case on an older 30/36 bit word machine.  The issue with     *
 *           this change is that the old binary books are not compatible and   *
 *           need to be re-created with the current random numbers.  The       *
 *           "lockless hash table" idea is back in.  It was removed because    *
 *           the move from the hash table is recognized as illegal when this   *
 *           is appropriate, and no longer causes crashes.  However, the above *
 *           pawn hash issue showed that this happens enough that it is better *
 *           to avoid any error at all, including the score, for safety.  We   *
 *           made a significant change to the parallel search split logic in   *
 *           this version.  We now use a different test to limit how near the  *
 *           tips we split.  This test measures how large the sub-tree is for  *
 *           the first move at any possible split point, and requires that     *
 *           this be at least some minimum number of nodes before a split can  *
 *           be considered at this node.  The older approach, which based this *
 *           decision on remaining search depth at a node led to some cases    *
 *           where the parallel search overhead was excessively high, or even  *
 *           excessively low (when we chose to not split frequently enough).   *
 *           This version appears to work well on a variety of platforms, even *
 *           though NUMA architectures may well need additional tuning of this *
 *           parameter (smpsn) as well as (smpgroup) to try to contain most    *
 *           splits on a single NUMA node where memory is local.  I attempted  *
 *           to automate this entire process, and tune for all sorts of plat-  *
 *           forms, but nothing worked for the general case, which leaves the  *
 *           current approach.  When I converted back to threads from          *
 *           processes, I forgot to restore the alignment for the hash/pawn-   *
 *           hash tables.  The normal hash table needs to be on a 16 byte      *
 *           boundary, which normally happens automatically, but pawn hash     *
 *           entries should be on a 32 byte boundary to align them properly in *
 *           cache to avoid splitting an entry across two cache blocks and     *
 *           hurting performance.  New rook/bishop cache introduced to limit   *
 *           overhead caused by mobility calculations.  If the ranks/files the *
 *           rook is on are the same as the last time we evaluated a rook on   *
 *           this specific square, we can reuse the mobility score with no     *
 *           calculation required.  The code for "rook behind passed pawns"    *
 *           was moved to EvaluatePassedPawns() so that it is only done when   *
 *           there is a passed pawn on the board, not just when rooks are      *
 *           present.  Book learning has been greatly cleaned up and           *
 *           simplified.  The old "result learning" (which used the game       *
 *           result to modify the book) and "book learning" (which used the    *
 *           first N search scores to modify the book) were redundant, since   *
 *           result learning would overwrite whatever book learning did.  The  *
 *           new approach uses the game result (if available) to update the    *
 *           book database when the program exits or starts a new game.  If    *
 *           a result is not available, it will then rely on the previous      *
 *           search results so that it has some idea of whether this was a     *
 *           good opening or not, even if the game was not completed.  Minor   *
 *           LMR bug in SearchRoot() could do a PVS fail-high research using   *
 *           the wrong depth (off by -1) because of the LMR reduction that had *
 *           been set.  The normal search module had this correct, but the     *
 *           SearchRoot() module did not.  EvaluateDevelopment() was turned    *
 *           off for a side after castling.  This caused a problem in that we  *
 *           do things like checking for a knight blocking the C-pawn in queen *
 *           pawn openings.  Unfortunately, the program could either block the *
 *           pawn and then castle, which removed the blocked pawn penalty, or  *
 *           it could castle first and then block the pawn, making it more     *
 *           difficult to develop the queen-side.  We now enable this code     *
 *           always and never disable it.  This disable was done years ago     *
 *           when the castling evaluation of Crafty would scan the move list   *
 *           to see if crafty had castled, when it noticed castling was no     *
 *           longer possible.  Once it had castled, we disabled this to avoid  *
 *           the lengthy loop and the overhead it caused.  Today the test is   *
 *           quite cheap anyway, and I measured no speed difference with it on *
 *           or off, to speak of.  Now we realize that the knight in front of  *
 *           the C-pawn is bad and needs to either not go there, or else move  *
 *           out of the way.                                                   *
 *                                                                             *
 *******************************************************************************
Well-documented!
There were a lot of changes, and there is a lot left to do. We are working on this every day, and playing about 5 million test games a week to test new changes before we release them. :)
swami
Posts: 6640
Joined: Thu Mar 09, 2006 4:21 am

Re: Crafty 23.0 has been released

Post by swami »

Thanks Bob and Peter. Crafty has recently been making a lot of progress, unlike the Crafty of yesteryear. ;)
Gian-Carlo Pascutto
Posts: 1243
Joined: Sat Dec 13, 2008 7:00 pm

Re: Crafty 23.0 has been released

Post by Gian-Carlo Pascutto »

Denis P. Mendoza wrote:

Code:

 *                                                                             *
 *    23.0   We made a significant change to the parallel search split logic in *
 *           this version.  We now use a different test to limit how near the  *
 *           tips we split.  This test measures how large the sub-tree is for  *
 *           the first move at any possible split point, and requires that     *
 *           this be at least some minimum number of nodes before a split can  *
 *           be considered at this node.  
So, Bob does actually listen sometimes :)

http://talkchess.com/forum/viewtopic.ph ... 711#247711
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Crafty 23.0 has been released

Post by bob »

Gian-Carlo Pascutto wrote:
Denis P. Mendoza wrote:

Code:

 *                                                                             *
 *    23.0   We made a significant change to the parallel search split logic in *
 *           this version.  We now use a different test to limit how near the  *
 *           tips we split.  This test measures how large the sub-tree is for  *
 *           the first move at any possible split point, and requires that     *
 *           this be at least some minimum number of nodes before a split can  *
 *           be considered at this node.  
So, Bob does actually listen sometimes :)

http://talkchess.com/forum/viewtopic.ph ... 711#247711
Not sure what you mean. I had previously explained that I had tested many options: limiting the distance from the leaves in absolute plies, limiting it in proportion to total search depth, and limiting it in pure nodes searched. All had one fatal drawback for what I was trying to do, which was to quantify the "width" of a node so that I could adjust another important limit, the "thread_group" value, which says "no more than N threads can split at one ply". As I reported here, I never found a solution that worked well enough to be useful.

So I am not sure what I am supposed to have listened to??? I never was able to solve that specific problem. The current node limit was something that had already been tried, but it did not help with the part of the tuning I was most interested in, as I wanted something that could self-adjust as the game tree changes from wide/bushy to deep/narrow. The "node approach" was what led to my trying the more complex idea of computing "w" from the nodes searched in a sub-tree, where "w" was a good way of limiting the number of threads at any one node. But the calculation simply did not work well for most nodes, and gave misleading "w" values that were over or under the truth by a big margin, unfortunately.

The current approach is not very good either. Too many positions where Crafty searches significantly faster with bigger N, or with smaller N, which means the static node limit is still wrong, and I'm playing with that whenever I think of something new to try. And this is not even considering the min_thread_group problem at all...