Is this a bug in Crafty 24.0 or...
Moderator: Ras
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Is this a bug in Crafty 24.0 or...
Zenmastur wrote: I notice several people are making claims about how one program or another speculates. I'm not sure that is helpful.
mvk wrote: It is not very helpful to use material-only as criterion for 'solving' a position. We are past that. Modern programs give huge evaluations for non-material aspects. Unlike 17 years ago. Also, I would not put too much faith in the book learning capabilities of Crafty. It hasn't been worked on for ages. From my own testing, it isn't really working either. It appears as if it can't expand its book through learning, only effectively shrink it.
That is all it ever did... It works, but through elimination only, although it does do the inverse as well and recognizes good openings and plays them more frequently. But it is based only on what it has been given as raw PGN input to create the book.
-
Zenmastur
- Posts: 919
- Joined: Sat May 31, 2014 8:28 am
Re: Is this a bug in Crafty 24.0 or...
Zenmastur wrote: I notice several people are making claims about how one program or another speculates. I'm not sure that is helpful.
mvk wrote: It is not very helpful to use material-only as criterion for 'solving' a position....
I don't recall stating that "material-only" evaluations were going to be used. Nor do I recall this being used much in the past. So why are you bringing this into the conversation?
mvk wrote: ... We are past that....
Well, that's good to know. I'll certainly keep that in mind for future reference.
mvk wrote: ... Modern programs give huge evaluations for non-material aspects. Unlike 17 years ago.
In order to make this last claim you must have knowledge of all chess programs written prior to 1997. I know for a fact that you don't. So I think we can safely discard your last statement as unsupportable. Which raises the question: what point are you trying to make?
How do these statements relate to the statements of mine that you've quoted?
I can only assume that you are referring to SF's early analysis of the position, e.g.:
[d]rq2k2r/3n1ppp/p2bNnb1/8/Np6/1B3PP1/PP2Q2P/R1BR2K1 b kq - 0 1
Which probably looks like this one at one ply:
Code: Select all
info depth 1 seldepth 1 score cp 9 nodes 162 nps 162000 time 1 multipv 1 pv f7e6 e2e6 d6e7 d1e1 b8d8 c1f4 f6g8 a1d1 a6a5 a4c5 d7c5 d1d8 a8d8 e6c6 e8f8 e1e7 g8e7 c6c5 d8c8 c5a5 g6f7 a5b4 f7b3 b4b3
You can call this speculation if you want. I don't know what the program is really doing since I didn't look at the code, but it seems like a cheap attempt to provide as many early cut-offs as possible in the search. This doesn't appear to be produced from a "standard" search, since in the line given, after a6a5 the winning move is Bc7, which wins Black's queen. On the other hand, I doubt that this line was intended to do anything other than help the engine perform its function more efficiently early in the search. The error in the line can be corrected by substituting a8c8 for a6a5, which leaves the "a" pawn hanging but secures the queen. I looked at several of these early searches and the evaluations in some cases didn't change for 15 or more plies, which isn't bad for the few hundred nodes searched. Some of them do change after more searching is done. So there isn't really much speculation going on that I can see, just an attempt to save time, or so it would appear.
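For what it's worth, one common way engines produce such a long PV from a 162-node, 1-ply search is to walk the transposition table after the fact, following the stored best move from each position; stale entries from earlier probing then extend the line far beyond the nominal depth. A minimal sketch of that idea, where every type and helper name is an assumption rather than Stockfish's actual code:
Code: Select all
/* Sketch of hash-table PV extraction. HashBestMove, MakeMove and
   PrintMove are assumed helpers, not any engine's real API. */
#include <stdio.h>

typedef struct Position Position;
typedef int Move;                   /* 0 means "no move stored"           */

Move HashBestMove(Position *pos);   /* probe the transposition table      */
void MakeMove(Position *pos, Move m);
void PrintMove(Move m);

void PrintPVFromHash(Position *pos, int max_moves) {
    for (int i = 0; i < max_moves; i++) {
        Move m = HashBestMove(pos); /* stale entries from earlier probes  */
        if (m == 0)                 /* can chain far past the nominal     */
            break;                  /* search depth                       */
        PrintMove(m);
        MakeMove(pos, m);
    }
    printf("\n");
}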
mvk wrote: Also, I would not put too much faith in the book learning capabilities of Crafty. It hasn't been worked on for ages. From my own testing, it isn't really working either. It appears as if it can't expand its book through learning, only effectively shrink it.
bob wrote: That is all it ever did... It works, but through elimination only, although it does do the inverse and recognize good openings as well and play them more frequently. But based only on what it has been given as raw PGN input to create the book.
I actually read the paper Bob wrote before I started using it the first time, and I recently reread it. So I am aware of the limitations imposed by Crafty's book structure (I looked at the code briefly) and of how the restrictive rules about how the data is used limit its ability to learn.
I read the thread you linked. It seems to me that if you had engaged your brain a little more before you engaged your computer on a year-long task, the CPU time could have been better spent. While Crafty's learning abilities might be very modest, they are adequate for the purpose for which they were designed and for some other tasks, e.g. pseudo-proving that a position is lost. Although after trying this, I'm less than satisfied with the results, but that's another story. The upside is I didn't waste a year doing it. I'm sure other uses could be found with a little ingenuity without changing the code. With minor changes I believe it could be much more effective and useful for other tasks as well.
I haven't had a chance to read your thesis, so I can't comment on it. I do note that expanding the book as described in one of the papers referenced in the other thread, and minimaxing the book, is a good idea, but it doesn't go nearly far enough as far as I'm concerned. A book is far more useful if the exact board position can be determined/reconstructed from the individual records, and if records can be traversed in both directions without the aid of a game score or other auxiliary information. A minimal sketch of the minimaxing idea follows.
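Here is what minimaxing a book amounts to, as a rough sketch assuming in-memory nodes and ignoring the complications of a real book (transpositions, cycles from repetitions, disk layout):
Code: Select all
/* Negamax over a book tree: a position's value is the best of the
   negated child values; leaf positions keep their stored evaluation. */
typedef struct BookNode BookNode;
struct BookNode {
    int        n_children;
    BookNode **children;
    int        leaf_score_cp;   /* engine eval at a leaf, in centipawns */
};

int BookMinimax(const BookNode *node) {
    if (node->n_children == 0)
        return node->leaf_score_cp;
    int best = -1000000;
    for (int i = 0; i < node->n_children; i++) {
        int s = -BookMinimax(node->children[i]);
        if (s > best)
            best = s;
    }
    return best;
}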
Many of the people designing these programs are apparently stuck in the era in which they first became chess programmers. Their mindsets haven't changed enough to take advantage of the increased capacity and speed of storage devices compared with what was available "X" years ago. They apparently haven't spent much time analyzing what could be done with various extensions to the book, or with a more general database of positional data.
With tablebases taking up 1+ TB of space across hundreds of files, why should our books be kept small, static, difficult to use, and relegated to but a single use? The largest books I have built so far contain 100M-120M unique positions. I envision books of tens of billions of positions with record sizes on the order of 64-128 bytes each, plus multiple indexes; at, say, 20 billion records of 128 bytes each, that is already about 2.5 TB before indexes. Ideally, a general database is probably the best and most enduring solution.
I would personally like a book that can be expanded as needed. At a minimum, multi-TB sizes should be supported, along with seamless additions, deletions, and data mining.
I want to be able to take someone else's book, provided it is compatible, and add it to mine without any issue other than space considerations. I also think the format should be published and free to use for not-for-profit purposes. Use of a commercial database package is therefore undesirable. The record format should be such that it can easily accommodate other data types: non-book positions, game records with forward and backward links between records, chunk data for similarity searches, etc. This could be accomplished by the use of record-type identifiers. Not all programs would need to support the use of all record types, but they would need to be aware of them so they don't inadvertently destroy data. This seems like a simple task.
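To make the idea concrete, here is one possible shape for such a self-describing record, in C; every name, field, and width below is an illustrative assumption, not a proposed standard:
Code: Select all
#include <stdint.h>

/* Record-type identifiers: programs skip types they don't understand
   instead of overwriting them. */
enum record_type {
    REC_BOOK_POSITION = 1,
    REC_GAME_NODE     = 2,   /* game record with forward/backward links */
    REC_CHUNK_DATA    = 3    /* chunk data for similarity searches      */
};

struct book_record {
    uint8_t  type;           /* one of enum record_type                  */
    uint8_t  flags;
    uint16_t length;         /* total record length in bytes             */
    uint64_t key;            /* position hash, for the indexes           */
    uint8_t  board[32];      /* packed board, 4 bits per square, so the
                                exact position can be reconstructed      */
    uint64_t prev, next;     /* file offsets: backward and forward links */
    uint32_t wins, draws, losses;   /* statistics / learning data        */
    int16_t  eval_cp;        /* minimaxed evaluation in centipawns       */
};  /* roughly 80 bytes, inside the 64-128 byte range mentioned above    */
A reader that hits an unknown type can use the length field to skip the record, which is what would keep a format like this extensible.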
In fact, I'm surprised that someone hasn't formed an informal standards committee to do just that. I think this would be a lasting and valuable addition to the computer chess community. It would probably be best if it were designed by the academic members, with input from the rest of the community, since the "for profit" members tend to want to shut everyone else out so they can maximize profits.
Now, back to the main topic.
zullil wrote:
[d]rq2k2r/3n1ppp/p2bpnb1/8/Np1N4/1B3PP1/PP2Q2P/R1BR2K1 w kq - 1 16
Here is the "best" line of play, according to Stockfish. Note that Be7 in response to Qxe6 is wrong.
Code: Select all
info depth 42 seldepth 61 score cp 407 nodes 8420792652 nps 15062305 time 559064 multipv 1 pv d4e6 f7e6 e2e6 e8f8 d1d6 b8e8 c1d2 g6f7 e6f5 f7b3 a2b3 f8g8 a1e1 e8g6 f5f4 h7h6 e1e7 a8f8 f4d4 g8h7 e7d7 g6b1 g1g2 f6d7 d6d7 b1g6 d2b4 f8f7 d7f7 g6f7 a4c5 f7g6 c5e4 h8b8 d4c4 b8d8 c4c7 d8e8 g2f2 e8g8 h2h4 h7h8 f2g2 g8a8 c7c4
bob wrote: I probably introduced the Be7 "noise". With a 3 ply search, that is what Crafty thought was best, and that's when I began to wonder why the score changed so much from before to after Be7...
I ran a similar search using Stockfish starting after Qxe6, out to ~340B nodes (48/104), and it returned a different PV:
[d] rq2k2r/3n2pp/p2bQnb1/8/Np6/1B3PP1/PP5P/R1BR2K1 b kq - 0 17
Code: Select all
info depth 48 seldepth 104 score cp -496 nodes 343420726246 nps 6712070 time 51164652 multipv 1 pv e8f8 d1d6 b8e8 c1f4 g6f7 e6f5 f7b3 a2b3 e8g6 f5h3 f8g8 d6d7 f6d7 h3d7 h7h6 d7d6 g8h7 d6g6 h7g6 a4b6 a8a7 b6c4 h8a8 f4e3 a7c7 f3f4 g6h7 g1f2 h7g8 a1a4 c7b7 f2f3 b7b5 g3g4 h6h5 h2h3 g8h8 a4a1 h8h7
info depth 48 seldepth 104 score cp -827 nodes 343420726246 nps 6712070 time 51164652 multipv 2 pv e8d8 d1d6 b8b5 a4b6 h8e8 d6d7 b5d7 e6d7 f6d7 b6a8 d7e5 c1g5 d8c8 b3d5 e5d3 a8b6 c8c7 b6c4 e8b8 g5e3 b8b5 d5e6 a6a5 a1d1 c7d8 e3d4 d8e7 e6c8 e7f8 c8a6 b5d5 d4b6 d5h5 c4a5 d3b2 d1d8 g6e8 a5c6 f8f7 c6b4 e8b5 b6d4 b5a6 b4a6 b2d3 d8d7 f7e6 d7g7 h5a5 a6c7 e6d6 g7h7 a5a2 c7b5 d6c6
info depth 48 seldepth 104 score cp -1034 nodes 343420726246 nps 6712070 time 51164652 multipv 3 pv d6e7 d1e1 b8d8 c1f4 e8f8 f4d6 d7e5 d6e5 d8d7 e6d7 f6d7 e5g7 f8g7 e1e7 g7h6 e7d7 h8d8 a1d1 d8d7 d1d7 g6e8 d7d6 h6g7 a4c5 a6a5 c5e6 g7h8 e6c7 a8b8 c7e8 b8e8 g1f2 e8a8 d6d7 h7h5 b3c2 a8c8 c2d3 c8c5 f2e3 h8g8 d7h7 a5a4 e3d4 c5c1 h7h5 g8f8 h5a5 c1h1 h2h4 h1h3 g3g4 h3h4 a5a4
It's nice to have an oracle to consult so you know what "should" happen, but it doesn't explain why Crafty fails to see that it's not losing material until ply 26. Since the two engines don't search in the same way, not much can be gained from SF's analysis.
In the best line given by SF you get the material back on move 18, so it's not much of a sacrifice. Crafty doesn't see this line. At the end of 26 plies it sees this:
[d]rq2k2r/3n1ppp/p2bpnb1/8/Np1N4/1B3PP1/PP2Q2P/R1BR2K1 w kq - 1 16
Code: Select all
26 1:04:08.360 23,737,576k 6,168k +2.75 16. Nxe6 fxe6 17. Qxe6+ Be7 18. Bf4 Qa7+ 19. Kg2 Qb7 20. Bd6 Nf8 21. Qe5 Be4 22. fxe4 Ng6 23. Qe6 Qxe4+ 24. Qxe4 Nxe4 25. Bd5 Nxd6 26. Bxa8 O-O 27. Bd5+ Kh8 28. Rf1 Rxf1 29. Rxf1 Ne5 30. Nb6 a5
The largest evaluation given during the search that led to the line above, prior to ply 26, was +0.56. This was with a single core using 8 GB of hash and 4 GB of pawn hash, with both caches empty at the start of the search.
If the move Nxe6 is made and the position searched again, Crafty sees this at 21 plies:
[d]rq2k2r/3n1ppp/p2bNnb1/8/Np6/1B3PP1/PP2Q2P/R1BR2K1 b kq - 0 16
Code: Select all
21 00:47.340 283,115k 5,980k +1.75 16. ... fxe6 17. Qxe6+ Be7 18. Bf4 Qa7+ 19. Kg2 Rf8 20. Bd6 Ng8 21. Bxe7 Nxe7 22. Re1 Rf7 23. Qd6 Nf6 24. Nb6 Rd8 25. Ba4+ Nd7 26. Nxd7 Rxd7 27. Bxd7+ Qxd7 28. Qb8+ Qd8 29. Qxb4
Notice that the evaluation given in this line (+1.75) is greater than any evaluation given in the previous search prior to ply 26. This was also with a single core, 8 GB of hash and 4 GB of pawn hash.
From this data I conclude that in the first search the evaluation at the end of ply 22 should have been +1.75 or better. Instead it remained at or below +0.56 until ply 26. One of the two searches has an error in it. One would think that the search starting before 16. Nxe6 would be more accurate, since its PV runs to ply 60 of the game (not the search) while the search starting after 16. Nxe6 runs to game ply 57. More importantly, once it reaches ply 23, the search started before 16. Nxe6 has searched deeper into the game tree, and yet it still doesn't see the correct move.
Something had to be missed for this to happen. Could it be a pruning problem or a futility issue?
Regards,
Forrest
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Is this a bug in Crafty 24.0 or...
Zenmastur wrote: Something had to be missed for this to happen. Could it be a pruning problem or a futility issue?
It could be either null-move pruning, LMR reductions, or futility pruning in the last 5-6 plies. Any of those tend to make it harder for a new best move to be found over the current best move. I've been working on this off and on, and am currently involved in a lot of testing to try to better choose which moves get reduced or not, etc.
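To illustrate the three mechanisms, here is a generic alpha-beta skeleton; the margins, depth thresholds, and helper names are illustrative assumptions, not Crafty's actual code:
Code: Select all
/* Generic alpha-beta showing null-move pruning, futility pruning near
   the leaves, and LMR. All thresholds and helpers are illustrative. */
typedef struct Position Position;
typedef struct { int count; int moves[256]; } MoveList;

int  Evaluate(Position *pos);
int  Quiesce(Position *pos, int alpha, int beta);
int  InCheck(Position *pos);
void MakeNullMove(Position *pos);
void UnmakeNullMove(Position *pos);
void GenerateMoves(Position *pos, MoveList *list);
void MakeMove(Position *pos, int move);
void UnmakeMove(Position *pos, int move);
int  IsQuiet(int move);

int Search(Position *pos, int alpha, int beta, int depth) {
    if (depth <= 0)
        return Quiesce(pos, alpha, beta);
    int eval = Evaluate(pos);

    /* Null move: if passing still reaches beta at reduced depth,
       assume a real move would too, and cut off. */
    if (depth >= 3 && !InCheck(pos) && eval >= beta) {
        MakeNullMove(pos);
        int v = -Search(pos, -beta, -beta + 1, depth - 3);
        UnmakeNullMove(pos);
        if (v >= beta)
            return beta;
    }

    MoveList list;
    GenerateMoves(pos, &list);
    for (int i = 0; i < list.count; i++) {
        /* Futility: in the last couple of plies, skip quiet moves whose
           static eval plus a margin cannot reach alpha. */
        if (depth <= 2 && IsQuiet(list.moves[i]) &&
            eval + 150 * depth <= alpha)
            continue;

        MakeMove(pos, list.moves[i]);
        /* LMR: late quiet moves get a reduced-depth search first, with a
           full-depth re-search only on a surprise score. This is why a
           new best move buried late in the move list surfaces slowly. */
        int r = (depth >= 3 && i >= 4 && IsQuiet(list.moves[i])) ? 1 : 0;
        int v = -Search(pos, -beta, -alpha, depth - 1 - r);
        if (r && v > alpha)
            v = -Search(pos, -beta, -alpha, depth - 1);
        UnmakeMove(pos, list.moves[i]);

        if (v >= beta)
            return beta;
        if (v > alpha)
            alpha = v;
    }
    return alpha;
}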
-
Zenmastur
- Posts: 919
- Joined: Sat May 31, 2014 8:28 am
Re: Is this a bug in Crafty 24.0 or...
Well, I wish you good luck finding the problem. I know it can be frustrating at times.
Here's a good example. I ordered a 4TB HD and a 256GB SSD. They arrived today. So I installed them. No real problems, other than changing the partition type (it came pre-partitioned for some reason) so I could use the entire 4TB as a single volume. After I got it working I cut and pasted my chess directory to the new HD. It wasn't that big (maybe 20 GB) other than the 4 or 5 different sets of tablebases I spent a few weeks downloading.
The new drive died during the copy.
LOL.
I've spent 3-4 weeks installing new programs, gathering about 10 million games, and getting everything configured so I could work on my chess and maybe do some programming. Out of all that data, I have 4.5 GB left on the old drive. Of the few programs that survived, most need to be re-installed, as they now refuse to work. So all told, I just lost between 3 and 4 weeks of work. Makes me want to cry!
Anyway, good luck with the debugging. I'm sure I'll be busy trying to figure out what all I lost and replacing it... probably won't be posting much until after I've read a good book on Zen!
Regards,
Forrest
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
-
zullil
- Posts: 6442
- Joined: Tue Jan 09, 2007 12:31 am
- Location: PA USA
- Full name: Louis Zulli
Re: Is this a bug in Crafty 24.0 or...
Zenmastur wrote: The point is that it doesn't take 26 plies to win the material back. So why does the engine fail to find the move until 26 plies are reached?
zullil wrote: You stated in another post that "I've been away from computer chess and chess in general for about 17 years." Many search enhancements have been introduced during that period. My understanding is that many extensions and reductions are now performed, so that at the completion of "depth" N, some root moves have been searched much more deeply than N plies, and perhaps some less than N. It's not just an alpha-beta search with iterative deepening anymore. And then there's the whole issue of multi-threading (which you wisely eliminated from consideration here by using a single core to test)...
Zenmastur wrote: Louis, I'm having trouble seeing what point you're trying to make with these comments. A little clarification would be helpful.
My point was that when the depth N iteration is completed, some branches of the tree have been searched much more deeply than depth N, and some branches less than depth N.
I think Bob elaborates here:
bob wrote: It could be either null-move pruning, LMR reductions, or futility pruning in the last 5-6 plies. Any of those tend to make it harder for a new best move to be found over the current best move.
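A toy version of the depth bookkeeping makes this concrete; the extension and reduction rules below are arbitrary stand-ins for the real conditions an engine uses:
Code: Select all
/* Nominal "depth N" only bounds the budget at the root. Each recursion
   step hands the child depth - 1, plus any extension, minus any
   reduction, so different branches of one iteration end at very
   different real depths (compare depth 48 vs. seldepth 104 in the
   Stockfish output earlier in the thread). */
int child_depth(int depth, int move_index, int gives_check, int is_quiet) {
    int extension = gives_check ? 1 : 0;                   /* e.g. check extension */
    int reduction = (is_quiet && move_index >= 4) ? 1 : 0; /* e.g. LMR             */
    return depth - 1 + extension - reduction;
}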
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Is this a bug in Crafty 24.0 or...
Zenmastur wrote: The new drive died during the copy.
If I could even count the number of times...