SF-NNUE going forward...

peter wrote: ↑Mon Jul 27, 2020 10:05 am
Tuning search is always fine; tuning position-learning code based on hash learning wouldn't make the code much more complicated, I think, even if I'm not good enough at programming to prove it. Andscacs had a nice hash storage before SugaR did. No interest in position learning based on selected hash entries, Daniel? Ever looked at Jeremy Bernstein's SF PA? I might still have the source somewhere, at least the version Zerbinati made from it for a revival a few years ago. Those other things are being done for Stockfish more or less, I think; I have not reviewed them.

cdani wrote: ↑Mon Jul 27, 2020 11:55 am
Sure, there will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.

Thanks for the answer! Looking forward to Andscacs NNUE!
peter
- Posts: 3186
- Joined: Sat Feb 16, 2008 7:38 am
- Full name: Peter Martan

Peter.
Milos
- Posts: 4190
- Joined: Wed Nov 25, 2009 1:47 am

Re: SF-NNUE going forward...
cdani wrote: ↑Mon Jul 27, 2020 11:55 am
Sure, there will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.

That makes absolutely no sense: you would be trying to approximate an approximation. Why jump through those hoops? Why not train the static eval directly on the score returned by a fixed-depth SF search (depth 10 or whatever)?
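The suggestion above, fitting a handcrafted static eval directly to scores returned by a fixed-depth search, can be sketched as a simple regression. Everything below (the linear eval, the toy features, the target scores) is illustrative and made up, not Andscacs or Stockfish code:

```python
# Illustrative sketch: fit the weights of a handcrafted static eval directly
# to target scores as a fixed-depth search might return them.

def static_eval(features, weights):
    # A linear handcrafted eval: dot product of position features and weights.
    return sum(f * w for f, w in zip(features, weights))

def tune(positions, targets, weights, lr=0.005, epochs=1000):
    # Plain stochastic gradient descent on the squared error between the
    # static eval and the search-returned target score for each position.
    for _ in range(epochs):
        for feats, target in zip(positions, targets):
            err = static_eval(feats, weights) - target
            for i, f in enumerate(feats):
                weights[i] -= lr * err * f
    return weights

# Toy data: two "positions" described by (material, mobility) features, with
# targets as a depth-10 search might report them (centipawns).
positions = [(3.0, 10.0), (-1.0, 4.0)]
targets = [120.0, -60.0]
weights = tune(positions, targets, [0.0, 0.0])
```

This is the same shape of procedure as Texel-style tuning, except the targets are search scores rather than game results.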
Raphexon
- Posts: 476
- Joined: Sun Mar 17, 2019 12:00 pm
- Full name: Henk Drost

Re: SF-NNUE going forward...
Milos wrote: ↑Mon Jul 27, 2020 12:31 pm
That makes absolutely no sense. You would be trying to approximate an approximation. [...]

I wouldn't call NNUE an approximation, although with the high lambda that's being used it may sort of be one. But using a superior eval will be more time-efficient. My only real worry is that (with the current nets) NNUE's eval isn't superior at every stage of the game.
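For readers unfamiliar with the "lambda" mentioned above: in the NNUE training pipelines of this period it is an interpolation factor between the search score and the final game result. A hypothetical sketch, in which the function name and the 600 cp outcome scaling are made up for illustration:

```python
# Hypothetical sketch of the "lambda" used when building NNUE training targets:
# blend the search score with the game outcome.

def training_target(search_score_cp, game_result, lam):
    # game_result is +1 (win), 0 (draw) or -1 (loss), mapped to a
    # centipawn-like scale so it can be blended with the search score.
    result_cp = game_result * 600.0
    return lam * search_score_cp + (1.0 - lam) * result_cp

# lam = 1.0 trains purely on search scores -- the sense in which the net
# approximates the searcher; lower lam mixes in actual game outcomes.
assert training_target(150.0, 1, 1.0) == 150.0
assert training_target(150.0, 1, 0.5) == 375.0
```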
Milos

Re: SF-NNUE going forward...
Raphexon wrote: ↑Mon Jul 27, 2020 1:05 pm
I wouldn't call NNUE an approximation, although with the high lambda that's being used it may sort of be one. [...]

I'm almost certain that NNUE eval coupled with search is inferior in the endgame. The reason: computing the NNUE eval takes roughly the same amount of time regardless of game phase, while the handcrafted eval runs much faster in the endgame, yielding a deeper search in the same amount of time.
Raphexon

Re: SF-NNUE going forward...
Milos wrote: ↑Mon Jul 27, 2020 1:13 pm
I'm almost certain that NNUE eval coupled with search is inferior in the endgame. [...]

This was true only for the earliest versions of SF-NNUE:
"Speeded up by omitting the difference calculation for the pieces removed from the board"
https://github.com/nodchip/Stockfish/co ... 5de129113f
On my current PC on 1 core, from the starting position: SF does roughly 1370 knps, NNUE does 684 knps. A 2.0:1 ratio.
1k6/2n3pp/3b4/8/8/8/PP4KN/3B4 w - - 0 1: SF does 1976 knps, NNUE does 900 knps. A 2.2:1 ratio.
1k6/2n5/3b4/8/8/8/6KN/3B4 w - - 0 1: SF does 2070 knps, NNUE does 1078 knps. A 1.9:1 ratio.
The difference in speed generally stays at roughly 2:1 to 2.3:1.
I sometimes see it drop below 2:1, but those are generally trivial endgames with very few pieces.
NNUE does take a little longer to get up to speed, though: very little effect when a move gets at least 10-15 seconds, but noticeable in blitz.
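For context on the commit quoted above: NNUE's first layer is an accumulator that is updated incrementally per move by adding and subtracting per-feature weight columns, which is why its cost barely depends on game phase. A toy sketch under assumed, made-up dimensions and feature indices:

```python
# Toy sketch of incremental NNUE accumulator updates: a quiet move costs a
# handful of vector ops no matter how many pieces remain on the board.

import random

HIDDEN = 8          # real nets use e.g. 256; tiny here for illustration
NUM_FEATURES = 64   # stand-in for the real HalfKP feature space

random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(HIDDEN)]
           for _ in range(NUM_FEATURES)]

def refresh(active_features):
    # Full recomputation: sum the weight rows of every active feature.
    acc = [0.0] * HIDDEN
    for f in active_features:
        for i in range(HIDDEN):
            acc[i] += weights[f][i]
    return acc

def update(acc, added, removed):
    # Incremental update: touch only the features the move changed. Skipping
    # needless work for removed pieces is the kind of saving the quoted commit
    # ("omitting the difference calculation for the pieces removed") targets.
    acc = acc[:]
    for f in added:
        for i in range(HIDDEN):
            acc[i] += weights[f][i]
    for f in removed:
        for i in range(HIDDEN):
            acc[i] -= weights[f][i]
    return acc

acc = refresh([3, 17, 42])
acc2 = update(acc, added=[20], removed=[17])  # one "move": feature 17 -> 20
```

The incremental result matches a full refresh of the new feature set, at a fraction of the cost.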
Milos

Re: SF-NNUE going forward...
Raphexon wrote: ↑Mon Jul 27, 2020 1:30 pm
This was true only for the earliest versions of SF-NNUE. [...]

I see. How about positions with few pieces but a lot of pawns, evaluated not from a clean hash but in-game? There the pawn hash and lazy eval would make much more of a difference in favor of the handcrafted eval.
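The pawn hash referred to above caches the pawn-structure score under a key derived from pawn placement alone; since most moves don't move pawns, probes nearly always hit. A minimal sketch with a dummy scoring function and an illustrative key (a real engine would use a Zobrist hash of the pawns only):

```python
# Minimal sketch of a pawn hash: cache the expensive pawn-structure score
# under a pawn-placement-only key, since pawn structure changes rarely.

def pawn_structure_eval(pawn_squares):
    # Stand-in for a real pawn-structure eval (doubled/passed/isolated pawns).
    return sum(pawn_squares) % 100  # dummy score

pawn_hash = {}

def eval_pawns(pawn_key, pawn_squares):
    # Probe the cache first; only recompute on a miss.
    if pawn_key not in pawn_hash:
        pawn_hash[pawn_key] = pawn_structure_eval(pawn_squares)
    return pawn_hash[pawn_key]

score1 = eval_pawns(0xBEEF, [8, 9, 10, 48, 49])
score2 = eval_pawns(0xBEEF, [8, 9, 10, 48, 49])  # cache hit, no recomputation
assert score1 == score2
```

In pawn-heavy endgames with few pieces this makes the handcrafted eval especially cheap, which is the asymmetry being discussed.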
Raphexon

Re: SF-NNUE going forward...
Milos wrote: ↑Mon Jul 27, 2020 1:39 pm
I see. How about positions with few pieces but a lot of pawns, evaluated not from a clean hash but in-game? [...]

From having eyed a few games: not having to evaluate from a clean hash is as much of an advantage for NNUE as it is for SF. The 2:1 ratio on my PC is pretty much constant throughout the game, so speed is the least of my worries.
Milos

Re: SF-NNUE going forward...
Raphexon wrote: ↑Mon Jul 27, 2020 2:12 pm
From having eyed a few games: not having to evaluate from a clean hash is as much of an advantage for NNUE as it is for SF. [...]

That's actually quite good. I still wouldn't train the net on fixed-depth searches, though: it makes the eval of endgame positions relatively much weaker. One could gain more by training with a fixed time per move, or just by using some TC with increment.
-
- Posts: 11588
- Joined: Thu Mar 09, 2006 12:57 am
- Location: Birmingham UK
Re: SF-NNUE going forward...
Probably not, unfortunately: it takes a large number of positions to train NNs to play good chess, and I suspect that there aren't enough positions in the record of these players' games to do the training.
Right now, your best option is probably to select a program that has a style of play that's fun for you.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
cdani
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra

Re: SF-NNUE going forward...
Milos wrote: ↑Mon Jul 27, 2020 12:31 pm
That makes absolutely no sense. You would be trying to approximate an approximation. [...]

I can try. But isn't the NNUE evaluation supposed to be smoother, due to generalization? I supposed this was an advantage.
Daniel José - http://www.andscacs.com