SF-NNUE going forward...


peter
Posts: 3186
Joined: Sat Feb 16, 2008 7:38 am
Full name: Peter Martan

Re: SF-NNUE going forward...

Post by peter »

cdani wrote: Mon Jul 27, 2020 11:55 am
peter wrote: Mon Jul 27, 2020 10:05 am Tuning search is always fine; tuning position-learning code based on hash learning wouldn't make the code much more complicated, I think, even if I'm not good enough at programming to prove it.
Andscacs had nice hash storage before SugaR did. No interest in position learning based on selected hash entries, Daniel?

Ever looked at Jeremy Bernstein's SF PA?

I might still have the source somewhere, at least the version Zerbinati made out of it for a revival a few years ago.
Sure, those will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.
Those other things are being done for Stockfish, more or less, I think. I have not reviewed them.
Thanks for the answer! Looking forward to Andscacs NNUE!
Peter.
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: SF-NNUE going forward...

Post by Milos »

cdani wrote: Mon Jul 27, 2020 11:55 am
peter wrote: Mon Jul 27, 2020 10:05 am Tuning search is always fine; tuning position-learning code based on hash learning wouldn't make the code much more complicated, I think, even if I'm not good enough at programming to prove it.
Andscacs had nice hash storage before SugaR did. No interest in position learning based on selected hash entries, Daniel?

Ever looked at Jeremy Bernstein's SF PA?

I might still have the source somewhere, at least the version Zerbinati made out of it for a revival a few years ago.
Sure, those will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.
Those other things are being done for Stockfish, more or less, I think. I have not reviewed them.
That makes absolutely no sense. You would be trying to approximate an approximation. Why jump through those hoops? Why not train the static eval directly on the score returned by an SF fixed-depth search, depth 10 or whatever?
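
Either target reduces to the same fitting loop, by the way. A minimal sketch of what such tuning could look like, assuming a purely linear handcrafted eval over a feature vector; Sample, the data and the learning rate here are made-up placeholders, not anyone's actual tuner:

#include <cstddef>
#include <vector>

// One training position: eval feature values plus the target score
// (NNUE eval or a fixed-depth search score; the loop is identical).
struct Sample {
    std::vector<double> feat;
    double target;   // in pawns
};

// Linear handcrafted eval: dot product of weights and features.
double evalStatic(const std::vector<double>& w, const std::vector<double>& f) {
    double s = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i)
        s += w[i] * f[i];
    return s;
}

// One epoch of stochastic gradient descent on squared error.
void tuneEpoch(std::vector<double>& w, const std::vector<Sample>& data,
               double lr = 1e-5) {
    for (const Sample& s : data) {
        double err = evalStatic(w, s.feat) - s.target;
        for (std::size_t i = 0; i < w.size(); ++i)
            w[i] -= lr * 2.0 * err * s.feat[i];   // d(err^2)/dw_i
    }
}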
Raphexon
Posts: 476
Joined: Sun Mar 17, 2019 12:00 pm
Full name: Henk Drost

Re: SF-NNUE going forward...

Post by Raphexon »

Milos wrote: Mon Jul 27, 2020 12:31 pm
cdani wrote: Mon Jul 27, 2020 11:55 am
peter wrote: Mon Jul 27, 2020 10:05 am Tuning search is always fine; tuning position-learning code based on hash learning wouldn't make the code much more complicated, I think, even if I'm not good enough at programming to prove it.
Andscacs had nice hash storage before SugaR did. No interest in position learning based on selected hash entries, Daniel?

Ever looked at Jeremy Bernstein's SF PA?

I might still have the source somewhere, at least the version Zerbinati made out of it for a revival a few years ago.
Sure, those will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.
Those other things are being done for Stockfish, more or less, I think. I have not reviewed them.
That makes absolutely no sense. You would be trying to approximate an approximation. Why jump through those hoops? Why not train the static eval directly on the score returned by an SF fixed-depth search, depth 10 or whatever?
I wouldn't call NNUE an approximation, although with the high lambda that's being used it may sort of be one.

But using a superior eval will be more time-efficient.
My only real worry is that (with the current nets) NNUE's eval isn't superior at every stage of the game.
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: SF-NNUE going forward...

Post by Milos »

Raphexon wrote: Mon Jul 27, 2020 1:05 pm
Milos wrote: Mon Jul 27, 2020 12:31 pm
cdani wrote: Mon Jul 27, 2020 11:55 am
peter wrote: Mon Jul 27, 2020 10:05 am Tuning search is always fine; tuning position-learning code based on hash learning wouldn't make the code much more complicated, I think, even if I'm not good enough at programming to prove it.
Andscacs had nice hash storage before SugaR did. No interest in position learning based on selected hash entries, Daniel?

Ever looked at Jeremy Bernstein's SF PA?

I might still have the source somewhere, at least the version Zerbinati made out of it for a revival a few years ago.
Sure, those will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.
Those other things are being done for Stockfish, more or less, I think. I have not reviewed them.
That makes absolutely no sense. You would be trying to approximate an approximation. Why jump through those hoops? Why not train the static eval directly on the score returned by an SF fixed-depth search, depth 10 or whatever?
I wouldn't call NNUE an approximation, although with the high lambda that's being used it may sort of be one.

But using a superior eval will be more time-efficient.
My only real worry is that (with the current nets) NNUE's eval isn't superior at every stage of the game.
I'm almost certain that NNUE eval coupled with search is inferior in the endgame. The reason is that computing the NNUE eval takes approximately the same amount of time irrespective of game phase, while a handcrafted eval executes much faster in the endgame, yielding a higher search depth in the same amount of time compared to a search using the NNUE eval.
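
The intuition in toy form (this is only an illustration, not Stockfish's or NNUE's actual code): a handcrafted eval loops over the pieces actually on the board, so its work shrinks as material comes off, while a dense network layer does a fixed number of multiply-adds however empty the board is.

#include <array>
#include <cstdint>

// Toy handcrafted eval: cost is proportional to occupied squares,
// so it gets cheaper in the endgame.
int evalHandcrafted(const std::array<int, 64>& board,      // -6..6 piece codes
                    const std::array<int, 13>& pieceValue) // indexed by code+6
{
    int score = 0;
    for (int sq = 0; sq < 64; ++sq)
        if (board[sq] != 0)        // skip empty squares
            score += pieceValue[board[sq] + 6];
    return score;
}

// Toy dense layer: cost is fixed by the architecture (IN x OUT
// multiply-adds), the same in the opening and in the endgame.
template <int IN, int OUT>
void forward(const std::array<int8_t, IN>& x,
             const std::array<std::array<int8_t, IN>, OUT>& w,
             std::array<int32_t, OUT>& y)
{
    for (int o = 0; o < OUT; ++o) {
        int32_t acc = 0;
        for (int i = 0; i < IN; ++i)
            acc += int32_t(w[o][i]) * x[i];
        y[o] = acc;
    }
}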
Raphexon
Posts: 476
Joined: Sun Mar 17, 2019 12:00 pm
Full name: Henk Drost

Re: SF-NNUE going forward...

Post by Raphexon »

Milos wrote: Mon Jul 27, 2020 1:13 pm
Raphexon wrote: Mon Jul 27, 2020 1:05 pm
Milos wrote: Mon Jul 27, 2020 12:31 pm
cdani wrote: Mon Jul 27, 2020 11:55 am
peter wrote: Mon Jul 27, 2020 10:05 am Tuning search is always fine; tuning position-learning code based on hash learning wouldn't make the code much more complicated, I think, even if I'm not good enough at programming to prove it.
Andscacs had nice hash storage before SugaR did. No interest in position learning based on selected hash entries, Daniel?

Ever looked at Jeremy Bernstein's SF PA?

I might still have the source somewhere, at least the version Zerbinati made out of it for a revival a few years ago.
Sure, those will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.
Those other things are being done for Stockfish, more or less, I think. I have not reviewed them.
That makes absolutely no sense. You would be trying to approximate an approximation. Why jump through those hoops? Why not train the static eval directly on the score returned by an SF fixed-depth search, depth 10 or whatever?
I wouldn't call NNUE an approximation, although with the high lambda that's being used it may sort of be one.

But using a superior eval will be more time-efficient.
My only real worry is that (with the current nets) NNUE's eval isn't superior at every stage of the game.
I'm almost certain that NNUE eval coupled with search is inferior in the endgame. The reason is that computing the NNUE eval takes approximately the same amount of time irrespective of game phase, while a handcrafted eval executes much faster in the endgame, yielding a higher search depth in the same amount of time compared to a search using the NNUE eval.
This was true only for the earliest versions of SF-NNUE.

"Speeded up by omitting the difference calculation for the pieces removed from the board"
https://github.com/nodchip/Stockfish/co ... 5de129113f
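
The general idea behind that kind of speedup is the incremental accumulator update: instead of recomputing the first-layer sums from scratch after every move, only the weight columns of the features that changed are added or subtracted, so a capture just drops one feature. A toy version (real NNUE uses HalfKP features and SIMD; the exact optimization is in the linked commit):

#include <array>
#include <cstdint>
#include <vector>

constexpr int HIDDEN = 256;   // toy accumulator width

// First-layer weight column for each input feature (piece on square).
std::vector<std::array<int16_t, HIDDEN>> featureWeights;

// Update the accumulator for one move: subtract columns of features
// that disappeared, add columns of features that appeared.
void updateAccumulator(std::array<int16_t, HIDDEN>& acc,
                       const std::vector<int>& removed,
                       const std::vector<int>& added)
{
    for (int f : removed)
        for (int i = 0; i < HIDDEN; ++i)
            acc[i] -= featureWeights[f][i];
    for (int f : added)
        for (int i = 0; i < HIDDEN; ++i)
            acc[i] += featureWeights[f][i];
}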


On my current PC on 1 core: from the start position, SF does roughly 1370 knps and NNUE does 684 knps. A 2:1 ratio.
1k6/2n3pp/3b4/8/8/8/PP4KN/3B4 w - - 0 1: SF does 1976 knps, NNUE does 900 knps. A 2.2:1 ratio.
1k6/2n5/3b4/8/8/8/6KN/3B4 w - - 0 1: SF does 2070 knps, NNUE does 1078 knps. A 1.9:1 ratio.

The speed difference generally stays at roughly a 2:1 to 2.3:1 ratio.
I sometimes see it drop below 2:1, but those are generally trivial endgames with very few pieces.

NNUE does take a little longer to get up to speed, though. That has very little effect if the time taken per move is at least 10-15 seconds, but it is noticeable if the engine has to blitz.
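
To reproduce these numbers, set a single thread, feed each FEN to both binaries over UCI, start a fixed-depth search, and read the nps field from the info lines:

setoption name Threads value 1
position fen 1k6/2n3pp/3b4/8/8/8/PP4KN/3B4 w - - 0 1
go depth 25

Each engine prints info lines containing its current nps; the ratio follows from comparing the two on the same position.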
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: SF-NNUE going forward...

Post by Milos »

Raphexon wrote: Mon Jul 27, 2020 1:30 pm This was true only for the earliest versions of SF-NNUE.

"Speeded up by omitting the difference calculation for the pieces removed from the board"
https://github.com/nodchip/Stockfish/co ... 5de129113f


On my current PC on 1 core: from the start position, SF does roughly 1370 knps and NNUE does 684 knps. A 2:1 ratio.
1k6/2n3pp/3b4/8/8/8/PP4KN/3B4 w - - 0 1: SF does 1976 knps, NNUE does 900 knps. A 2.2:1 ratio.
1k6/2n5/3b4/8/8/8/6KN/3B4 w - - 0 1: SF does 2070 knps, NNUE does 1078 knps. A 1.9:1 ratio.

The speed difference generally stays at roughly a 2:1 to 2.3:1 ratio.
I sometimes see it drop below 2:1, but those are generally trivial endgames with very few pieces.

NNUE does take a little longer to get up to speed, though. That has very little effect if the time taken per move is at least 10-15 seconds, but it is noticeable if the engine has to blitz.
I see. How about positions with few pieces but a lot of pawns, evaluated not from a clean hash but in-game? There, the pawn hash and lazy eval would make a much bigger difference in favor of the handcrafted eval.
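
In sketch form, what I mean by those two shortcuts (generic code, not the actual Stockfish or Andscacs implementation; Position and the helper eval terms are hypothetical): lazy eval bails out early when a cheap material/PST score is already far outside the search window, and the pawn hash caches the expensive pawn-structure term under a key computed from pawn placement alone.

#include <cstdint>
#include <unordered_map>

struct Position {                       // hypothetical minimal interface
    std::uint64_t pawnKey() const;      // Zobrist key over pawns only
};

// Hypothetical engine-specific eval terms.
int materialAndPst(const Position& pos);            // cheap
int pawnStructure(const Position& pos);             // expensive
int piecesMobilityKingSafety(const Position& pos);  // expensive

struct PawnEntry { int score; };
std::unordered_map<std::uint64_t, PawnEntry> pawnHash;

int evaluate(const Position& pos, int alpha, int beta) {
    int score = materialAndPst(pos);

    // Lazy eval: if a generous margin can't bring the score back into
    // the (alpha, beta) window, skip the expensive terms entirely.
    const int LazyMargin = 400;
    if (score - LazyMargin >= beta || score + LazyMargin <= alpha)
        return score;

    // Pawn hash: pawn structure changes rarely between nodes, so cache
    // it keyed by the pawn configuration only.
    auto it = pawnHash.find(pos.pawnKey());
    if (it == pawnHash.end())
        it = pawnHash.emplace(pos.pawnKey(), PawnEntry{pawnStructure(pos)}).first;
    score += it->second.score;

    return score + piecesMobilityKingSafety(pos);
}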
Raphexon
Posts: 476
Joined: Sun Mar 17, 2019 12:00 pm
Full name: Henk Drost

Re: SF-NNUE going forward...

Post by Raphexon »

Milos wrote: Mon Jul 27, 2020 1:39 pm
Raphexon wrote: Mon Jul 27, 2020 1:30 pm This was true only for the earliest versions of SF-NNUE.

"Speeded up by omitting the difference calculation for the pieces removed from the board"
https://github.com/nodchip/Stockfish/co ... 5de129113f


On my current PC on 1 core: from the start position, SF does roughly 1370 knps and NNUE does 684 knps. A 2:1 ratio.
1k6/2n3pp/3b4/8/8/8/PP4KN/3B4 w - - 0 1: SF does 1976 knps, NNUE does 900 knps. A 2.2:1 ratio.
1k6/2n5/3b4/8/8/8/6KN/3B4 w - - 0 1: SF does 2070 knps, NNUE does 1078 knps. A 1.9:1 ratio.

The speed difference generally stays at roughly a 2:1 to 2.3:1 ratio.
I sometimes see it drop below 2:1, but those are generally trivial endgames with very few pieces.

NNUE does take a little longer to get up to speed, though. That has very little effect if the time taken per move is at least 10-15 seconds, but it is noticeable if the engine has to blitz.
I see. How about positions with few pieces but a lot of pawns, evaluated not from a clean hash but in-game? There, the pawn hash and lazy eval would make a much bigger difference in favor of the handcrafted eval.
From having eyed a few games, not having to evaluate from a clean hash is as much of an advantage for NNUE as it is for SF.
The 2:1 ratio on my PC stays pretty much constant throughout the game.

So speed is the least of my worries.
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: SF-NNUE going forward...

Post by Milos »

Raphexon wrote: Mon Jul 27, 2020 2:12 pm
Milos wrote: Mon Jul 27, 2020 1:39 pm
Raphexon wrote: Mon Jul 27, 2020 1:30 pm This was true only for the earliest versions of SF-NNUE.

"Speeded up by omitting the difference calculation for the pieces removed from the board"
https://github.com/nodchip/Stockfish/co ... 5de129113f


On my current PC on 1 core: from the start position, SF does roughly 1370 knps and NNUE does 684 knps. A 2:1 ratio.
1k6/2n3pp/3b4/8/8/8/PP4KN/3B4 w - - 0 1: SF does 1976 knps, NNUE does 900 knps. A 2.2:1 ratio.
1k6/2n5/3b4/8/8/8/6KN/3B4 w - - 0 1: SF does 2070 knps, NNUE does 1078 knps. A 1.9:1 ratio.

The speed difference generally stays at roughly a 2:1 to 2.3:1 ratio.
I sometimes see it drop below 2:1, but those are generally trivial endgames with very few pieces.

NNUE does take a little longer to get up to speed, though. That has very little effect if the time taken per move is at least 10-15 seconds, but it is noticeable if the engine has to blitz.
I see. How about positions with few pieces but a lot of pawns, evaluated not from a clean hash but in-game? There, the pawn hash and lazy eval would make a much bigger difference in favor of the handcrafted eval.
From having eyed a few games, not having to evaluate from a clean hash is as much of an advantage for NNUE as it is for SF.
The 2:1 ratio on my PC stays pretty much constant throughout the game.

So speed is the least of my worries.
That's actually quite good. I still wouldn't train the net at a fixed depth, though; it makes the eval of endgame positions relatively much weaker. One could gain more by training with a fixed time per move, or by just using some TC with an increment.
towforce
Posts: 11575
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: SF-NNUE going forward...

Post by towforce »

Rowen wrote: Mon Jul 27, 2020 9:09 am Hi
Perhaps my presumptions are incorrect, but could specialised nets be created that train an engine to play like a human or humans with a particular strength, personality or characteristic, or to play like Tal, etc.?
Thanks

Probably not, unfortunately: it takes a large number of positions to train NNs to play good chess, and I suspect that there aren't enough positions in the recorded games of these players to do the training.

Right now, your best option is probably to select a program that has a style of play that's fun for you.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: SF-NNUE going forward...

Post by cdani »

Milos wrote: Mon Jul 27, 2020 12:31 pm
cdani wrote: Mon Jul 27, 2020 11:55 am
peter wrote: Mon Jul 27, 2020 10:05 am Tuning search is always fine; tuning position-learning code based on hash learning wouldn't make the code much more complicated, I think, even if I'm not good enough at programming to prove it.
Andscacs had nice hash storage before SugaR did. No interest in position learning based on selected hash entries, Daniel?

Ever looked at Jeremy Bernstein's SF PA?

I might still have the source somewhere, at least the version Zerbinati made out of it for a revival a few years ago.
Sure, those will be interesting things to do. But for the moment, if I take up Andscacs again, I think it will be to tune its static eval against the NNUE eval.
Those other things are being done for Stockfish, more or less, I think. I have not reviewed them.
That makes absolutely no sense. You would be trying to approximate an approximation. Why jump through those hoops? Why not train the static eval directly on the score returned by an SF fixed-depth search, depth 10 or whatever?
I can try. But isn't the NNUE evaluation supposed to be smoother? Due to generalization, I mean. I assumed this was an advantage.