Search found 258 matches
- Mon Jan 18, 2021 5:35 pm
- Forum: Computer Chess Club: General Topics
- Topic: It's NNUE era (sharing my thoughts)
- Replies: 35
- Views: 4472
Re: It's NNUE era (sharing my thoughts)
You have to labour for the NNUE eval too. Unless you are a coward and reuse the code and weights that Stockfish already gave you. NNUE is not a magic bullet -- Stockfish NNUEs are a magic bullet. Everyone who has tried to replicate NNUE on their own will know this. @Dann Corbit: This is the most im...
- Mon Jan 18, 2021 1:42 pm
- Forum: Computer Chess Club: General Topics
- Topic: It's NNUE era (sharing my thoughts)
- Replies: 35
- Views: 4472
Re: It's NNUE era (sharing my thoughts)
To avoid misunderstandings:
I condemn one-to-one copying with zero added value.
There is no need for five identical engines.
- Mon Jan 18, 2021 1:25 pm
- Forum: Computer Chess Club: General Topics
- Topic: It's NNUE era (sharing my thoughts)
- Replies: 35
- Views: 4472
Re: It's NNUE era (sharing my thoughts)
I agree with Andrew. Evaluation is the soul of the engine. LMR, Null move, etc is nothing without eval.
- Sat Jan 09, 2021 1:22 pm
- Forum: Computer Chess Club: Programming and Technical Discussions
- Topic: What K factor should be used if two players are in different K factor brackets?
- Replies: 3
- Views: 1073
Re: What K factor should be used if two players are in different K factor brackets?
Thanks guys!
I was unsure, but thank you for the confirmation!
-Tamás
- Sat Jan 09, 2021 10:49 am
- Forum: Computer Chess Club: Programming and Technical Discussions
- Topic: What K factor should be used if two players are in different K factor brackets?
- Replies: 3
- Views: 1073
What K factor should be used if two players are in different K factor brackets?
https://chess.stackexchange.com/questio ... r-brackets
Is the accepted answer really correct?
So is there a case where the points are not transferred one-to-one?
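A worked sketch of the situation the question asks about: under the standard Elo update, each player applies their own K factor, so when the two K factors differ the points gained by one side do not equal the points lost by the other. The function names and the example ratings below are illustrative, not from the thread.

```python
def expected_score(r_a, r_b):
    """Expected score of player A against player B (standard Elo logistic)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k_a, k_b):
    """Update both ratings; each side applies its own K factor."""
    e_a = expected_score(r_a, r_b)
    e_b = 1.0 - e_a
    new_a = r_a + k_a * (score_a - e_a)
    new_b = r_b + k_b * ((1.0 - score_a) - e_b)
    return new_a, new_b

# A 2400-rated player (K=10) beats a 1600-rated newcomer (K=40):
a, b = elo_update(2400, 1600, 1.0, k_a=10, k_b=40)
# The winner gains about 0.1 points while the loser drops about 0.4,
# so the transfer is not one-to-one when the K factors differ.
```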
- Wed Jan 06, 2021 7:19 am
- Forum: Computer Chess Club: Programming and Technical Discussions
- Topic: How to calc the derivative for gradient descent?
- Replies: 14
- Views: 2172
Re: How to calc the derivative for gradient descent?
How does one prove the eval is linear? I am not saying that chess is linear. If all evaluation terms are linear, then the whole is linear. It depends on your evaluation. Can anybody translate the links into NN chess for dummies? King placement is very important and this seems to be a starting point fo...
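The point that a sum of linear terms is itself linear can be made concrete: such an evaluation is just a dot product of a feature vector with a weight vector. The feature set and centipawn weights below are illustrative assumptions, not any particular engine's values.

```python
# Hypothetical feature counts for one position (White minus Black):
# [pawn diff, knight diff, bishop diff, rook diff, queen diff, mobility diff]
features = [1, 0, -1, 0, 0, 5]
weights = [100, 320, 330, 500, 900, 2]  # assumed centipawn weights

def evaluate(features, weights):
    """Linear evaluation: a weighted sum of feature counts."""
    return sum(f * w for f, w in zip(features, weights))

score = evaluate(features, weights)  # 1*100 - 1*330 + 5*2 = -220
```

Because the whole is a single dot product, its derivative with respect to each weight is just the corresponding feature count, which is what makes gradient-based tuning of such an eval straightforward.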
- Tue Jan 05, 2021 8:32 pm
- Forum: Computer Chess Club: Programming and Technical Discussions
- Topic: How to calc the derivative for gradient descent?
- Replies: 14
- Views: 2172
Re: How to calc the derivative for gradient descent?
Hello, this is my first post. I'd like to know which way you suggest for calculating the derivative of the evaluation for each parameter for gradient descent with Texel tuning. I've read about (Eval(xi+1)-Eval(xi-1))/2, Eval(xi+1)-Eval(xi), auto differentiation libraries, Jacobian matrix and so for...
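The two finite-difference schemes named in the question can be sketched side by side on a generic error function. The toy quadratic error and step size are illustrative; in Texel tuning the error would be the mean squared difference between a sigmoid of the eval and the game results over a position set.

```python
def forward_diff(error, params, i, h=1.0):
    """Forward difference: (E(x_i + h) - E(x_i)) / h."""
    bumped = params.copy()
    bumped[i] += h
    return (error(bumped) - error(params)) / h

def central_diff(error, params, i, h=1.0):
    """Central difference: (E(x_i + h) - E(x_i - h)) / (2h)."""
    up, down = params.copy(), params.copy()
    up[i] += h
    down[i] -= h
    return (error(up) - error(down)) / (2 * h)

# Toy quadratic error to compare the two estimates at x = 0,
# where the true derivative of (x - 3)^2 is -6:
error = lambda p: (p[0] - 3) ** 2
central = central_diff(error, [0.0], 0)  # -6.0, exact for a quadratic
forward = forward_diff(error, [0.0], 0)  # -5.0, biased by the step h
```

The central difference costs two evaluations per parameter but cancels the first-order bias, which is why it is often preferred when the step has to be a whole point, as with integer eval terms.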
- Sun Jan 03, 2021 4:45 pm
- Forum: Computer Chess Club: General Topics
- Topic: Wasp 4.5 Released
- Replies: 20
- Views: 2943
Re: Wasp 4.5 Released
Tuning is now done in a similar fashion to back-propagation for neural networks rather than the gradient-descent method... ...each pertinent term is tweaked by a small amount in the direction that reduces the error. This is gradient descent. The neural network also uses gradient descent. Backprop prop...
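The loop being described ("each pertinent term is tweaked by a small amount in the direction that reduces the error") is one step of gradient descent with finite-difference gradients. The following is a minimal sketch under assumed names; the toy error function stands in for a real tuning objective.

```python
def tune_step(error, terms, step=1.0, lr=0.5):
    """Nudge every term opposite its (estimated) error gradient."""
    new_terms = terms.copy()
    for i in range(len(terms)):
        bumped = terms.copy()
        bumped[i] += step
        grad = (error(bumped) - error(terms)) / step  # forward difference
        new_terms[i] -= lr * grad  # move in the direction that reduces error
    return new_terms

# Toy error: squared distance of the terms from a target vector.
target = [100.0, 300.0]
error = lambda t: sum((x - y) ** 2 for x, y in zip(t, target))

terms = [90.0, 310.0]
for _ in range(20):
    terms = tune_step(error, terms)
# terms converge toward [100, 300] (up to the finite-difference bias)
```

Back-propagation computes the same gradients analytically by the chain rule instead of by perturbation, which is the distinction the posts above are debating.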
- Sun Jan 03, 2021 5:54 am
- Forum: Computer Chess Club: General Topics
- Topic: Wasp 4.5 Released
- Replies: 20
- Views: 2943
Re: Wasp 4.5 Released
Tuning is now done in a similar fashion to back-propagation for neural networks rather than the gradient-descent method... ...each pertinent term is tweaked by a small amount in the direction that reduces the error. This is gradient descent. The neural network also uses gradient descent. Backprop prop...
- Fri Jan 01, 2021 10:11 am
- Forum: Computer Chess Club: Programming and Technical Discussions
- Topic: NN faster and energy efficient training.
- Replies: 4
- Views: 1129
Re: NN faster and energy efficient training.
Hi! Hi, Halogen author here. 768x32x1 was the shape of the first network that was able to completely replace my old HCE. How strong was this network? How many examples did you use for learning? Gaining +60 Elo already with a hybrid approach is very impressive for so few training positions. Thanks a...
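A hedged sketch of what a 768x32x1 shape means here: 768 inputs (12 piece types x 64 squares, one-hot), one hidden layer of 32 units, and a single output score. The random weights and ReLU activation below are placeholder assumptions; the thread does not specify Halogen's actual weights or activation.

```python
import random

random.seed(0)
W1 = [[random.uniform(-0.1, 0.1) for _ in range(768)] for _ in range(32)]
b1 = [0.0] * 32
W2 = [random.uniform(-0.1, 0.1) for _ in range(32)]
b2 = 0.0

def relu(x):
    return x if x > 0.0 else 0.0

def forward(inputs):
    """768 -> 32 -> 1 forward pass with a ReLU hidden layer."""
    hidden = [relu(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# One-hot board encoding: switch on a single piece-on-square feature.
x = [0.0] * 768
x[0] = 1.0
score = forward(x)
```

Even at this tiny size the parameter count (768*32 + 32 + 32 + 1) dwarfs a typical hand-crafted eval's term count, which is consistent with such a net being able to replace an HCE outright.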