Search found 258 matches

by tomitank
Mon Jan 18, 2021 5:35 pm
Forum: Computer Chess Club: General Topics
Topic: It's NNUE era (sharing my thoughts)
Replies: 35
Views: 4472

Re: It's NNUE era (sharing my thoughts)

You have to labour for the NNUE eval too, unless you are a coward and reuse the code and weights that Stockfish already gave you. NNUE is not a magic bullet -- Stockfish NNUEs are a magic bullet. Everyone who has tried to replicate NNUE on their own knows this. @Dann Corbit: This is the most im...
by tomitank
Mon Jan 18, 2021 1:42 pm
Forum: Computer Chess Club: General Topics
Topic: It's NNUE era (sharing my thoughts)
Replies: 35
Views: 4472

Re: It's NNUE era (sharing my thoughts)

To avoid misunderstandings:
I condemn one-to-one copying with zero added value.
There is no need for five identical engines.
by tomitank
Mon Jan 18, 2021 1:25 pm
Forum: Computer Chess Club: General Topics
Topic: It's NNUE era (sharing my thoughts)
Replies: 35
Views: 4472

Re: It's NNUE era (sharing my thoughts)

I agree with Andrew. Evaluation is the soul of the engine. LMR, null move, etc. are nothing without eval.
by tomitank
Wed Jan 06, 2021 7:19 am
Forum: Computer Chess Club: Programming and Technical Discussions
Topic: How to calc the derivative for gradient descent?
Replies: 14
Views: 2172

Re: How to calc the derivative for gradient descent?

How does one prove the eval is linear? I am not saying that chess is linear. If all evaluation terms are linear, then the whole is linear. It depends on your evaluation. Can anybody translate the links to NN chess for dummies? King placement is very important and this seems to be a starting point fo...
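
The linearity point can be made concrete: if every term contributes weight times feature, the whole eval is a dot product, and its derivative with respect to each weight is just that weight's feature value. A minimal Python sketch, with all names illustrative rather than taken from any engine:

```python
# If each evaluation term is linear in its weight, the total eval is a
# dot product, so the gradient with respect to the weights is simply the
# feature vector itself.

def linear_eval(weights, features):
    """Eval = sum_i w_i * f_i."""
    return sum(w * f for w, f in zip(weights, features))

def linear_eval_gradient(weights, features):
    """d(Eval)/d(w_i) = f_i, independent of the current weight values."""
    return list(features)
```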
by tomitank
Tue Jan 05, 2021 8:32 pm
Forum: Computer Chess Club: Programming and Technical Discussions
Topic: How to calc the derivative for gradient descent?
Replies: 14
Views: 2172

Re: How to calc the derivative for gradient descent?

Hello, this is my first post. I'd like to know which method you suggest for calculating the derivative of the evaluation with respect to each parameter for gradient descent with Texel tuning. I've read about (Eval(x_i + 1) - Eval(x_i - 1)) / 2, Eval(x_i + 1) - Eval(x_i), auto-differentiation libraries, the Jacobian matrix, and so for...
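
The two formulas named in the post are the central and forward finite differences. A minimal sketch, assuming a generic `eval_fn` over a parameter list (both names are illustrative; h=1 matches the integer-step formulas quoted above):

```python
# Central difference: (Eval(x_i + h) - Eval(x_i - h)) / (2 * h)
# Forward difference: (Eval(x_i + h) - Eval(x_i)) / h
# Both numerically approximate d(Eval)/d(x_i); central is more accurate
# at the cost of one extra evaluation per parameter.

def central_difference(eval_fn, params, i, h=1):
    up, down = list(params), list(params)
    up[i] += h
    down[i] -= h
    return (eval_fn(up) - eval_fn(down)) / (2 * h)

def forward_difference(eval_fn, params, i, h=1):
    bumped = list(params)
    bumped[i] += h
    return (eval_fn(bumped) - eval_fn(params)) / h
```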
by tomitank
Sun Jan 03, 2021 4:45 pm
Forum: Computer Chess Club: General Topics
Topic: Wasp 4.5 Released
Replies: 20
Views: 2943

Re: Wasp 4.5 Released

Tuning is now done in a similar fashion to back-propagation for neural networks rather than the gradient-descent method... ...each pertinent term is tweaked by a small amount in the direction to reduce error. This is gradient descent. The neural network also uses gradient descent. Backprop prop...
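
The reply's point is that "tweak each term by a small amount in the direction that reduces error" is itself a gradient descent step, the same update rule backpropagation applies layer by layer. A minimal Texel-style sketch in Python; the sigmoid constant K, the learning rate, and all other names are illustrative assumptions, not Wasp's actual implementation:

```python
import math

def sigmoid(score, K=1.0):
    """Map a centipawn-style score to a win probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-K * score))

def gradient_descent_step(weights, positions, lr=0.001, K=1.0):
    """One step: nudge every weight against the mean-squared-error gradient."""
    grad = [0.0] * len(weights)
    for features, result in positions:  # result in [0, 1] from the game outcome
        score = sum(w * f for w, f in zip(weights, features))
        p = sigmoid(score, K)
        # d/dw_i of (p - result)^2 = 2*(p - result) * K*p*(1-p) * f_i
        d = 2.0 * (p - result) * K * p * (1.0 - p)
        for i, f in enumerate(features):
            grad[i] += d * f
    n = len(positions)
    return [w - lr * g / n for w, g in zip(weights, grad)]
```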
by tomitank
Fri Jan 01, 2021 10:11 am
Forum: Computer Chess Club: Programming and Technical Discussions
Topic: NN faster and energy efficient training.
Replies: 4
Views: 1129

Re: NN faster and energy efficient training.

Hi! Hi, Halogen author here. 768x32x1 was the shape of the first network that was able to completely replace my old HCE. How strong was this network? How many examples did you use for learning? Gaining +60 Elo already with a hybrid approach is very impressive for so few training positions. Thanks a...
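
For scale, a 768x32x1 network is small: 768 inputs (12 piece types times 64 squares), one 32-unit hidden layer, and a single output score. A minimal NumPy sketch of a forward pass with that shape; the initialization and ReLU activation are illustrative assumptions, not Halogen's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(32, 768))  # input -> hidden
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(1, 32))    # hidden -> output
b2 = np.zeros(1)

def evaluate(x):
    """x: 768-element 0/1 vector of (piece type, square) occupancy features."""
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return (W2 @ h + b2)[0]           # scalar evaluation score
```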