Board adaptive / tuning evaluation function - no NN/AI

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

YUFe
Posts: 17
Joined: Sat Jan 11, 2020 2:52 pm
Full name: Moritz Gedig

Re: Representation in metric Space

Post by YUFe » Mon Jan 27, 2020 5:02 pm

DustyMonkey wrote:
Mon Jan 27, 2020 4:14 pm
an autoencoder cannot reduce the state much beyond a well packed traditional encoding
Actually, on its own it cannot reduce it at all.

YUFe
Posts: 17
Joined: Sat Jan 11, 2020 2:52 pm
Full name: Moritz Gedig

Re: More details

Post by YUFe » Tue Apr 21, 2020 7:21 pm

Re-reading my post, I found it too hard to understand not to clarify it.
YUFe wrote:
Sun Jan 19, 2020 10:19 am
We do know which of the myriad of possible following states gave us that value.
By that I meant the board position whose eval() was propagated back to the root.
After each search we have pairs of states with approximately the same backed-up value, even though the immediate eval() yields different values for them.
I was talking about root moves, each of which now has both a propagated value and a direct evaluation.
We know that our eval() is wrong in a particular direction.
For every move with these two evaluations we know whether our eval() judges it too low or too high, and we can try to find out why.
Because the positions resulting from the root moves cannot differ by much, it should not be hard to figure out what made the difference.
They differ by only one move each, so only a few (at most two) pieces account for the difference.
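The comparison described above can be sketched briefly. This is a minimal, hypothetical Python illustration, not anyone's actual engine code: the move names, the centipawn numbers, and the function names (eval_error and the root_moves table) are all made up for the example. The idea is just that the sign of (search value - static eval) tells you in which direction eval() misjudges each root move's resulting position.

```python
# Sketch: compare each root move's static eval() with the value the search
# propagated back for it, and use the signed error to flag whether the
# static evaluation judged that position too low or too high.
# All data here is invented for illustration (centipawns, side to move).

root_moves = {
    "Nf3": (20, 35),   # search values it higher than eval() does
    "e4":  (30, 28),   # close agreement
    "h4":  (10, -40),  # eval() judges it far too high
}

def eval_error(static_cp, search_cp):
    """Signed error of the static eval: positive means eval() is too low."""
    return search_cp - static_cp

for move, (static_cp, search_cp) in root_moves.items():
    err = eval_error(static_cp, search_cp)
    if err > 0:
        verdict = "eval() too low"
    elif err < 0:
        verdict = "eval() too high"
    else:
        verdict = "exact"
    print(f"{move}: static={static_cp} search={search_cp} -> {verdict} ({err:+d})")
```

Since all root positions share the same parent, a move with a large error can then be compared against its siblings: the few squares on which the positions differ are the natural suspects for the term of eval() that needs tuning.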

Post Reply