I very much like your "objective", "logical", and well-established criticism.

chrisw wrote: ↑Fri May 10, 2019 12:01 pm
You prefer "move vector". No problem. You wrote exactly this:

corres wrote: ↑Fri May 10, 2019 11:12 am
With pleasure.

chrisw wrote: ↑Fri May 10, 2019 10:39 am
No. I don't mix anything. Thank you for the beginners' lecture.

corres wrote: ↑Fri May 10, 2019 10:24 am
You mix the "policy matrix" with the "value matrix".

chrisw wrote: ↑Fri May 10, 2019 10:16 am
The answer is no. You have either not thought deeply enough about this, or you have an incorrect model of the policy matrix.

corres wrote: ↑Fri May 10, 2019 10:03 am
A NN is a "black box" that stores a huge number of connections between board positions and the pre-evaluated worth of those positions. The pre-evaluation happened during self-play learning. The result of self-play learning is contained in the NN file that Leela uses during the search.
In this process, Leela gives a position to the NN as input and reads out from the NN, as output, a vector of move probabilities and a probability number that refers to the winning chance of the game from that position.
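The input/output contract described above can be sketched as follows. This is a minimal illustration only; the function names, the candidate moves, and all the numbers are hypothetical stand-ins, not Leela's real API or real network outputs:

```python
import math

def softmax(logits):
    """Normalize raw network outputs into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def evaluate_position(position):
    """Stand-in for a real forward pass through the network.

    A real net would consume encoded board planes; here we return fixed
    numbers purely to show the shape of the two outputs: a probability
    per candidate move (policy) and a single winning-chance estimate (value).
    """
    candidate_moves = ["e2e4", "d2d4", "g1f3"]   # hypothetical moves
    policy_logits = [1.2, 0.9, 0.4]              # hypothetical raw outputs
    policy = dict(zip(candidate_moves, softmax(policy_logits)))
    value = 0.54                                 # hypothetical winning chance
    return policy, value

policy, value = evaluate_position("startpos")
assert abs(sum(policy.values()) - 1.0) < 1e-9    # policy is a distribution
assert 0.0 <= value <= 1.0                       # value is a probability
```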
The question is whether we can rebuild the position from the evaluation value of an AB engine, or from the evaluation output of an NN engine.
The evaluation of an AB engine is a pure number with the dimension of centipawns. Obviously, a centipawn value says practically nothing about the position from which it was derived.
The evaluation of an NN engine contains the move vector, which refers to each possible move for Black and White from that position. Can we rebuild the position from which that move vector was derived? Practically, the answer is yes. In this sense the NN contains "a kind of database", as I stated earlier.
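The contrast drawn here between the two kinds of evaluation output can be made concrete. The values below are purely illustrative and not taken from any real engine:

```python
# An alpha-beta engine's evaluation: one scalar, in centipawns.
ab_evaluation_cp = 37

# An NN engine's evaluation: a probability per candidate move plus a
# winning-chance estimate. Moves and numbers here are hypothetical.
nn_evaluation = {
    "policy": {"e2e4": 0.48, "d2d4": 0.31, "g1f3": 0.21},
    "value": 0.55,
}

# The scalar collapses everything about the position into one number;
# the move vector at least preserves per-move information.
assert isinstance(ab_evaluation_cp, int)
assert abs(sum(nn_evaluation["policy"].values()) - 1.0) < 1e-9
```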
The vector of move probabilities and the winning chance are contained in the "value head" (= value matrix).
"Policy head" (=policy matrix) is the helper for searching to the right way only.
If you think you can reconstitute the position from the policy matrix, you have an incorrect model of the policy matrix.
I wrote nothing about "policy matrix".
Please stick to my text when you refer to it.
“The evaluation of an NN engine contains the move vector, which refers to each possible move for Black and White from that position. Can we rebuild the position from which that move vector was derived? Practically, the answer is yes.”
Error 1. The “NN” “contains” one side's “moves” only, not Black's and White's. Good luck with rebuilding a position from one side's moves only.
Error 2. A “move vector” entry is an origin square and a destination square. No information about piece type. Good luck working out, with the actual piece placement by definition unknown, whether e4e5 is a pawn, queen, rook, or king move.
Error 3. Your “move vector”, the policy matrix, call it what you like, is a map of move probabilities for all possible moves in chess, not just those of the position you don't know and want to reconstruct. Some valid moves will have very low probabilities. Some non-existent moves will have indeterminate but finite probabilities. Good luck, absent the position, with disentangling which entries represent moves and which don't.
Error 4. Your “move vector” is a valid list of legal moves only because you already know the legal moves, and you only know the legal moves if you know the position. But you don't, by definition, know the position; it is what you are trying to reconstruct.
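Error 2 can be demonstrated concretely. The FENs below are hypothetical positions constructed so that a different piece stands on e4 in each, yet "e4e5" is a legal move in all of them; the move string alone cannot recover the piece type, let alone the position. A sketch in plain Python, using no chess library:

```python
# Four legal positions, each with a different white piece on e4,
# and in each of them "e4e5" names a legal move.
positions = {
    "pawn":  "8/8/8/8/4P3/8/8/K6k w - - 0 1",
    "king":  "8/8/8/8/4K3/8/8/7k w - - 0 1",
    "queen": "7k/8/8/8/4Q3/8/8/K7 w - - 0 1",
    "rook":  "8/8/8/8/4R3/8/8/K6k w - - 0 1",
}

def piece_on_e4(fen):
    """Read the piece letter on square e4 out of a FEN string."""
    board_part = fen.split()[0]
    ranks = board_part.split("/")      # FEN lists rank 8 first, so e4 is ranks[4]
    file_idx = 0
    for ch in ranks[4]:
        if ch.isdigit():
            file_idx += int(ch)        # a digit skips that many empty squares
        else:
            if file_idx == 4:          # file 'e' is index 4
                return ch
            file_idx += 1
    return None

pieces = {name: piece_on_e4(fen) for name, fen in positions.items()}
# Four distinct piece types, one shared move string: "e4e5" alone cannot
# tell us which position (or even which piece) it came from.
assert len(set(pieces.values())) == 4
```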
So "not even wrong" (Pauli) applies. Bad model, insufficient thought, error on top of error, no logic, leading to a nonsense conclusion.
So I have no further comment for you.