## Bluefish vs Leela in TCEC, who will win? BF = 170 threads, Lc0 = 2x GPU

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson

corres
Posts: 1581
Joined: Wed Nov 18, 2015 10:41 am
Location: Hungary

### Re: Bluefish vs Leela in TCEC, who will win? BF = 170 threads, Lc0 = 2x GPU

chrisw wrote:
Fri May 10, 2019 12:01 pm
corres wrote:
Fri May 10, 2019 11:12 am
chrisw wrote:
Fri May 10, 2019 10:39 am
corres wrote:
Fri May 10, 2019 10:24 am
chrisw wrote:
Fri May 10, 2019 10:16 am
corres wrote:
Fri May 10, 2019 10:03 am
An NN is a "black box" that stores a huge number of connections between board positions and the pre-evaluated worth of those positions. The pre-evaluation happened during self-play learning, and the result of that learning is contained in the NN file that Leela uses during search.
During search, Leela gives a position to the NN as input and reads out, as output, a vector
of move probabilities plus a probability number that refers to the winning chance of the game
from the position.
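As a toy sketch of that input/output shape (not Lc0's actual code; only the 1858-slot policy size is Lc0's documented move encoding, every other name and number below is invented for illustration):

```python
# Toy stand-in for an Lc0-style forward pass: one policy vector plus one
# scalar value per position. POLICY_SIZE = 1858 matches Lc0's fixed move
# encoding; the "network" itself is just a seeded random generator.
import random

POLICY_SIZE = 1858

def toy_net(position_key):
    """Fake forward pass: returns (move probabilities, expected outcome)."""
    rng = random.Random(sum(map(ord, position_key)))  # repeatable toy output
    raw = [rng.random() for _ in range(POLICY_SIZE)]
    total = sum(raw)
    policy = [p / total for p in raw]   # probabilities over all encoded moves
    value = rng.uniform(-1.0, 1.0)      # winning-chance estimate in [-1, 1]
    return policy, value

policy, value = toy_net("startpos")
```

The point of the sketch is only the shape of the answer: one probability per encoded move, summing to one, plus a single scalar for the expected game outcome.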
The question is whether we can rebuild the position from the evaluation value of an AB engine, or from the evaluation of an NN engine.
The evaluation of an AB engine is a pure number measured in centipawns. Obviously a centipawn value says practically nothing about the position from which it was derived.
Evaluation of an NN engine contains the move vector what refers to each possible moves for black and white from that position. Can we rebuild the position from what that move vector was derived? Practically the answer is Yes.
The answer is no. You have either not thought deeply enough about this, or you have an incorrect model of the policy matrix.
In this sense the NN contains "a kind of database", as I stated earlier.
You are mixing up the "policy matrix" with the "value matrix".
The probability vector of move probabilities and the winning chance are contained in the "value head" (= value matrix).
The "policy head" (= policy matrix) is only a helper for steering the search the right way.
No. I don’t mix anything. Thank you for the beginner’s lecture.
If you think you can reconstitute the position from the policy matrix, you have an incorrect model of the policy matrix.
With pleasure.

I wrote nothing about a "policy matrix".
Please stick to my text if you refer to it.
You prefer “move vector”. No problem. You wrote exactly this:
“Evaluation of an NN engine contains the move vector what refers to each possible moves for black and white from that position. Can we rebuild the position from what that move vector was derived? Practically the answer is Yes”.

Error 1. The "NN" "contains" one side’s "moves" only. Not black and white. Good luck with rebuilding a position from one side’s moves only.

Error 2. A "move vector" is origin square to destination square. No information about piece type. Good luck working out whether e4e5 is a pawn, queen, rook or king move when, by definition, the actual piece positions are not known.
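Error 2 can be illustrated with a tiny sketch (the mini-boards below are invented examples, not anything from Lc0): the from-square/to-square string alone never identifies the piece; only a board does.

```python
# Invented mini-boards: the same from-to move "e4e5" is a pawn push in
# one position and a king move in the other. The move string itself
# carries no piece information.
position_a = {"e4": "P"}   # white pawn on e4
position_b = {"e4": "K"}   # white king on e4

def piece_moving(board, move_uci):
    """Only the board, not the move string, reveals the piece type."""
    return board[move_uci[:2]]

assert piece_moving(position_a, "e4e5") == "P"
assert piece_moving(position_b, "e4e5") == "K"
```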

Error 3. Your "move vector", policy matrix, call it what you like, is a map of move probabilities for all possible moves in chess, not just for the position you don’t know and want to reconstruct. Some valid moves will have very low probabilities. Some non-existent moves will have indeterminate but finite probabilities. Good luck, absent the position, with disentangling what represents a move and what doesn’t.

Error 4. Your "move vector" is only a valid list of legal moves because you already know the legal moves, and you only know the legal moves if you know the position. But you don’t, by definition, know the position you are trying to reconstruct.

So "not even wrong" (Pauli) applies. Bad model, insufficient thought, error on top of error, no logic, leading to a nonsense conclusion.

You’re welcome.
I very much like your "objective", "logical" and well-established criticism.
So I have no further comment for you.