probability is certainly not different from the thought of a human.

corres wrote:
The team of AlphaZero stated that their machine taught itself from zero knowledge of chess, using only the rules of chess as a starting point.

mclane wrote:
Stockfish and Komodo and Houdini play machine chess.
AZ plays human-like chess.
It sacs pieces for attack, development, or space.
And the sacs do NOT lead to material compensation soon; otherwise
Stockfish, Komodo, and Houdini would see the sac and not take the piece.
It seems AZ plays chess outside the interval that Stockfish, Komodo, and Houdini search.
This is very funny to replay.
Stockfish gets smashed down like an idiot.
I guess any human chess player, no matter at which level he plays, can observe that Stockfish has no chance at all to win.
The way AZ plays is so different from that of the normal chess programs that I feel sorry for them.
They all play machine chess within the interval of 20-30 plies that they search.
But outside this interval, AZ kills them with very easy moves.
A human mind can understand those moves, but for a chess program with a search tree it seems those moves are very difficult to understand.
We see human chess beat machine chess,
with AZ playing like a machine-emulated human.
In your opinion, what is the explanation for the phenomenon you described above? The "thinking" of AlphaZero is based on vectors and probability.
This is very different from human thought.
If humans play games against themselves in a new position that they do not know, and they see no way to win, they learn that they probably cannot win.
This is the way humans who do not know about the blind (wrong-colored) bishop discover it
and understand that some KBP vs K endgames are drawn, without previous knowledge.
Of course you can teach the program specific knowledge about KBP vs K, but there are many different cases and you cannot cover them all, so you should teach it to think the way humans do, discovering such things during the game.
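The learning-by-self-play idea above can be sketched in a few lines. As a hedged toy analogy (single-pile Nim rather than chess, since a real KBP vs K learner would need a full endgame engine), the snippet below has an agent start with zero knowledge and, purely by playing against itself and averaging outcomes, discover that certain positions (piles that are a multiple of 4) are "probably unwinnable" for the player to move, much as a human discovers a drawn endgame. All names and parameters here are illustrative, not from any AlphaZero code.

```python
import random

# Toy self-play learner on single-pile Nim: remove 1-3 stones per move,
# taking the last stone wins. Piles 4 and 8 are theoretically lost for
# the player to move; the agent is told nothing except the rules.
MAX_PILE, N_GAMES, EPS = 8, 30000, 0.2
wins = {n: 0 for n in range(1, MAX_PILE + 1)}    # wins when moving from pile n
visits = {n: 0 for n in range(1, MAX_PILE + 1)}  # games reaching pile n to move

def win_rate(n):
    """Observed probability of winning when it is our turn at pile n."""
    return wins[n] / visits[n] if visits[n] else 0.5  # unknown -> 50/50

def choose(n):
    """Epsilon-greedy: usually leave the opponent the worst-looking pile."""
    moves = list(range(1, min(3, n) + 1))
    if random.random() < EPS:
        return random.choice(moves)
    # taking the last stone (n - m == 0) is an immediate win for us
    return min(moves, key=lambda m: 0.0 if n - m == 0 else win_rate(n - m))

def play(start):
    """One self-play game; credit each position to the player who moved."""
    history, player, n = [], 0, start
    while n > 0:
        history.append((player, n))
        n -= choose(n)
        player ^= 1
    winner = player ^ 1  # whoever just took the last stone
    for mover, pile in history:
        visits[pile] += 1
        wins[pile] += (mover == winner)

random.seed(0)
for _ in range(N_GAMES):
    play(random.randint(1, MAX_PILE))
```

After training, `win_rate(4)` and `win_rate(8)` sit well below 50% while winnable piles like 3 and 5 sit well above it: the agent has learned, without being taught, that it "probably cannot win" from those positions, which is exactly the kind of discovered knowledge the paragraph above argues for.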