Lyudmil Tsvetkov wrote:The training matches are different from the 100 games match with Stockfish.
Yes, the plot in the diagram is from the training games, but 100 games per opening were played, 50-50, and the score below the diagram is from AlphaZero's perspective.
12 openings with reversed colours don't square in any way with 100 played games, so did they actually leave some openings played more often than others, or did they not flip colours?
Lyudmil Tsvetkov wrote:
The training matches are different from the 100 games match with Stockfish.
It is not at all clear to me where books were used and where not.
12 openings with reversed colours don't square in any way with 100 played games, so did they actually leave some openings played more often than others, or did they not flip colours?
I'm sure opening books were not used...
In the early self-play games things like 1.a3, 1.a4, etc. were probably tried by AlphaZero...
eventually it learned that 1. e4 or 1. d4 had the highest success rates.
How can you be sure if they don't specify it?
And it learned wrong. But 1.Nf3?
Is this engine still based on random choices? What perfect engine are we talking about then?
Lyudmil Tsvetkov wrote:12 openings with reversed colours don't square in any way with 100 played games, so did they actually leave some openings played more often than others, or did they not flip colours?
12 openings x 100 = 1,200 games total.
Before we were talking about 300 and 100, now 1200 suddenly appears...
The 64/36 score certainly comes from 100 games, unless they assigned random points for a win.
And in that sample, I see Alpha playing just 1.d4 and 1.Nf3.
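As an aside on what that 64/36 result implies: under the standard Elo logistic model, an expected score maps directly to a rating gap. A minimal sketch (the 64% figure is the match score discussed above; the formula is the usual Elo expected-score relation, nothing from the paper itself):

```python
import math

def elo_diff(score: float) -> float:
    """Rating difference implied by an expected score, per the standard Elo logistic model."""
    return -400 * math.log10(1 / score - 1)

# AlphaZero scored 64 out of 100 points in the reported match.
print(round(elo_diff(0.64)))  # about a 100-point rating gap
```

So a 64-36 match score corresponds to a gap of roughly 100 Elo points, whatever one thinks of the hardware conditions.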
jdart wrote:Even Stockfish on a Raspberry Pi is a strong chess player, though. So AlphaZero has decent performance, we just need a better comparison. I don't think there is any fundamental reason a NN based system such as AlphaZero couldn't run on commodity hardware, so maybe that can happen.
--Jon
Well, probably lower than 3000 Elo.
I don't think any engine below 3000 Elo would crush Stockfish on strong hardware.
This is a specifically built engine running on very specific hardware.
It is not that easy to find promising advances in chess evaluation unless you get rid of all redundancies and come up with completely new terms, which they have not done, so it is simply logically impossible to improve vastly in that way.
I cannot believe something that is simply unachievable.
On a single core, if they adapted their code that way, Alpha would certainly be weaker than 2900.
But actually, why don't they offer a single-core standard version to convince everyone?
Lyudmil Tsvetkov wrote:Before we were talking about 300 and 100, now 1200 suddenly appears...
The 64/36 score certainly comes from 100 games, unless they assigned random points for a win.
And in that sample, I see Alpha playing just 1.d4 and 1.Nf3.
Read the series of posts properly. It is 100 games per opening; you clearly don't understand Table 2.
300 games, because you were talking about 1.e4 earlier, which appears in 6 diagrams.
How much is 50 x 6?
You were claiming that AlphaZero didn't play 1.e4; I told you it did! It played 1.e4 300 times against SF8.
See the total summation below: 1,200 games for all 12 openings. Come on man, do we really have to argue even this very basic stuff?
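The counts under dispute are simple multiplication; here is a quick check of the figures quoted in this thread (12 openings, 100 games each, and 1.e4 appearing in 6 of the 12 opening diagrams):

```python
openings = 12
games_per_opening = 100            # 50 as White, 50 as Black per opening
total_games = openings * games_per_opening
print(total_games)                 # 1200 games in total across Table 2

# 1.e4 appears in 6 of the 12 opening diagrams;
# AlphaZero plays each of those openings 50 times per colour.
e4_games = 6 * 50
print(e4_games)                    # 300 games starting 1.e4
```

So 300 and 1,200 are not contradictory figures; one is a subset of the other.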
MikeGL wrote:See the total summation below: 1,200 games for all 12 openings. Come on man, do we really have to argue even this very basic stuff?
I suppose when the wheel was first invented, there must have been people like Lyudmil Tsvetkov around who said it would never work.
Throughout the history of mankind there have always been people who flatly refused to believe in revolutionary discoveries and inventions.
I suppose this falls into the same category.
Since none of us has access to a TPU, it's only fair to count in terms of what is available (for example, if we had an AlphaZero x64 binary and wanted to run it at home).
1 TPU ~ 30 x E5-2699v3 (an 18-core machine).
4 TPUs ~ 2,000 Haswell cores.
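Taking the rough equivalence quoted above at face value (the 1 TPU ≈ 30 x E5-2699v3 figure is the poster's estimate, not a measured benchmark), the ~2,000-core number follows directly:

```python
cores_per_machine = 18    # Intel Xeon E5-2699v3 (Haswell) core count
machines_per_tpu = 30     # rough equivalence claimed in the post above
tpus = 4                  # AlphaZero's reported inference hardware

haswell_cores = tpus * machines_per_tpu * cores_per_machine
print(haswell_cores)      # 2160, i.e. roughly 2,000 Haswell cores
```

The exact figure is 2,160, which the post rounds down to 2,000.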
Apples and bananas:
Stockfish is not able to make use of these TPUs,
and AlphaZero probably depends heavily on floating-point operations (maybe half precision) to query the neural network.
So the question might be whether a stripped-down x86-64 version of AlphaZero, with only a few hundred or thousand nps, would still be able to beat Stockfish... dunno.
--
Srdja
No, the original paper is comparing apples and bananas. SF is running on general-purpose hardware, while TPUs are not commercially available, so running AlphaZero on TPUs gives it a huge unfair advantage.
It would be like running SF on special hardware where the search happens on a conventional CPU and all evaluation is handled by hundreds if not thousands of FPGAs, something like Deep Blue. Then we could say the comparison was fair.
Even in this setup, if it had been the most recent version of Brainfish (so with an opening book), and a normal TC like 40/40 rather than 1 move/min, AlphaZero would probably lose.
- I was wrong; it looks like they are doing 8-bit integer operations for inference.
MikeGL wrote:Read the series of posts properly. It is 100 games per opening; you clearly don't understand Table 2.
You were claiming that AlphaZero didn't play 1.e4; I told you it did! It played 1.e4 300 times against SF8.
See the total summation below: 1,200 games for all 12 openings. Come on man, do we really have to argue even this very basic stuff?
We are talking here about the 100-game match, for which we have the PGN.
Do you have the PGN for the training games, which, btw., are claimed to run into the thousands?