Dann Corbit wrote:LCZero is the most interesting thing that happened in computer chess in the last 30 years.
No way.
Deep Blue (1997) and the introduction of Rybka 1.0 (which dominated for 10 consecutive years), and maybe others.
Leela is very interesting too; hopefully the sky is her limit.
I agree.
LOL
A mediocre engine on dedicated chess-specific supercomputer hardware, and a Fruit on steroids (bitboards + material tables), both needing an opening book to look reasonable at that stage, compared to picture-seeing chess software on a domestic GPU? You know what, pictures work less well in chess than in Go, but it seems they still work, and in the opening (and not only there) even humans sometimes see pictures. This thing is surely different; heck, even I as a patzer can see it plays differently. Completely differently.
Don't get me wrong, Leela does play differently. She plays great chess, great ATTACKING chess, with beautiful moves that create weaknesses in the opponent's position, especially in king attacks.
But I was speaking about Dann's quote: "is the most interesting thing that happened in computer chess in the last 30 years."
Back in 1997, Deep Blue (the DB vs. Kasparov match) was everywhere for years to come; it played some very powerful moves that the late-90s engines couldn't find, and it was compared and discussed for many years afterwards.
Rybka 1.0 also brought a revolution, with games full of material imbalances and utter domination. Fruit or no Fruit, she was the point of reference, dominating everything for 10 consecutive years.
After his son's birth they asked him:
"Is it a boy or girl?"
YES! He replied.....
Dann Corbit wrote:Kai has shown that LCZero does far better than linear strength increase with increased time. The best standard programs are sublinear.
I guess few understand how utterly revolutionary that is.
This is not just a new algorithm. This is a revolution.
There is nothing revolutionary here if you understand how things work.
That scaling behavior is just a consequence of MCTS. With too few playouts the engine is very weak; you need a threshold number of playouts to get reasonable performance. Once you are over the threshold, scaling continues in an ordinary fashion.
The problem with Kai's tests is that they are performed at an extremely low number of playouts, so he gets an artificially huge benefit every time he doubles the TC. It would be similar to testing SF at TC = 10 ms/move, then 20 ms/move, then 40 ms/move, etc.: you'd see extremely good scaling.
If he could run a TC where LC0 gets 100k+ playouts per move and then test scaling at 200k per move, 400k per move, etc., you'd notice that the scaling is no better than a strong AB engine's.
Scorpio MCTS behaves that way; practically any engine with LC0's search and a shallow AB search in place of the NN evaluation would perform in exactly the same way. Reinforcement learning and DNNs have nothing to do with it.
And please don't cite Figure 2 from that quasi-scientific A0 preprint, because it has been shown to be bogus (at least the SF performance).
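The claim that the scaling curve is a property of the MCTS search rather than of the network can be made concrete with the AlphaZero-style PUCT selection rule that LC0-type engines use. A minimal sketch (the function name and the c_puct value are illustrative, not LC0's actual code):

```python
import math

def puct_score(child_value, child_visits, child_prior, parent_visits, c_puct=1.5):
    """PUCT score as used in AlphaZero-style MCTS: an exploitation term Q
    plus a prior-weighted exploration bonus U that shrinks as the child
    accumulates visits."""
    q = child_value / child_visits if child_visits > 0 else 0.0
    u = c_puct * child_prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u
```

At very low visit counts the exploration term dominates, which is why strength changes so steeply when the playout budget is tiny; once visit counts are large, the bonus shrinks roughly like 1/N and extra playouts refine Q only gradually.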
You are saying that Scorpio scales in a super-linear manner like LCZero?
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
You are exaggerating a bit (or by a lot). Even the games are at 100-200 playouts/move, and this test on 4 threads is 2000 playouts vs. 500 playouts per position: http://www.talkchess.com/forum/viewtopi ... &start=197
The scaling is completely different, and that is not SF at 10 ms/move. I generally compare with engines of similar strength under the same conditions, and at similar strength, in the same conditions, the scaling is completely different.
The difference here is that Kai, Dann and most of the others look at all the available data and draw their conclusions from it, while Milos already knows the conclusion and selects which data to consider accordingly.
You are saying that Scorpio scales in a super-linear manner like LCZero?
Scorpio MCTS yes. Ask Daniel or you can test it yourself.
You are exaggerating a bit (or by a lot). Even the games are at 100-200 playouts/move, and this test on 4 threads is 2000 playouts vs. 500 playouts per position: http://www.talkchess.com/forum/viewtopi ... &start=197
The scaling is completely different, and that is not SF at 10 ms/move. I generally compare with engines of similar strength under the same conditions, and at similar strength, in the same conditions, the scaling is completely different.
No, it is not. Of course it makes a difference whether you compare scaling in time or scaling in threads, and of course scaling in threads will be more or less linear and independent of engine strength, since that is an MCTS property.
However, scaling in time is the one you claim is different and I claim is not, compared to a strong AB engine. What differs is the point at which the superlinearity saturates.
If you had really powerful hardware (like 4 TPUs or 40-100 1080Tis), LC0 could be considered a strong engine (maybe only 300 Elo below SF on TCEC hardware). Then it would make absolutely no sense to test it against the mediocre engines you are testing it against now. You are not testing against engines of similar strength: you are testing on an extremely handicapped hardware setup (compared to how LC0 was meant to run), so those engines look as if they are of similar strength, but you are just shifting your reference point. Repeat the test with LC0 at 500 playouts/move vs. SFdev at 10k nodes/move, and then LC0 at 2000 playouts/move vs. SFdev at 40k nodes/move, and you'll see that suddenly LC0 doesn't scale any better.
I am testing SF8 at 2 min/move on 24 threads vs. SF8 at 2 s/move on 24 threads, and the current Elo difference is 500 Elo (roughly a 90% score). With 1.2 s/move vs. 20 ms/move, also on 24 threads, the score was 95% (around 550 Elo).
Do you really believe LC0 at 120k playouts/move vs. LC0 at 2k playouts/move would score more than 90%?
Sorry, but for that we need an actual test, not the quite irrelevant toy examples you are running.
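The Elo-vs-score figures quoted above can be sanity-checked with the standard logistic Elo model (a sketch; the helper name is ours, and real match data with many draws can deviate from it):

```python
def expected_score(elo_diff):
    """Expected score for the stronger side under the standard
    logistic Elo model: E = 1 / (1 + 10^(-diff/400))."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))
```

Under this model a 400 Elo gap gives about 91% and a 500 Elo gap about 95%, so a measured 90% score corresponds to roughly 380 Elo; a draw-heavy sample or a different rating model would explain the 500 Elo / 90% pairing above.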
This topic is most boring.
We had it for 18 years with Nightmare in Leiden, with an interesting book by the author about it.
As long as people have nothing in their brains but LC Zero, once weekly on TalkChess is more than enough.
But it's a good time to play with the older chess computers. For me that is much more interesting than LC Zero. That topic is much older than just 18 years and, sorry to the others, 100000% more interesting.
Best
Frank
PS: A special forum for such an old and stale topic could be a good idea. The moderators are sleeping here!
hgm wrote:In flat view more than half the threads on the first page are about topics other than LCZero. Is it that you are using threaded view? One of the LCZero threads is getting quite long now; perhaps it should be locked and a new one started.
I guess the main problem is that there isn't much to report beyond LCZero, and the posting intensity really reflects what the TalkChess community is doing at the moment.
Just my $.02: LCZero is not going away, and I believe it is fairly obvious that this is a huge paradigm shift in chess programming, bigger than anything I can think of in recent or distant memory. And it's not just LCZero but the whole genre, call it "Deep Chess Neural Networks" or whatever; it is a topic that clearly deserves its own forum heading, and the sooner the better. It is not going away, and the interest will only increase. I applaud all of those who have taken up "Deep Chess Neural Networks"; it is certainly fascinating to watch it unfold. It reminds me a little of when a core group here got into engine testing and eagerly posted their results, and eventually the powers that be saw fit to give them the forum they rightfully deserved as well.
Do you really believe LC0 at 120k playouts/move vs. LC0 at 2k playouts/move would score more than 90%?
Sorry, but for that we need an actual test, not the quite irrelevant toy examples you are running.
What is a toy here? Is Stockfish's development testing at 15''+0.1'' also a toy? It seems to work. Now, what really is not a toy: on an opening positional test suite, mimicking a strong GPU (a 6x factor compared to my CPU), I got the following in perfectly equal and reasonable conditions, not toy-like at all for any of the competitors (20 s/position, 4 cores at most):
That it plays positionally in the opening like the top engines on a strong GPU is interesting, if not more than that. Also, what strikes me is not the scaling but the holistic understanding, a sort of "intuition" (pattern recognition), that LC0 has, compared to regular engines' tricks to avoid blunders and their hand-crafted knowledge written by humans. I would guess that at this pace, in 2-3 months we will see the beginning of the rewriting of some chapters of human opening theory.