2 open projects

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson


stockfish vs lczero

Poll ended at Thu May 03, 2018 11:20 pm

lczero will attract more people, leaving the stockfish project to die
2
5%
lczero will saturate so stockfish will continue to prevail
3
8%
both projects will continue to be supported without hurting each other
33
87%
 
Total votes: 38

JJJ
Posts: 1287
Joined: Sat Apr 19, 2014 11:47 am

Re: 2 open projects

Post by JJJ » Fri Apr 27, 2018 12:24 pm

I think Leela will overcome her weakness in tactics, Uri. It's just her way of learning at first. She might even improve on missed mates in 1 or 2, and so on.

10 million games is only 25% of the games AlphaZero played to train.

And anyway, Leela could play 1 billion games if needed, with bigger and bigger nets. So let's see!

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: 2 open projects

Post by Milos » Fri Apr 27, 2018 12:32 pm

JJJ wrote:I think Leela will overcome her weakness in tactics, Uri. It's just her way of learning at first. She might even improve on missed mates in 1 or 2, and so on.

10 million games is only 25% of the games AlphaZero played to train.

And anyway, Leela could play 1 billion games if needed, with bigger and bigger nets. So let's see!
A0 surpassed SF8 after 10 million training games, allegedly, because there is no proof besides that Google advertising leaflet.
You know where LC0 stands at 800,000 playouts a minute: 400 Elo below SF8 on 64 cores at 1 min/move.
Do you really think that increasing network size indefinitely will continue to yield strength improvements?
Sorry, but that's not how things work.

jkiliani
Posts: 143
Joined: Wed Jan 17, 2018 12:26 pm

Re: 2 open projects

Post by jkiliani » Fri Apr 27, 2018 12:55 pm

Milos wrote:
JJJ wrote:I think Leela will overcome her weakness in tactics, Uri. It's just her way of learning at first. She might even improve on missed mates in 1 or 2, and so on.

10 million games is only 25% of the games AlphaZero played to train.

And anyway, Leela could play 1 billion games if needed, with bigger and bigger nets. So let's see!
A0 surpassed SF8 after 10 million training games, allegedly, because there is no proof besides that Google advertising leaflet.
You know where LC0 stands at 800,000 playouts a minute: 400 Elo below SF8 on 64 cores at 1 min/move.
Do you really think that increasing network size indefinitely will continue to yield strength improvements?
Sorry, but that's not how things work.
As far as I'm concerned, that's EXACTLY how things are going to work. The policy improvement operator from MCTS works whether the improvement is in positional evaluation or in tactics; it just can't improve a net beyond its capacity. Raise the capacity, and the self-improvement resumes. If you don't believe that, just wait for, say, three months (the next TCEC), and then we can discuss again whether increasing the net size works after all.
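The "policy improvement operator" idea can be shown with a toy sketch (this is my own illustration, not LC0's actual code): run PUCT-style simulations from a single root with an imperfect prior policy, and the normalized visit counts concentrate on the genuinely best move, giving a better training target than the prior. The action values, prior, and constants below are all made up for the example.

```python
import math
import random

# Hypothetical single-position example: three moves with assumed true
# values and an imperfect prior policy from the "net".
true_value = {"a": 0.2, "b": 0.6, "c": 0.4}   # assumed true action values
prior      = {"a": 0.5, "b": 0.2, "c": 0.3}   # net's (wrong) prior policy

visits = {a: 0 for a in true_value}
value_sum = {a: 0.0 for a in true_value}
c_puct = 1.5

random.seed(0)
for sim in range(1, 801):
    def puct(a):
        # PUCT: mean value (exploitation) plus prior-weighted exploration
        q = value_sum[a] / visits[a] if visits[a] else 0.0
        u = c_puct * prior[a] * math.sqrt(sim) / (1 + visits[a])
        return q + u
    a = max(true_value, key=puct)
    # noisy "rollout" result centered on the action's true value
    value_sum[a] += true_value[a] + random.uniform(-0.1, 0.1)
    visits[a] += 1

total = sum(visits.values())
improved = {a: visits[a] / total for a in visits}
# "b" accumulates the most visits despite its low prior: the visit
# distribution is an improved policy target for training the net.
print(improved)
```

The point of the sketch is that the improvement comes from the search statistics, not from the prior, which is why the mechanism is agnostic to whether the prior's mistake was positional or tactical.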

User avatar
velmarin
Posts: 1600
Joined: Mon Feb 21, 2011 8:48 am

Re: 2 open projects

Post by velmarin » Fri Apr 27, 2018 12:58 pm

Perhaps at some point a hybrid will be born: Stockfish's eval, search, and Syzygy combined with a neural network.
Or is that not possible?
What performance might it have! :?:

noobpwnftw
Posts: 360
Joined: Sun Nov 08, 2015 10:10 pm

Re: 2 open projects

Post by noobpwnftw » Fri Apr 27, 2018 1:10 pm

There are basically two categories of people riding the hype:
1. Those who find the new approach so different and interesting that it's worth a try. I'm neutral towards them, and I believe the strength of NN engines will reach the top tier, because GPUs are relatively cheap in terms of performance per buck.

2. Those who feel motivated when they finally get a chance to believe that their inability to code anything may turn out to be profitable. For those, I enjoy turning their fantasy into a troll thread.

:P

User avatar
velmarin
Posts: 1600
Joined: Mon Feb 21, 2011 8:48 am

Re: 2 open projects

Post by velmarin » Fri Apr 27, 2018 1:17 pm

noobpwnftw wrote:troll

:P
Here's that awful adjective again.

This is a forum; let's be nice.

noobpwnftw
Posts: 360
Joined: Sun Nov 08, 2015 10:10 pm

Re: 2 open projects

Post by noobpwnftw » Fri Apr 27, 2018 1:28 pm

velmarin wrote:awful forum
Nice quote. :D

JJJ
Posts: 1287
Joined: Sat Apr 19, 2014 11:47 am

Re: 2 open projects

Post by JJJ » Fri Apr 27, 2018 1:31 pm

Milos wrote:
JJJ wrote:I think Leela will overcome her weakness in tactics, Uri. It's just her way of learning at first. She might even improve on missed mates in 1 or 2, and so on.

10 million games is only 25% of the games AlphaZero played to train.

And anyway, Leela could play 1 billion games if needed, with bigger and bigger nets. So let's see!
A0 surpassed SF8 after 10 million training games, allegedly, because there is no proof besides that Google advertising leaflet.
You know where LC0 stands at 800,000 playouts a minute: 400 Elo below SF8 on 64 cores at 1 min/move.
Do you really think that increasing network size indefinitely will continue to yield strength improvements?
Sorry, but that's not how things work.
You know, Milos, you should pay attention to what you're writing. In case you haven't noticed, you have never posted a single positive comment in this forum, only the opposite. Negativity is going to make you really sick someday; perhaps you already are.

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: 2 open projects

Post by Milos » Fri Apr 27, 2018 6:11 pm

jkiliani wrote:As far as I'm concerned, that's EXACTLY how things are going to work. The policy improvement operator from MCTS works whether the improvement is in positional evaluation or in tactics; it just can't improve a net beyond its capacity. Raise the capacity, and the self-improvement resumes. If you don't believe that, just wait for, say, three months (the next TCEC), and then we can discuss again whether increasing the net size works after all.
Whether a further increase in net size is going to work in the short term is indeed something we don't know, and it will take some time to see.
But that there exists a point beyond which any further simple increase in net size (such as in the number of blocks and filters) brings nothing is such an obvious property of NNs in general, and of DNNs in particular, that claiming otherwise simply proves one doesn't know much about DNNs.
Based on your other claims about the net so far, I would say it is pretty obvious that your knowledge of and experience with DNNs is very limited.
In your field of interest, for example, there are plenty of indications that NNs offer subpar performance compared to other, simpler ML techniques, so it wouldn't be much of a surprise that your DNN expertise is lacking ;).
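To make "number of blocks and filters" concrete, here is a rough back-of-the-envelope estimate (my own illustration, not LC0 code) of how the convolution weight count in an LC0-style residual tower grows with those two sizes. It counts only the 3x3 convolutions in the residual blocks, ignoring the input convolution, the policy/value heads, and batch-norm parameters; the example configurations are chosen for illustration.

```python
def residual_tower_params(blocks, filters, kernel=3):
    # Each residual block holds two filters->filters convolutions,
    # each with kernel*kernel weights per filter pair.
    return blocks * 2 * (filters * filters * kernel * kernel)

# Weight count scales linearly in blocks but quadratically in filters,
# which is why "bigger and bigger nets" get expensive quickly.
for blocks, filters in [(6, 64), (10, 128), (20, 256)]:
    millions = residual_tower_params(blocks, filters) / 1e6
    print(f"{blocks} blocks x {filters} filters: ~{millions:.1f}M conv weights")
```

Whether strength keeps scaling with these counts is exactly the question under debate; the sketch only shows what "simple net size increase" means in parameters.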

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: 2 open projects

Post by Milos » Fri Apr 27, 2018 6:16 pm

JJJ wrote:You know, Milos, you should pay attention to what you're writing. In case you haven't noticed, you have never posted a single positive comment in this forum, only the opposite. Negativity is going to make you really sick someday; perhaps you already are.
You must be quite the psychologist, then, given your "deep" insights. Well, this is a computer chess forum, so you had better check; you might be in the wrong forum, my friend.
Your ad hominem attempt is pretty lame, considering you can't offer any actual argument on the topic.

Post Reply