Where are the funs of Leela?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

corres
Posts: 3657
Joined: Wed Nov 18, 2015 11:41 am
Location: hungary

Re: Where are the funs of Leela?

Post by corres »

ChiefPushesWood wrote: Wed Jun 12, 2019 11:12 pm I won't respond again to any of your posts regarding this...
Chief
At last, a wise answer...
There is no sense in repeating one's monomania.
ChiefPushesWood
Posts: 62
Joined: Thu Nov 08, 2018 6:30 pm
Full name: Chief PushesWood

Re: Where are the funs of Leela?

Post by ChiefPushesWood »

corres wrote: Wed Jun 12, 2019 11:16 pm There is no sense in repeating one's monomania.
/end point
Dann Corbit
Posts: 12538
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Where are the funs of Leela?

Post by Dann Corbit »

Ovyron wrote: Wed Jun 12, 2019 11:13 pm
Dann Corbit wrote: Wed Jun 12, 2019 6:42 pm Nobody is ever satisfied with the openings, even though competent people like Cato and Jeroen work on them.

Every good chess player wants their favorite chess opening included.
You will never satisfy everyone.
Please reread the sentences you posted and see if they make sense in chess. They don't; they only make sense in "computer chess". Why is this?

Nobody expects Carlsen or Caruana to play their favorite openings [of the audience]; they get on the chessboard and play the best they can. Cato or Jeroen aren't picking openings for them and forcing them to play a thematic match.

This should be emulated for chess engines: let them play the variations most advantageous to them, like humans do. This process could even be automated without human intervention. GUIs like ChessBase or InfinityChess already come with similar features, where openings that perform badly are discouraged and openings that perform well are encouraged; engines are able to use .bin books and edit their weights to improve their lines. And I'm not even mentioning all the work some programmers have put in so their engines have book learning. All of this is irrelevant because the people in charge of the computer chess world decided to go for generic book lines nobody sane would play.

A person should be satisfied because an engine played the line that gave it the best performance against a certain opponent, not because they like the line played. If the TCEC champion or the top engine of the CCRL gets decided by garbage chess lines, we can only expect it to be a garbage champion.
All this does is introduce more uncertainty about which engine is the strongest.
Consider Sedat's book contests like this one:
https://sites.google.com/site/computers ... book-cs-24
All of the books are different and all of the engines (in this case asmFishWCP 130519 BMI2) and hardware are identical.
So we see that changing the book can change the rating by 113 Elo.
So we can conclude that AsmFish is 113 Elo stronger or weaker than itself, depending on which book is chosen. Does that make sense to you?
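For readers who want to sanity-check a figure like that, the standard logistic Elo model converts between a rating gap and an expected score. A small illustrative sketch (not tied to Sedat's actual data):

```python
import math

def elo_diff(score: float) -> float:
    """Elo difference implied by an expected score (0 < score < 1)."""
    return -400 * math.log10(1 / score - 1)

def expected_score(elo: float) -> float:
    """Inverse: expected score implied by an Elo advantage."""
    return 1 / (1 + 10 ** (-elo / 400))

# A 113 Elo edge corresponds to roughly a 66% expected score,
# i.e. book choice alone can swing the score by that much here.
print(round(expected_score(113) * 100, 1))  # → 65.7
```

So a 113 Elo spread between identical engines really means the best book scored about two thirds of the points the worst one would have against the same opposition.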

Now it is true that in a case like that we are finding what the strength is of (engine + book).
Nobody likes those contests. That is why nobody runs them.
Almost all the big contests are uniform book.
There are a few contests that allow any book and any hardware. What do we learn from these contests?
We don't learn what book is strongest. We don't learn what engine is strongest, and we don't learn what hardware is strongest.
But we also get a champion out of these.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Leo
Posts: 1080
Joined: Fri Sep 16, 2016 6:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Re: Where are the funs of Leela?

Post by Leo »

Dann Corbit wrote: Thu Jun 13, 2019 12:56 am
...
I am most interested in the most powerful chess entity. I have a huge interest in engine-plus-book events. As far as I am concerned, LCZ has its own built-in book, so when playing against AB engines it has a built-in advantage.
Advanced Micro Devices fan.
Leo
Posts: 1080
Joined: Fri Sep 16, 2016 6:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Re: Where are the funs of Leela?

Post by Leo »

Leo wrote: Thu Jun 13, 2019 3:11 am
...
I am most interested in the most powerful chess-playing entity. I have a huge interest in engine-plus-book events. As far as I am concerned, LCZ has its own built-in book.
Advanced Micro Devices fan.
Uri Blass
Posts: 10269
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Where are the funs of Leela?

Post by Uri Blass »

Leo wrote: Thu Jun 13, 2019 3:11 am
...
I am most interested in the most powerful chess entity. I have a huge interest in engine-plus-book events. As far as I am concerned, LCZ has its own built-in book, so when playing against AB engines it has a built-in advantage.
LCZ has its own built-in book based only on its games against itself.

I think it may be good to have an automatic tool that builds a book for every chess engine from its games against itself, so a fair competition could start by giving every engine a long time to build a book, and only after that start to play the games.
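As a sketch of how such a tool might work: treat each self-play game as a (moves, result) pair and keep only opening lines that scored well for the side playing them. Everything here (data format, depth, thresholds) is an illustrative assumption, not an existing tool:

```python
from collections import defaultdict

BOOK_DEPTH = 8  # only keep statistics for the first 8 plies

def build_book(games):
    """games: iterable of (moves, result); result is 1.0/0.5/0.0 for White."""
    stats = defaultdict(lambda: [0.0, 0])  # line -> [points, games]
    for moves, result in games:
        line = ()
        for ply, move in enumerate(moves[:BOOK_DEPTH]):
            # score from the point of view of the side that played `move`
            score = result if ply % 2 == 0 else 1.0 - result
            line = line + (move,)
            stats[line][0] += score
            stats[line][1] += 1
    # keep only lines seen several times that scored at least 50%
    return {k: p / n for k, (p, n) in stats.items() if n >= 3 and p / n >= 0.5}

games = [(["e4", "e5", "Nf3"], 1.0)] * 3 + [(["d4", "d5"], 0.0)] * 3
book = build_book(games)
print(("e4",) in book, ("d4",) in book)  # → True False
```

With enough self-play games per engine, the surviving lines form that engine's personal book before the competition starts.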
User avatar
Guenther
Posts: 4605
Joined: Wed Oct 01, 2008 6:33 am
Location: Regensburg, Germany
Full name: Guenther Simon

Re: Where are the funs of Leela?

Post by Guenther »

Uri Blass wrote: Thu Jun 13, 2019 6:34 am
...
LCZ has its own built-in book based only on its games against itself.

...
It is not a book.
https://rwbc-chess.de

trollwatch:
Chessqueen + chessica + AlexChess + Eduard + Sylwy
Uri Blass
Posts: 10269
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Where are the funs of Leela?

Post by Uri Blass »

Guenther wrote: Thu Jun 13, 2019 8:10 am
Uri Blass wrote: Thu Jun 13, 2019 6:34 am
...
LCZ has its own built-in book based only on its games against itself.

...
It is not a book.
It is not a book in the sense of moves that LCZ plays immediately, but it is a fact that LCZ plays the opening better based on a lot of experience from its games against itself.

I believe it is possible to have a tool that helps other engines play better in the opening, based on games the engine plays against itself from the opening position.

A possible idea is simply to have the engine play millions of games against itself and construct a small bias for lines that the engine won, so there is a bigger probability of repeating them, and a bias against lines that the engine lost, so it repeats them with smaller probability.

Only minimal changes in the engine would be needed to allow the bias (for example, the engine might subtract 0.01 from the score of lines that start with 1.d4 if it lost a game that started with 1.d4).
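A minimal sketch of that bias mechanism, with all names illustrative and scores kept in centipawns (0.01 pawn = 1 centipawn) so the arithmetic stays exact:

```python
from collections import defaultdict

BIAS_STEP = 1  # 1 centipawn (0.01 pawn) per decisive game, as in the example

class OpeningBias:
    """Running score adjustment per opening move, learned from game results."""
    def __init__(self):
        self.bias = defaultdict(float)  # first move -> centipawn adjustment

    def update(self, first_move, result):
        # result: 1.0 = win, 0.5 = draw, 0.0 = loss for the engine
        if result == 1.0:
            self.bias[first_move] += BIAS_STEP
        elif result == 0.0:
            self.bias[first_move] -= BIAS_STEP

    def adjusted_score(self, first_move, engine_score):
        # the engine's own search score, nudged by past experience
        return engine_score + self.bias[first_move]

b = OpeningBias()
b.update("d4", 0.0)                # lost a game that started 1.d4
print(b.adjusted_score("d4", 30))  # 1.d4 now scores 29.0 instead of 30
print(b.adjusted_score("e4", 30))  # 1.e4 is unaffected: 30.0
```

Over millions of self-play games the accumulated adjustments would steer the root-move choice toward lines with a good track record, without touching the search itself.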
corres
Posts: 3657
Joined: Wed Nov 18, 2015 11:41 am
Location: hungary

Re: Where are the funs of Leela?

Post by corres »

Uri Blass wrote: Thu Jun 13, 2019 9:58 am I believe it is possible to have a tool that helps other engines play better in the opening, based on games the engine plays against itself from the opening position.
...
If you use the ChessBase GUI with a Fritz Power Book 20xx, or any Fritz engine with its own book, you already have such a system. The same holds for the systems of Hiarcs, Shredder, and Junior(!).
Making such a system for Stockfish or Komodo, even with the ChessBase GUI, runs into a serious issue:
the development of these engines is continuous, so their opening books would have to be developed together with the engines. But who is willing to spend the time on this?
corres
Posts: 3657
Joined: Wed Nov 18, 2015 11:41 am
Location: hungary

Re: Where are the funs of Leela?

Post by corres »

Dann Corbit wrote: Thu Jun 13, 2019 12:56 am All this does is introduce more uncertainty about which engine is the strongest.
...
I fully agree with you.
I have stated such ideas for a long time: by choosing the opening positions, the management of a computer chess tournament can influence the result of that tournament.
NN-based engines are a special case because they inherently have a kind of opening and middlegame book.
Understanding this inherent book is a very hard task, because even the developers cannot see into an NN.
So finding a book that is good for both AB engines and NN engines may be impossible.
Naturally, fans are interested only in the winning of their favorite engine.
Only when the favorite cannot win do they blame the opening book that was used, even though when the favorite won, that too was thanks to the opening book.