
Re: One idea to solve chess?

Posted: Wed May 22, 2019 8:24 pm
by Karlo Bala
mario carbonell wrote: Wed Dec 13, 2017 9:38 pm
CheckersGuy wrote: Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
Has anyone tried to compress the data of a 3-men tablebase into a NN and get the same result? That would be an interesting experiment: first, whether it is possible, and second, the reduction in size.
In fact, this is not just chess-related, and you don't need endgame tablebases to run the experiment. You can try to train the network to remember a sequence of random numbers (basically file compression). If it were possible, (I suppose) we would have much better file compression programs, but we don't. ANNs are good at approximating but not at giving exact results.

If it matters, I tried (file compression) a long time ago without success.
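Karlo Bala's point can be made precise with a counting argument (a sketch of my own, not code from this thread): for incompressible input, information theory fixes a lower bound that no network, however trained, can beat.

```python
import math

# Counting argument: any model that reproduces n_symbols independent,
# uniformly random symbols with 100% accuracy must distinguish
# alphabet**n_symbols inputs, so its description needs at least
# n_symbols * log2(alphabet) bits -- regardless of architecture.
def min_bits_to_memorize(n_symbols: int, alphabet: int) -> float:
    """Lower bound in bits on any lossless representation of
    n_symbols i.i.d. uniform symbols from the given alphabet."""
    return n_symbols * math.log2(alphabet)

# Random bytes: no network can store them in fewer than 8 bits/byte.
print(min_bits_to_memorize(1000, 256))  # 8000.0

# A WDL tablebase maps positions to one of 3 outcomes; even if the
# outcomes were uniform random, that is only ~1.58 bits per position.
# Real tables are highly correlated, which is exactly the redundancy
# both ordinary compressors and a hypothetical NN encoder exploit.
print(min_bits_to_memorize(1000, 3))
```

So the NN-as-compressor experiment can only win where the data has structure; on random numbers it is provably doomed, which matches the failed attempt above.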

Re: One idea to solve chess?

Posted: Sat May 25, 2019 5:11 pm
by syzygy
chrisw wrote: Fri May 17, 2019 11:26 am
syzygy wrote: Wed Dec 13, 2017 10:26 pm
mario carbonell wrote:
CheckersGuy wrote: Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
Has anyone tried to compress the data of a 3-men tablebase into a NN and get the same result? That would be an interesting experiment: first, whether it is possible, and second, the reduction in size.
A neural network is not a piece of magic.
there probably is a magic neural network for 8-, 9-, 10-men and so on EGTBs, with the weights set to just the right magic values. There might even be an infinite number of such networks. The problem actually is that (most likely) none can be found by the error back-propagation algorithm.
By saying "infinite number" you necessarily allow the size of the neural network to be unlimited.

It is certainly possible, at least in principle, to re-encode 10-men TBs into a humongous neural network that is many many times the size of the compressed 10-men TBs. But I leave it to you to call that a "magic" neural network.

Re: One idea to solve chess?

Posted: Sat May 25, 2019 5:15 pm
by syzygy
chrisw wrote: Fri May 17, 2019 10:05 pm
Uri Blass wrote: Fri May 17, 2019 7:37 pm I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00, people have no idea whether they found a forced draw or simply found a line that they evaluate as equal.
That’s easily dealt with. Use bit zero to distinguish between a score from a real draw and mere evaluation equality. But, while the evaluation at the leaf end of the PV is presented by the search as the root position’s evaluation, it really isn’t. So not very useful anyway.
That will never work. What you would need to do is two separate searches:
- first set the draw value to "WHITE WINS" and prove that white can win under that scoring (i.e. white has at least a draw in the real game);
- then set the draw value to "BLACK WINS" and prove that black can win under that scoring (i.e. black has at least a draw in the real game).

If both sides have at least a draw, the position is a forced draw.
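The two-search procedure can be illustrated on a toy game tree (a minimal Python sketch of my own, not engine code): run the same minimax twice, once with draws remapped to a white win and once to a black win.

```python
# Leaves are 'W' (white wins), 'B' (black wins), 'D' (draw);
# internal nodes are lists of children. White maximizes, black minimizes.
def solve(node, white_to_move, draw_score):
    """Minimax value with the draw outcome remapped to draw_score."""
    if isinstance(node, str):  # leaf node
        return {'W': 1, 'B': -1, 'D': draw_score}[node]
    vals = [solve(c, not white_to_move, draw_score) for c in node]
    return max(vals) if white_to_move else min(vals)

# Tiny tree, white to move at the root: the first root move lets black
# win, the second leads straight to a draw -- so best play is a draw.
tree = [['B', 'D'], 'D']

# Search 1: draws count as white wins -> white "wins" iff white has at least a draw.
white_at_least_draws = solve(tree, True, draw_score=1) == 1
# Search 2: draws count as black wins -> black "wins" iff black has at least a draw.
black_at_least_draws = solve(tree, True, draw_score=-1) == -1

print(white_at_least_draws and black_at_least_draws)  # True -> forced draw
```

Note that each search only has two outcomes, so it proves a bound rather than reporting an ambiguous 0.00.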

Re: One idea to solve chess?

Posted: Sat May 25, 2019 7:59 pm
by chrisw
syzygy wrote: Sat May 25, 2019 5:15 pm
chrisw wrote: Fri May 17, 2019 10:05 pm
Uri Blass wrote: Fri May 17, 2019 7:37 pm I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00, people have no idea whether they found a forced draw or simply found a line that they evaluate as equal.
That’s easily dealt with. Use bit zero to distinguish between a score from a real draw and mere evaluation equality. But, while the evaluation at the leaf end of the PV is presented by the search as the root position’s evaluation, it really isn’t. So not very useful anyway.
That will never work. What you would need to do is two separate searches:
- first set the draw value to "WHITE WINS" and prove that white can win under that scoring (i.e. white has at least a draw in the real game);
- then set the draw value to "BLACK WINS" and prove that black can win under that scoring (i.e. black has at least a draw in the real game).

If both sides have at least a draw, the position is a forced draw.
I wasn't trying to make the thing your answer suggests work. I was answering Uri's remark: "when they say 0.00 people have no idea if they found a forced draw or simply found a line that they evaluate as equal."

If you use bit zero of the eval to differentiate between an "actual draw" and the eval just happening to be 0.0, then the returned PV score is either an "actual draw" or a mere 0.0 evaluation.
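For what it's worth, the bit-zero scheme chrisw describes could look like this (my own sketch; the shift-and-flag layout is an assumption, not any engine's actual score format):

```python
# Store the eval in the bits above bit 0 and reserve bit 0 as a
# "proven draw" flag, so 0.00-by-evaluation and 0.00-by-proof stay
# distinguishable as scores propagate up the search.
PROVEN_DRAW = 1  # bit zero

def encode(cp_score: int, proven_draw: bool = False) -> int:
    """Pack a centipawn score and a proven-draw flag into one int."""
    return (cp_score << 1) | (PROVEN_DRAW if proven_draw else 0)

def decode(v: int):
    """Recover (centipawn score, proven_draw flag)."""
    return v >> 1, bool(v & PROVEN_DRAW)

heuristic_zero = encode(0)                    # eval just happens to be 0.00
real_draw      = encode(0, proven_draw=True)  # e.g. repetition or tablebase draw

print(decode(heuristic_zero))  # (0, False)
print(decode(real_draw))       # (0, True)
```

Python's arithmetic right shift keeps this correct for negative scores too; a C implementation would want a signed-shift-safe variant.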

Re: One idea to solve chess?

Posted: Sat May 25, 2019 8:21 pm
by chrisw
syzygy wrote: Sat May 25, 2019 5:11 pm
chrisw wrote: Fri May 17, 2019 11:26 am
syzygy wrote: Wed Dec 13, 2017 10:26 pm
mario carbonell wrote:
CheckersGuy wrote: Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
Has anyone tried to compress the data of a 3-men tablebase into a NN and get the same result? That would be an interesting experiment: first, whether it is possible, and second, the reduction in size.
A neural network is not a piece of magic.
there probably is a magic neural network for 8-, 9-, 10-men and so on EGTBs, with the weights set to just the right magic values. There might even be an infinite number of such networks. The problem actually is that (most likely) none can be found by the error back-propagation algorithm.
By saying "infinite number" you necessarily allow the size of the neural network to be unlimited.
Sure, substitute a "very large" number of such networks then.

It is certainly possible, at least in principle, to re-encode 10-men TBs into a humongous neural network that is many many times the size of the compressed 10-men TBs. But I leave it to you to call that a "magic" neural network.
Magic just means full of magic numbers that just happen to work, but change one by 0.000001 and nothing works; i.e. you can't loss-function your way to finding them.

Re: One idea to solve chess?

Posted: Sat May 25, 2019 10:40 pm
by syzygy
chrisw wrote: Sat May 25, 2019 8:21 pm
syzygy wrote: Sat May 25, 2019 5:11 pm
chrisw wrote: Fri May 17, 2019 11:26 am
syzygy wrote: Wed Dec 13, 2017 10:26 pm A neural network is not a piece of magic.
there probably is a magic neural network for 8-, 9-, 10-men and so on EGTBs, with the weights set to just the right magic values. There might even be an infinite number of such networks. The problem actually is that (most likely) none can be found by the error back-propagation algorithm.
By saying "infinite number" you necessarily allow the size of the neural network to be unlimited.
Sure, substitute a "very large" number of such networks then.

It is certainly possible, at least in principle, to re-encode 10-men TBs into a humongous neural network that is many many times the size of the compressed 10-men TBs. But I leave it to you to call that a "magic" neural network.
Magic just means full of magic numbers that just happen to work, but change one by 0.000001 and nothing works; i.e. you can't loss-function your way to finding them.
I started by saying that a neural network is not a piece of magic. If you want to redefine the meaning of "magic" then fine, but then you are not responding to what I wrote.

Re: One idea to solve chess?

Posted: Sat May 25, 2019 11:14 pm
by chrisw
syzygy wrote: Sat May 25, 2019 10:40 pm
chrisw wrote: Sat May 25, 2019 8:21 pm
syzygy wrote: Sat May 25, 2019 5:11 pm
chrisw wrote: Fri May 17, 2019 11:26 am
syzygy wrote: Wed Dec 13, 2017 10:26 pm A neural network is not a piece of magic.
there probably is a magic neural network for 8-, 9-, 10-men and so on EGTBs, with the weights set to just the right magic values. There might even be an infinite number of such networks. The problem actually is that (most likely) none can be found by the error back-propagation algorithm.
By saying "infinite number" you necessarily allow the size of the neural network to be unlimited.
Sure, substitute a "very large" number of such networks then.

It is certainly possible, at least in principle, to re-encode 10-men TBs into a humongous neural network that is many many times the size of the compressed 10-men TBs. But I leave it to you to call that a "magic" neural network.
Magic just means full of magic numbers that just happen to work, but change one by 0.000001 and nothing works; i.e. you can't loss-function your way to finding them.
I started by saying that a neural network is not a piece of magic. If you want to redefine the meaning of "magic" then fine, but then you are not responding to what I wrote.
Well, that’s a straw man, and a bizarre one, because last time I looked they call magic-number move generators "magic bitboards" over on the chess programming wiki. What exactly would you have against calling a neural network a magic neural network, if it just happened to work based on a set of magic numbers which serendipitously cooperated but couldn’t be found by conventional backpropagation methods?

Re: One idea to solve chess?

Posted: Sun May 26, 2019 11:18 pm
by syzygy
chrisw wrote: Sat May 25, 2019 11:14 pm Well, that’s a straw man, and a bizarre one, because last time I looked they call magic-number move generators "magic bitboards" over on the chess programming wiki.
OK, so you have understood my statement that "a neural network is not a piece of magic" differently from what I was trying to say with it.

Let me therefore be clear: I was using the term "magic" in its normal sense of "having or apparently having supernatural powers". But if you don't want to accept that this is what I was saying all along, then I will now gobble up everything I have said before in this thread and once again confirm that I am talking about "piece of magic" in the sense of "something supernatural". And you will just have to allow me to choose my own words.

The thing that the OP was looking for simply does not exist.
mario carbonell wrote: Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7- or 6- or 5-men endgame tablebases.

That could be an interesting experiment in itself, can we build a neural network that could act as a compression mechanism for some chess tablebase and give exactly the same result?

If the answer is yes, we are no longer restricted by the size of the tablebases and can obtain the same perfect result.

If we could add more men to the tablebases and compress the perfect data into a neural network, in theory we could do this until we arrive at the final 32-men tablebase.
See, he was asking for a neural network with 100% accurate prediction, yet of a manageable size.

What he was asking for is a piece of magic in the sense of something supernatural, which therefore is impossible.

Re: One idea to solve chess?

Posted: Mon May 27, 2019 12:37 am
by chrisw
syzygy wrote: Sun May 26, 2019 11:18 pm
chrisw wrote: Sat May 25, 2019 11:14 pm Well, that’s a straw man, and a bizarre one, because last time I looked they call magic-number move generators "magic bitboards" over on the chess programming wiki.
OK, so you have understood my statement that "a neural network is not a piece of magic" differently from what I was trying to say with it.

Let me therefore be clear: I was using the term "magic" in its normal sense of "having or apparently having supernatural powers". But if you don't want to accept that this is what I was saying all along, then I will now gobble up everything I have said before in this thread and once again confirm that I am talking about "piece of magic" in the sense of "something supernatural". And you will just have to allow me to choose my own words.

The thing that the OP was looking for simply does not exist.
mario carbonell wrote: Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7- or 6- or 5-men endgame tablebases.

That could be an interesting experiment in itself, can we build a neural network that could act as a compression mechanism for some chess tablebase and give exactly the same result?

If the answer is yes, we are no longer restricted by the size of the tablebases and can obtain the same perfect result.

If we could add more men to the tablebases and compress the perfect data into a neural network, in theory we could do this until we arrive at the final 32-men tablebase.
See, he was asking for a neural network with 100% accurate prediction, yet of a manageable size.

What he was asking for is a piece of magic in the sense of something supernatural, which therefore is impossible.
Not impossible, impractical.
You can’t deny that, looked at as a problem where we have a finite number of input bits and a finite number of output bits, there must be a box of tricks of transforms and bit wiggles which turns one into the other. You pick the box up, shake it, and test whether you just performed the magic shake. Shake-testing may not be competitive with the other method, namely solving the problem by exhaustive search, but we do know a program exists to solve the problem, given enough time; therefore a box must exist to solve the problem too, given enough shakes.
So what I’ve been discussing, really, is how big this box must be to stand a chance of containing within itself a magical combination that does the necessary trick. I would guess from previous answers you’re going to claim "bigger than an EGTB", but I wonder if that is necessarily the case.

Re: One idea to solve chess?

Posted: Mon May 27, 2019 1:41 am
by Ovyron
chrisw wrote: Fri May 17, 2019 10:05 pm
Uri Blass wrote: Fri May 17, 2019 7:37 pmI think that if you want to solve chess then you should try to solve.
chess engines of today do not try to solve the game and when they say 0.00 people have no idea if they found a forced draw or simply found a line that they evaluate as equal.
That’s easily dealt with. Use bit zero to distinguish between score by real draw and evaluation equality.
What is a "real draw"? If the engine thinks there's a certain line that is forced draw, and all deviations from either side lose them the game, but it's wrong, no such thing exists. All draws come from "evaluation equality" except those that are inside a Tablebase, which are irrelevant as they're already solved.

But I've seen Stockfish give a 0.00 score to a position up to depth 59, only to realize white is winning at depth 60; there's no way to know whether a position is drawn unless you examine all possibilities.