One idea to solve chess?

Discussion of chess software programming and technical issues.

Moderators: hgm, Harvey Williamson, bob

syzygy
Posts: 4442
Joined: Tue Feb 28, 2012 10:56 pm

Re: One idea to solve chess?

Post by syzygy » Wed Dec 13, 2017 10:42 pm

AlvaroBegue wrote:
syzygy wrote:For best TB compression, just throw away as much information as you are willing to recompute during probing, then compress what is left with a suitable compressor. There is no way that an NN could "learn" that data more efficiently. An NN does not involve magic.
There are ways in which an NN can help in compressing EGTBs, without involving magic. For instance, train the NN to predict WDL, then store in a table the list of positions that the NN misclassifies. If the NN does a decent job, this could result in a much more compact description of the EGTB.
Yes, but I don't really believe in it, not even for WDL. I think by far the best predictor is the material balance combined with a low-depth search to weed out immediate tactics (*). The material-balance factor will be taken care of by the compression algorithm (if white wins almost all positions, then all those very long strings of white wins will compress to almost nothing). Immediate tactics can be recomputed cheaply during probing (compared to running an NN).

But even if an NN helped (e.g. to find some structure in the data after removing the immediate tactics), you would need to run it on every TB position, which is simply too expensive (and that ignores the time needed to train the NN). (The low-depth searches are much cheaper and don't even have to be run before compression if the TB generator somehow marks the positions affected by low-depth tactics during generation.)

Feel free to prove me wrong on 5-piece tables. :-)

(*) There are some other useful predictors like pawn advancement and bishop color. They can be dealt with by picking a good order of the pieces when calculating the index or dividing the table in separately compressible parts.
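AlvaroBegue's "predictor plus exception list" scheme, and syzygy's point that long runs of wins compress to almost nothing, can both be made concrete with a toy sketch. Everything here is hypothetical illustration: a real NN predictor would replace the trivial one, and the names are made up.

```python
# Toy sketch of "predictor + exception table" compression: any predictor
# (an NN or something far cheaper) plus the list of positions it
# misclassifies reconstructs the table losslessly.
def compress(table, predict):
    # Store only the indices where the predictor is wrong, with the true value.
    return {i: v for i, v in enumerate(table) if predict(i) != v}

def probe(i, exceptions, predict):
    # Exact lookup: stored exceptions override the predictor.
    return exceptions.get(i, predict(i))

# Trivial predictor "white always wins" over a mostly-won WDL-like table;
# the two thousand 'W' entries cost nothing to store.
table = ['W'] * 1000 + ['D', 'L'] + ['W'] * 1000
predict = lambda i: 'W'
exc = compress(table, predict)

assert len(exc) == 2                     # only the two non-wins are stored
assert all(probe(i, exc, predict) == v for i, v in enumerate(table))
```

The catch, as syzygy argues above, is probing cost: the predictor must be run for every lookup, so a cheap rule can beat a more accurate but expensive NN.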

Michael Sherwin
Posts: 3024
Joined: Fri May 26, 2006 1:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: One idea to solve chess?

Post by Michael Sherwin » Thu Dec 14, 2017 1:57 pm

mario carbonell wrote:Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years and now work in the internet sector.

The recent games between DeepMind's AlphaZero and Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict all 7-, 6-, or 5-man endgame tablebases with 100% accuracy.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can still obtain the same perfect results.

If we could add more men to the tablebases and compress the perfect data into a neural network, in theory we could keep doing this until we arrive at the final 32-man tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
The perfect solution for solving chess is of course storing the entire chess tree. But that is not only impractical, it is impossible. However, anything short of that is not really solving chess, as there is no way to prove the accuracy of the solution. So any attempt to solve chess must by its nature be defined as something less than a true solution. And how is a consensus on that definition going to be reached? Impossible!

So we are talking about a partial solution, if we are really talking about a solution at all. Memory, while plentiful in this modern era, is not infinite. Neural networks are memory intensive, and given a specified amount of memory they will never be able to use it as efficiently as traditional algorithmic methods.

So any solution based on an NN will have to be considered merely good enough for whatever goal meets the definition agreed upon by whoever subscribes to that definition.

Storing the tree, even if it is not complete, is the best solution, whether an NN or more traditional methods are used. Using an NN to decide which moves to keep in the tree, and discarding the moves that can be perfectly refuted by the chess-playing algorithm, may be a superior way to go.
I hate if statements. Pawns demand if statements. Therefore I hate pawns.

Sergei S. Markoff
Posts: 224
Joined: Mon Sep 12, 2011 9:27 pm
Location: Moscow, Russia
Contact:

Re: One idea to solve chess?

Post by Sergei S. Markoff » Sun May 12, 2019 2:06 pm

mario carbonell wrote:
Wed Dec 13, 2017 7:12 pm
Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years and now work in the internet sector.
It looks like I came up with the same idea :)

See viewtopic.php?f=7&t=70723

Everything depends on the accuracy of the network; at least I think it's worth trying. We will not solve chess, of course, but I hope this approach can drastically improve the near-endgame play of NN-based engines.
The Force Be With You!

DustyMonkey
Posts: 43
Joined: Wed Feb 19, 2014 9:11 pm

Re: One idea to solve chess?

Post by DustyMonkey » Mon May 13, 2019 7:49 am

EGTB's are desirable because they are FAST. The classic Time vs Space tradeoff.

NNs are not even close to optimal models of any domain. Not even close; this cannot be stressed enough. You would be fighting information theory here. A generalized model will never be superior in time or space to a specific model.

Now that that is on the table, "to get extra TB compression" isn't one of the reasons to consider training an NN on endgames.

To get a feel for why, consider and construct one of the simplest NNs that solves a "hard" (non-linear) problem: the lowly XOR operation, which in lookup-table form is exactly 4 bits.

The structure (not the weights' specific values, or which activation function you chose) of the XOR NN you just constructed is a general solver that can handle any 2-input, 1-output problem, and that is its problem: it can solve any of them. This ability to do it all must manifest itself as a space cost. Claude Shannon does not buy free lunches.
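The XOR argument can be made concrete with a minimal hand-built 2-2-1 network (a sketch only: the weights are chosen by hand rather than learned, and the parameter count is purely illustrative).

```python
# A minimal 2-2-1 network computing XOR. The lookup table is 4 bits, yet even
# this tiny network carries 9 parameters (6 weights + 3 biases): the price of
# a structure general enough to represent any 2-input, 1-output function.
def step(x):
    # Heaviside step activation
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)        # fires for OR(a, b)
    h2 = step(a + b - 1.5)        # fires for AND(a, b)
    return step(h1 - h2 - 0.5)    # OR and not AND = XOR

assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
# At 32 bits per parameter that is 288 bits versus the 4-bit truth table.
```

The same structure with different weights computes AND, OR, NAND, and so on; the generality is exactly what the extra storage pays for.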

Sergei S. Markoff
Posts: 224
Joined: Mon Sep 12, 2011 9:27 pm
Location: Moscow, Russia
Contact:

Re: One idea to solve chess?

Post by Sergei S. Markoff » Mon May 13, 2019 5:30 pm

DustyMonkey wrote:
Mon May 13, 2019 7:49 am
EGTB's are desirable because they are FAST. The classic Time vs Space tradeoff.

NNs are not even close to optimal models of any domain. Not even close; this cannot be stressed enough. You would be fighting information theory here. A generalized model will never be superior in time or space to a specific model.

Now that that is on the table, "to get extra TB compression" isn't one of the reasons to consider training an NN on endgames.

To get a feel for why, consider and construct one of the simplest NNs that solves a "hard" (non-linear) problem: the lowly XOR operation, which in lookup-table form is exactly 4 bits.

The structure (not the weights' specific values, or which activation function you chose) of the XOR NN you just constructed is a general solver that can handle any 2-input, 1-output problem, and that is its problem: it can solve any of them. This ability to do it all must manifest itself as a space cost. Claude Shannon does not buy free lunches.
I'm not trying to get a free lunch or fight information theory in any other way. This is only about compression, sometimes lossy. It will not solve an EXPTIME task in polynomial time, but it could provide a better time/precision tradeoff than current approaches.
The Force Be With You!

Leo
Posts: 790
Joined: Fri Sep 16, 2016 4:55 pm
Location: USA/Minnesota
Full name: Leo

Re: One idea to solve chess?

Post by Leo » Fri May 17, 2019 3:57 am

"Unfortunately the size of 8-man tablebases will be 100 times larger than the size of 7-man tablebases. To fully compute them, one will need about 10 PB (10,000 TB) of disk space and 50 TB of RAM. Only the top 10 supercomputers can solve the 8-man problem in 2014. The first 1000-move mate is unlikely to be found until 2020 when a part of a TOP100 supercomputer may be allowed to be used for solving this task."
BrainFish-2 190310 bmi2 Elo 3576.

chrisw
Posts: 1745
Joined: Tue Apr 03, 2012 2:28 pm

Re: One idea to solve chess?

Post by chrisw » Fri May 17, 2019 9:26 am

syzygy wrote:
Wed Dec 13, 2017 9:26 pm
mario carbonell wrote:
CheckersGuy wrote:Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position
Has anyone tried to compress the data of a 3-man tablebase into an NN and get the same result? That would be an interesting experiment: first, whether it is possible at all, and second, the reduction in size.
A neural network is not a piece of magic.
There probably is a magic neural network for the 8-, 9-, 10-man and larger EGTBs, with the weights set to just the right magic values. There might even be an infinite number of such networks. The problem is that (most likely) none of them can be found by the error back-propagation algorithm.
You can make it learn any position, but you cannot stop it from forgetting the positions it learned before.

With 3 pieces you can get away with general rules if you don't care about exact distances, so that may work fine. With a couple more pieces there is simply no compact set of rules that could describe the outcome of all positions, let alone rules that you could expect an NN to learn.

For best TB compression, just throw away as much information as you are willing to recompute during probing, then compress what is left with a suitable compressor. There is no way that an NN could "learn" that data more efficiently. An NN does not involve magic.

Uri Blass
Posts: 8509
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: One idea to solve chess?

Post by Uri Blass » Fri May 17, 2019 5:37 pm

mario carbonell wrote:
Wed Dec 13, 2017 7:12 pm
Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years and now work in the internet sector.

The recent games between DeepMind's AlphaZero and Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict all 7-, 6-, or 5-man endgame tablebases with 100% accuracy.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can still obtain the same perfect results.

If we could add more men to the tablebases and compress the perfect data into a neural network, in theory we could keep doing this until we arrive at the final 32-man tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00, people have no idea whether they found a forced draw or simply found a line that they evaluate as equal.

Solving means giving me at least one of the following evaluations:
1) The side to move has a forced mate.
2) The side to move has a forced draw by a non-empty set of moves S.
3) The opponent has a forced mate.
4) The opponent has a forced draw.
I wrote "at least" because both 2 and 4 may be correct.

If both 2 and 4 are correct, the engine should choose a move inside S immediately and save time for the rest of the game.

Note that I would like to see engines report during the game that they have solved the game and it is a draw because both 2 and 4 are correct (the game may continue, because the opponent can blunder, so saying it is a draw does not finish the game).
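The win/draw/loss classification described here can be illustrated on a toy take-away game rather than chess (the "pile of 4 is a draw" rule is artificial, added only so that all three outcomes occur):

```python
# Exhaustively classify every position of a toy game as win/draw/loss for the
# side to move. Rules: players alternately take 1 or 2 tokens from a pile;
# taking the last token wins; a pile of exactly 4 is declared drawn (an
# artificial rule so that all three outcomes appear).
from functools import lru_cache

@lru_cache(maxsize=None)
def solve(pile):
    if pile == 0:
        return 'loss'   # the previous player took the last token
    if pile == 4:
        return 'draw'
    results = [solve(pile - t) for t in (1, 2) if pile - t >= 0]
    if 'loss' in results:   # some move leaves the opponent lost: forced win
        return 'win'
    if 'draw' in results:   # no win, but a draw can be forced
        return 'draw'
    return 'loss'           # every move leaves the opponent winning
```

An engine built this way can truthfully announce "forced draw" or "forced mate" the way the post asks for, which a heuristic 0.00 never can.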

chrisw
Posts: 1745
Joined: Tue Apr 03, 2012 2:28 pm

Re: One idea to solve chess?

Post by chrisw » Fri May 17, 2019 8:05 pm

Uri Blass wrote:
Fri May 17, 2019 5:37 pm
mario carbonell wrote:
Wed Dec 13, 2017 7:12 pm
Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years and now work in the internet sector.

The recent games between DeepMind's AlphaZero and Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict all 7-, 6-, or 5-man endgame tablebases with 100% accuracy.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can still obtain the same perfect results.

If we could add more men to the tablebases and compress the perfect data into a neural network, in theory we could keep doing this until we arrive at the final 32-man tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00, people have no idea whether they found a forced draw or simply found a line that they evaluate as equal.
That's easily dealt with: use bit zero to distinguish a score from a proven draw from mere evaluation equality. But while the search reports the evaluation at the leaf end of the PV as the root position evaluation, it isn't really that, so it's not very useful anyway.
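The bit-zero trick could be sketched as follows (a hypothetical encoding made up for illustration, not the scheme of any actual engine):

```python
# Reserve the low bit of the internal score to mark "proven draw" versus
# "merely evaluated as equal"; the real score lives in the remaining bits.
PROVEN_DRAW = 1  # score 0 with the flag bit set

def make_score(centipawns, proven=False):
    s = centipawns << 1              # shift the real score up one bit
    return s | PROVEN_DRAW if proven else s

def is_proven_draw(score):
    return score == PROVEN_DRAW      # 0.00 and flagged as forced

assert is_proven_draw(make_score(0, proven=True))
assert not is_proven_draw(make_score(0))   # heuristic 0.00, flag clear
assert make_score(25) >> 1 == 25           # ordinary scores round-trip
```
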

Solving means giving me at least one of the following evaluations:
1) The side to move has a forced mate.
2) The side to move has a forced draw by a non-empty set of moves S.
3) The opponent has a forced mate.
4) The opponent has a forced draw.
I wrote "at least" because both 2 and 4 may be correct.

If both 2 and 4 are correct, the engine should choose a move inside S immediately and save time for the rest of the game.

Note that I would like to see engines report during the game that they have solved the game and it is a draw because both 2 and 4 are correct (the game may continue, because the opponent can blunder, so saying it is a draw does not finish the game).
If one side can force a win, and knows it, then it's a win, whatever the opponent decides.
If both sides can force a draw, and know it, then whether the game actually ends in a draw depends on the players' intentions and mutual agreement. The engines are not in cooperative communication with each other, so 2 and 4 together do not automatically equal a draw.

Uri Blass
Posts: 8509
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: One idea to solve chess?

Post by Uri Blass » Sat May 18, 2019 12:37 am

chrisw wrote:
Fri May 17, 2019 8:05 pm
Uri Blass wrote:
Fri May 17, 2019 5:37 pm
mario carbonell wrote:
Wed Dec 13, 2017 7:12 pm
Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years and now work in the internet sector.

The recent games between DeepMind's AlphaZero and Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict all 7-, 6-, or 5-man endgame tablebases with 100% accuracy.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can still obtain the same perfect results.

If we could add more men to the tablebases and compress the perfect data into a neural network, in theory we could keep doing this until we arrive at the final 32-man tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00, people have no idea whether they found a forced draw or simply found a line that they evaluate as equal.
That's easily dealt with: use bit zero to distinguish a score from a proven draw from mere evaluation equality. But while the search reports the evaluation at the leaf end of the PV as the root position evaluation, it isn't really that, so it's not very useful anyway.

Solving means giving me at least one of the following evaluations:
1) The side to move has a forced mate.
2) The side to move has a forced draw by a non-empty set of moves S.
3) The opponent has a forced mate.
4) The opponent has a forced draw.
I wrote "at least" because both 2 and 4 may be correct.

If both 2 and 4 are correct, the engine should choose a move inside S immediately and save time for the rest of the game.

Note that I would like to see engines report during the game that they have solved the game and it is a draw because both 2 and 4 are correct (the game may continue, because the opponent can blunder, so saying it is a draw does not finish the game).
If one side can force a win, and knows it, then it's a win, whatever the opponent decides.
If both sides can force a draw, and know it, then whether the game actually ends in a draw depends on the players' intentions and mutual agreement. The engines are not in cooperative communication with each other, so 2 and 4 together do not automatically equal a draw.
You are correct that even if both sides know that both sides can force a draw, the game may not end as a draw: a player may play a move that allows the opponent either to force a draw or to try to win, with the intention that the opponent will try to win and end up losing.

Despite this, I think that deciding to play a drawing move when both 2 and 4 are correct is the simplest solution.
The advantages are:
1) Saving time on the clock in case the opponent blunders later, so you may have enough time to find a win after the blunder.
2) Not taking risks by playing a move after which you do not know you can force a draw and which may even be losing.

The only possible disadvantage is that you may pass up a practically better move that sets a trap for the opponent, if your search in some way considers the probability of the opponent going wrong.

For most engines this disadvantage is not relevant.
