
### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Sat Oct 19, 2019 1:52 pm**

by **corres**

towforce wrote: ↑Sat Oct 19, 2019 8:03 am

corres wrote: ↑Sat Oct 19, 2019 7:16 am

The GPU (more exactly, the NN) evaluation contains the results of the many millions of games played during self-learning. By increasing the number of games played and the size of the NN, we can get engines that play better and better games.

It is a pity, but a bigger NN needs more powerful hardware.

What does "GPU" mean in this context?

For solving specific problems, a big NN can be worse than a small one. When a human has done a task enough times, they can do it quickly without conscious thought because fast NN pathways for doing that task get built. However, chimps have much smaller brains than us, but they can still learn simple video games, and when they do, they can easily beat humans because their reaction times are MASSIVELY faster than ours.

More tasks where monkeys outperform humans: the matching pennies game (link), and willingness to change tactics (link).

Going even more extreme, a housefly has complex behaviours in terms of flight control (including landing), walking on six legs, feeding, mating and living life before it can fly (among others), but has a brain of only around 100,000 neurons (the human brain has around 100,000,000,000 neurons). This shows that good behaviour for solving some complex problems can be encoded in a simpler algorithm than you'd expect.

Please, show me a chimp or a housefly that knows chess.

This would be the exact evidence for your minimalistic idea.

Btw, an NN is a "black box" that has a non-linear connection between its input and its output.

So it is not an ideal object for linear math.

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Sat Oct 19, 2019 2:32 pm**

by **towforce**

corres wrote: ↑Sat Oct 19, 2019 1:52 pm

towforce wrote: ↑Sat Oct 19, 2019 8:03 am

corres wrote: ↑Sat Oct 19, 2019 7:16 am

The GPU (more exactly, the NN) evaluation contains the results of the many millions of games played during self-learning. By increasing the number of games played and the size of the NN, we can get engines that play better and better games.

It is a pity, but a bigger NN needs more powerful hardware.

For solving specific problems, a big NN can be worse than a small one. When a human has done a task enough times, they can do it quickly without conscious thought because fast NN pathways for doing that task get built. However, chimps have much smaller brains than us, but they can still learn simple video games, and when they do, they can easily beat humans because their reaction times are MASSIVELY faster than ours.

More tasks where monkeys outperform humans: the matching pennies game (link), and willingness to change tactics (link).

Going even more extreme, a housefly has complex behaviours in terms of flight control (including landing), walking on six legs, feeding, mating and living life before it can fly (among others), but has a brain of only around 100,000 neurons (the human brain has around 100,000,000,000 neurons). This shows that good behaviour for solving some complex problems can be encoded in a simpler algorithm than you'd expect.

Please, show me a chimp or a housefly that knows chess.

Obviously there are no such examples. That is why I appealed to analogies from other areas.

This would be the exact evidence for your minimalistic idea.

My core idea is to use mathematical optimisation of weightings in linear expressions, which is analogous to, but different from, NNs. It looks as though most people won't believe it unless somebody builds it out.
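As a minimal sketch of that idea (the feature names, feature values, and target scores below are all invented purely for illustration), tuning the weights of a linear evaluation by gradient descent on squared error looks like this:

```python
# Illustrative sketch: fit the weights of a linear evaluation
#   eval = w1*material + w2*mobility + w3*king_safety
# by gradient descent on squared error. All numbers are made up.

positions = [(1, 1, 0), (0, 1, 1), (1, 0, 1), (1, -1, 1)]  # feature vectors
targets = [1.5, 0.25, 0.75, 0.25]                          # desired scores

w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for x, t in zip(positions, targets):
        err = sum(wi * xi for wi, xi in zip(w, x)) - t  # prediction error
        for i in range(3):
            grad[i] += 2 * err * x[i]                   # d(err^2)/dw_i
    for i in range(3):
        w[i] -= lr * grad[i] / len(positions)

print([round(wi, 2) for wi in w])  # → [1.0, 0.5, -0.25]
```

With many more features and positions the same loop (or a closed-form least-squares solve) applies unchanged; that scaling is the whole bet.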

Btw, an NN is a "black box" that has a non-linear connection between its input and its output. So it is not an ideal object for linear math.

It is possible to convert a trained NN into a mathematical expression (link). It will be a big expression for a deep NN.
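For a small net the conversion is mechanical; every weight here is invented just to show the shape of the result:

```python
import math

# A made-up 2-input, 2-hidden-unit, 1-output tanh network...
def nn(x1, x2):
    h1 = math.tanh(0.8 * x1 - 0.4 * x2 + 0.1)  # hidden unit 1
    h2 = math.tanh(0.3 * x1 + 0.9 * x2 - 0.2)  # hidden unit 2
    return 1.5 * h1 - 0.7 * h2 + 0.05          # linear output

# ...written out as one closed-form expression:
def expression(x1, x2):
    return (1.5 * math.tanh(0.8 * x1 - 0.4 * x2 + 0.1)
            - 0.7 * math.tanh(0.3 * x1 + 0.9 * x2 - 0.2) + 0.05)

print(nn(0.5, -1.0) == expression(0.5, -1.0))  # → True
```

For a deep net the expression nests one layer of non-linear functions per hidden layer, which is why it gets big quickly.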

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Sat Oct 19, 2019 4:36 pm**

by **Zenmastur**

towforce wrote: ↑Sat Oct 19, 2019 2:32 pm

corres wrote: ↑Sat Oct 19, 2019 1:52 pm

towforce wrote: ↑Sat Oct 19, 2019 8:03 am

corres wrote: ↑Sat Oct 19, 2019 7:16 am

The GPU (more exactly, the NN) evaluation contains the results of the many millions of games played during self-learning. By increasing the number of games played and the size of the NN, we can get engines that play better and better games.

It is a pity, but a bigger NN needs more powerful hardware.

For solving specific problems, a big NN can be worse than a small one. When a human has done a task enough times, they can do it quickly without conscious thought because fast NN pathways for doing that task get built. However, chimps have much smaller brains than us, but they can still learn simple video games, and when they do, they can easily beat humans because their reaction times are MASSIVELY faster than ours.

More tasks where monkeys outperform humans: the matching pennies game (link), and willingness to change tactics (link).

Going even more extreme, a housefly has complex behaviours in terms of flight control (including landing), walking on six legs, feeding, mating and living life before it can fly (among others), but has a brain of only around 100,000 neurons (the human brain has around 100,000,000,000 neurons). This shows that good behaviour for solving some complex problems can be encoded in a simpler algorithm than you'd expect.

Please, show me a chimp or a housefly that knows chess.

Obviously there are no such examples. That is why I appealed to analogies from other areas.

This would be the exact evidence for your minimalistic idea.

My core idea is to use mathematical optimisation of weightings in linear expressions, which is analogous to, but different from, NNs. It looks as though most people won't believe it unless somebody builds it out.

Btw, an NN is a "black box" that has a non-linear connection between its input and its output. So it is not an ideal object for linear math.

It is possible to convert a trained NN into a mathematical expression (link). It will be a big expression for a deep NN.

My guess would be that on smaller problems the expression(s) can be reduced to a manageable size without too much loss in accuracy. There are many methods used to analyze complex systems in which the math used is theoretically unsuitable for the type of analysis being performed. That doesn't stop the analysis from being useful. E.g., look at paraxial ray tracing used in lens design. It's based on a strictly false equation, namely sin(u) = u, and yet it has been used to design the most powerful optical systems man has ever built!
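To put a number on how "false" sin(u) = u really is, here is the relative error at a few angles:

```python
import math

# Relative error of the paraxial approximation sin(u) ≈ u,
# evaluated at a few field angles.
for deg in (1, 5, 10, 20):
    u = math.radians(deg)
    rel_err = (u - math.sin(u)) / math.sin(u)
    print(f"{deg:2d} deg: {rel_err:.4%}")
```

At 1° the error is about 0.005%; even at 20° it is only about 2%, which is why the approximation designs useful lenses despite being wrong in principle.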

Regards,

Zenmastur

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Sat Oct 19, 2019 5:38 pm**

by **corres**

The main issue is that chess is not a solved game. If we knew the exact solution of chess, we could make a mathematical model for chess and design an ideal chess engine based on this model.

Without knowing the solution, we can only use the experience gained from programming tricks, parameter modification, and tests of the modified engine to build a stronger engine.

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Sat Oct 19, 2019 6:14 pm**

by **towforce**

corres wrote: ↑Sat Oct 19, 2019 5:38 pm

The main issue is that chess is not a solved game.

It seems unlikely that chess will be solved by exhaustive game-tree search from the starting position, as other games have been, so it will have to be solved by a combination of mathematics and computing, as the four colour theorem was.

If we knew the exact solution of chess, we could make a mathematical model for chess and design an ideal chess engine based on this model.

There are two criteria that a perfect chess engine has to meet:

1. in a drawn position, avoid a move that results in a losing position

2. in a winning position, choose the move on the shortest path to the win
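The two criteria can be sketched against a hypothetical perfect oracle (here just a lookup table; `perfect_move`, the oracle format, and the toy positions are all invented for illustration):

```python
WIN, DRAW, LOSS = 1, 0, -1

def perfect_move(moves, oracle):
    """moves: list of (move, position_after_move) pairs.
    oracle[pos] = (result, distance_to_mate), with result given from
    the point of view of the side to move in pos (our opponent)."""
    scored = []
    for m, child in moves:
        result, dtm = oracle[child]
        scored.append((-result, dtm, m))   # flip to our point of view
    best = max(r for r, _, _ in scored)    # criterion 1: never concede
    candidates = sorted((d, m) for r, d, m in scored if r == best)
    return candidates[0][1]                # criterion 2: shortest win

# Toy example: moving to "b" or "c" leaves the opponent lost (we win);
# "c" mates sooner, so a perfect engine picks the move leading to it.
oracle = {"a": (WIN, 0), "b": (LOSS, 12), "c": (LOSS, 3)}
print(perfect_move([("Qh5", "a"), ("Nf3", "b"), ("e4", "c")], oracle))  # → e4
```

The first criterion is the `max` over results; the second is the sort by distance among the best results.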

Without knowing the solution, we can only use the experience gained from programming tricks, parameter modification, and tests of the modified engine to build a stronger engine.

This is the hill-climbing optimisation technique, which is unlikely to reach the optimum due to local maxima, ridges, alleys, and plateaus (link).
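A toy illustration of that failure mode, using a made-up one-dimensional score with two peaks:

```python
# A score with a local peak at x=1 (height 1) and the global
# peak at x=4 (height 2).
def f(x):
    return max(1 - (x - 1) ** 2, 2 - (x - 4) ** 2)

def hill_climb(x, step=0.1):
    # Move to the best neighbour until no neighbour improves.
    while True:
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x
        x = best

print(round(hill_climb(0.0), 1))  # → 1.0 (stuck on the local peak)
print(round(hill_climb(5.0), 1))  # → 4.0 (global peak, from a lucky start)
```

Random restarts mitigate this in one dimension, but in high-dimensional engine tuning the same trap reappears as the ridges and plateaus mentioned above.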

There are many rules about how to play good chess in certain types of position. There is every reason to suppose that more complex rules exist that can be applied in a wider variety of position types, and more complex rules still may exist that apply in most position types. If they do exist, then they can be found, and in my opinion mathematical optimisation techniques would be a better tool for finding them than methods like NNs or genetic algorithms. However, I am happy to be proven wrong about this.

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Sat Oct 19, 2019 6:36 pm**

by **corres**

towforce wrote: ↑Sat Oct 19, 2019 6:14 pm

corres wrote: ↑Sat Oct 19, 2019 5:38 pm

The main issue is that chess is not a solved game.

It seems unlikely that chess will be solved by exhaustive game-tree search from the starting position, as other games have been, so it will have to be solved by a combination of mathematics and computing, as the four colour theorem was.

If we knew the exact solution of chess, we could make a mathematical model for chess and design an ideal chess engine based on this model.

There are two criteria that a perfect chess engine has to meet:

1. in a drawn position, avoid a move that results in a losing position

2. in a winning position, choose the move on the shortest path to the win

Without knowing the solution, we can only use the experience gained from programming tricks, parameter modification, and tests of the modified engine to build a stronger engine.

This is the hill-climbing optimisation technique, which is unlikely to reach the optimum due to local maxima, ridges, alleys, and plateaus (link).

There are many rules about how to play good chess in certain types of position. There is every reason to suppose that more complex rules exist that can be applied in a wider variety of position types, and more complex rules still may exist that apply in most position types. If they do exist, then they can be found, and in my opinion mathematical optimisation techniques would be a better tool for finding them than methods like NNs or genetic algorithms. However, I am happy to be proven wrong about this.

There have been some attempts to make exact rules for evaluating chess positions.

If you think you can solve the problem, go ahead, and we will be your fans.

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Sun Oct 20, 2019 9:06 am**

by **towforce**

corres wrote: ↑Sat Oct 19, 2019 1:52 pm

Please, show me a chimp or a housefly that knows chess.

Actually, it's easy to show that a chimp has enough hardware to play chess if it were optimally trained. A chimp has 3x10^10 (30 billion) neurons. I don't know how many neurons each neuron connects to in a chimp, but given that we share 98% of our DNA with them, I'm going to assume it's the same number as in humans (10,000). So the number of NN weights a chimp would have would be something like 3x10^14 (it doesn't matter if this isn't exactly correct).

According to Srdja Matovic, AlphaZero's NN has 50 million weights (link). If we say that 50 million weights is enough to play good chess, and that a housefly has 100,000 neurons (most types of housefly will have more than that, but certainly fewer than a honey bee, which has nearly a million), then to get to 50 million weights, each neuron would need 50 million divided by 100 thousand, which equals 500 connections. It seems likely to me that, in a housefly, the average number of connections per neuron is greater than 500.
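The back-of-envelope arithmetic in the two paragraphs above, spelled out (the connections-per-neuron figure is the assumption borrowed from the human estimate):

```python
# Chimp: neurons times assumed connections per neuron.
chimp_neurons = 3e10
connections_per_neuron = 1e4  # assumed equal to the human figure
print(f"{chimp_neurons * connections_per_neuron:.0e}")  # → 3e+14

# Housefly: connections per neuron needed to match AlphaZero's size.
alphazero_weights = 5e7
housefly_neurons = 1e5
print(alphazero_weights / housefly_neurons)  # → 500.0
```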

I understand that in reality it would be practically impossible to train a housefly or a chimp to play chess - I'm just saying that a chimp has sufficient hardware, and a housefly might have enough hardware, for their brains to be trained to play good chess.

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Mon Oct 21, 2019 7:10 am**

by **Ovyron**

corres wrote: ↑Sat Oct 19, 2019 5:38 pm

The main issue is that chess is not a solved game. If we knew the exact solution of chess, we could make a mathematical model for chess and design an ideal chess engine based on this model.

But chess has been solved for 7-man positions, so the idea could be tested there (build a mathematical model that tells you the best move in any of those positions, without having to store them, and you're done).
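One way to score such a model, sketched with stand-in lookups (`model` and `truth` below are toy stubs; a real test would probe the Syzygy or Lomonosov tables):

```python
def agreement_rate(positions, model, tb_result):
    """Fraction of positions where the model's claimed game-theoretic
    value matches the tablebase's exact value."""
    hits = sum(model(p) == tb_result(p) for p in positions)
    return hits / len(positions)

# Toy stand-ins for a fitted model and the tablebase oracle:
truth = {"p1": "win", "p2": "draw", "p3": "loss", "p4": "win"}
model = {"p1": "win", "p2": "draw", "p3": "win", "p4": "win"}.get
print(agreement_rate(list(truth), model, truth.get))  # → 0.75
```

Anything short of 100% agreement over all 7-man positions means the model is not yet the "simple rule" being looked for.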

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Mon Oct 21, 2019 9:01 am**

by **towforce**

Ovyron wrote: ↑Mon Oct 21, 2019 7:10 am

corres wrote: ↑Sat Oct 19, 2019 5:38 pm

The main issue is that chess is not a solved game. If we knew the exact solution of chess, we could make a mathematical model for chess and design an ideal chess engine based on this model.

But chess has been solved for 7-man positions, so the idea could be tested there (build a mathematical model that tells you the best move in any of those positions, without having to store them, and you're done).

Very good idea!

### Re: There Might Exist Simple Rules For Accurate Position Evaluation

Posted: **Mon Oct 21, 2019 11:11 am**

by **Ovyron**

In that case this might be the most relevant position:

This is a mate in 549 and the winning moves are known. As you follow them, there are paths where engines without tablebases can't find the winning plan.

If the concept works, there is a mathematical model that can tell you the moves White needs to play to win (and the model needs to take the 50-move rule into account, lest it play a mate in 51 without captures, which is a draw). If you can't find a model for this position, then the opening position is hopeless.
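The 50-move caveat in code form (a simplified check; real adjudication also resets the counter on every capture and pawn move, and involves claim rules):

```python
# A mating line is only playable if no stretch of it runs to 100
# halfmoves (50 full moves) without a capture or pawn move.
def drawn_by_fifty_move_rule(halfmoves_without_progress):
    return halfmoves_without_progress >= 100

# A mate in 51 moves with no captures or pawn moves takes 101
# halfmoves (51 by White, 50 by Black); the draw arrives first.
print(drawn_by_fifty_move_rule(101))  # → True
print(drawn_by_fifty_move_rule(99))   # → False
```

This is why a model fitted to distance-to-mate alone can still lose half-points: it must optimise against the counter, not just against mate distance.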