Playing with "The Secret of Chess"
- Posts: 2
- Joined: Wed Apr 07, 2021 11:52 am
- Full name: Serge Gotsuliak
Re: Playing with "The Secret of Chess"
Is there any progress with Tsvetkov's ideas? I'm new to chess and thinking about starting to dig into the game with his system in mind, instead of a more traditional way of learning. Is it practical at all?
- Posts: 318
- Joined: Thu Mar 09, 2006 1:07 am
Re: Playing with "The Secret of Chess"
When I last wrote about this topic (in February, I think) I had started with my own version of Texel tuning, that is, tuning the evaluation function against a large set of EPD/FEN positions labelled with the game outcome (0, 1/2, 1). The tuning simply increments or decrements one parameter at a time, recalculates the overall evaluation prediction error over all positions, keeps the new parameter value if the error went down, and repeats. My parameter set consisted of a couple of scaling factors for the TSOC evaluation topics, one each for the midgame and the endgame.
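For readers who want to try this themselves, here is a minimal sketch of that one-parameter-at-a-time (coordinate descent) loop in Python. It assumes the usual Texel-style logistic mapping from centipawns to expected score; the evaluate(fen, params) callback and the constant k are hypothetical placeholders, not Elephant's actual code.

Code: Select all

def expected_score(cp, k=1.13):
    # Texel-style logistic: map a centipawn evaluation to an
    # expected game result in [0, 1].
    return 1.0 / (1.0 + 10.0 ** (-k * cp / 400.0))

def total_error(positions, evaluate, params):
    # Mean squared error between predicted and actual results.
    # positions: list of (fen, result) with result in {0, 0.5, 1}.
    return sum((result - expected_score(evaluate(fen, params))) ** 2
               for fen, result in positions) / len(positions)

def tune(positions, evaluate, params, step=1):
    # Coordinate descent: nudge one parameter at a time and keep the
    # change only if the error over the whole position set drops.
    best = total_error(positions, evaluate, params)
    improved = True
    while improved:
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                params[i] += delta
                err = total_error(positions, evaluate, params)
                if err < best:
                    best, improved = err, True
                    break           # keep this change, try next parameter
                params[i] -= delta  # revert and try the other direction
    return params, best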
That worked well for the endgame tuning. Most of the raw TSOC evaluation terms were a bit high compared to the piece values, so the tuned scaling parameters went down from 100% to about 70%. Play with these modified values improved the playing strength and style of my Elephant engine (which is still in slow progress). I could see some spectacular aggressive mating attacks in the middle of the game against FairyMax, but also still some piece losses due to low search depth, a consequence of the very time-expensive complete TSOC evaluation. I did not measure the old and new playing strength of Elephant_TSOC in a tournament over many games; I just looked at some example games.
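In code terms, the tuned percentages act as simple multipliers on the raw TSOC topic scores, with material left untuned; a minimal sketch (the names are mine, not Elephant's):

Code: Select all

def scaled_eval(material, topic_scores, topic_percent):
    # topic_scores / topic_percent: parallel lists with one entry per
    # TSOC evaluation topic; a tuned value of 70 scales that topic's
    # raw centipawn score down to 70%. Material stays at 100%.
    return material + sum(score * pct // 100
                          for score, pct in zip(topic_scores, topic_percent))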
The tuning did not work well for the midgame scaling parameters; these had a tendency to drift towards 0%. There are a few explanations. The midgame is far away from the end result: both players of the games the positions come from could have made errors along the way, so the correlation between midgame evaluation and final result is weak. A good early evaluation can still lead to a loss, which makes the tuning more random. This was especially bad when I included the 1/2 draw results, because the overall error over 0, 1/2 and 1 favours evaluations near 0 centipawns for every drawn game, so I only used positions from games decided by mate. I also tried different approaches for a smooth interpolation of the mg/eg weighting of the values during the tuning procedure. Another method would probably be to give midgame positions their own precalculated result as the tuning target, by searching all positions to a fixed depth; the tuning target then becomes a predictor of the game state a few moves into the future. I did not have the time and motivation to implement this idea yet.
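For the mg/eg blending ("interpolation_permille" in the tuning log below), a common tapered-eval scheme looks like the following sketch; the phase weights are a widely used convention and my assumption, not necessarily what Elephant uses.

Code: Select all

# Common phase weights per piece type; the full starting set of
# knights, bishops, rooks and queens sums to 24.
PHASE = {'N': 1, 'B': 1, 'R': 2, 'Q': 4}

def interpolation_permille(counts):
    # counts: remaining non-pawn pieces, e.g. {'N': 4, 'B': 4, 'R': 4, 'Q': 2}
    # at the start. 1000 = pure midgame, 0 = pure endgame.
    phase = sum(PHASE[p] * n for p, n in counts.items())
    return min(1000, phase * 1000 // 24)

def tapered(mg, eg, permille):
    # Linear interpolation between the midgame and endgame scores.
    return (mg * permille + eg * (1000 - permille)) // 1000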
The book TSOC and its ideas are still a very good start for learning something about the evaluation of chess positions. Sometimes it goes a bit too far, and it has some missing patterns, especially for the endgame. Maybe the right way to use the book is to look at all the important patterns, make them easier to compute through more generalization, and include them in your own evaluation function. Important are the patterns with a high centipawn value that are also very common; have a look at the big sum_score values in the table that I posted in this thread on 2021-01-30.
A typical outcome of the tuning (not necessarily the best) looks like this:
Code: Select all
9. Implementation, start with 100/100, with random, material not tuned, midgame + endgame tuned,
mg and eg weighted with interpolation_permille, mg weighted again like endgame,
with stm correktion, not expected draw, 50%-150%
Texel tuning parameters:
After 100 rounds
Error mg 0.0306575, eg 0.0690825
pnum, PName, Value
0, TPMaterialValueMg , 100
1, TPMaterialValueEg , 100
2, TPPsqtValueMg , 50
3, TPPsqtValueEg , 69
4, TPCorrPieceValueMg , 58
5, TPCorrPieceValueEg , 126
6, TPMobilityValueMg , 55
7, TPMobilityValueEg , 50
8, TPPawnPieceValueMg , 88
9, TPPawnPieceValueEg , 131
10, TPOutpostValueMg , 72
11, TPOutpostValueEg , 113
12, TPImbalanceValueMg , 92
13, TPImbalanceValueEg , 55
14, TPKingSafetyValueMg , 50
15, TPKingSafetyValueEg , 69
16, TPPieceActivityValueMg, 104
17, TPPieceActivityValueEg, 128
18, TPPawnValueMg , 50
19, TPPawnValueEg , 133