Position learning and opening books

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Position learning and opening books

Post by Zenmastur »

mvk wrote:...I recall that I did all kinds of statistics negamaxing in the previous version of my program, and I have written something about it at the time. The fundamental concern I have with it is that the value of statistics only increases with the sqrt of the number of underlying games, and games are expensive.
The MAB index I've been using is very slow to change. Nonetheless, the effect when the statistics for a node change abruptly, because they were negamaxed up from a node deeper in the tree, is quite dramatic. In almost all cases the "apparent" number of games played from the node that received the new stats will drop. The MAB algorithm tries to keep the indexes for all descendant nodes roughly equal. A sudden drop in the number of games played from one node, due to a negamax of the statistics, will cause its index to rise sharply. An increase in the average game score will also increase this node's index. The combination of both conditions occurring simultaneously is dramatic. It changes the behavior of the book: it gives it a sense of direction, or purpose if you prefer.

The behavior described above can be enhanced if desired by altering the index equation.
The index equation I have been playing with is I = r(i) + sqrt(2*ln(N)/n(i)), where I is the calculated index, r(i) is the average reward or game score for move i, N is the total number of times the position has been seen, and n(i) is the number of times move i has been played.

If, for example, you wanted to make the behavior non-linear with respect to average game score, you could change the equation to I = r(i)^3 + sqrt(2*ln(N)/n(i)). I have done a lot of experimenting with derivatives of this equation. A more general form is I = a*r(i)^b + (log_c(N)/n(i))^d + f, where a, b, c, d, and f are constants. "a" can be used to adjust the relative weight given to exploitation versus exploration; making "a" larger makes the algorithm greedy. "b" adjusts how non-linear you want the rewards to be. I set "c" equal to the number of legal moves in the position, but other values can be used; e = 2.71828 is too small a base to use in my opinion, though I still have a lot of testing to do. "d" was 1/2 (the square root) in the original equation, which seems too high IMO. "f" can be used to bias the results by a constant value, which can be useful during tests. None of this is written in stone since I'm still experimenting at this stage.
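For concreteness, here is a small Python sketch of this generalized index (my own illustration of the formula above, not the actual book code; the function and parameter names are made up, and note that classic UCB1 also keeps the factor 2 under the square root). With a=1, b=1, c=e, d=0.5 and f=0 it reduces to r(i) + sqrt(ln(N)/n(i)).

Code: Select all

import math

def move_index(avg_score, times_played, total_visits, legal_moves,
               a=1.0, b=1.0, d=0.5, f=0.0):
    # Generalized index I = a*r(i)^b + (log_c(N)/n(i))^d + f,
    # with the log base c taken as the number of legal moves in the position.
    # Requires times_played >= 1 and legal_moves >= 2.
    exploitation = a * (avg_score ** b)
    exploration = ((math.log(total_visits) / math.log(legal_moves)) / times_played) ** d
    return exploitation + exploration + f

def pick_book_move(stats, legal_moves):
    # stats: dict move -> (average game score, times played).
    # Unvisited moves would need separate handling (e.g. treat their index as infinite).
    total_visits = sum(n for _, n in stats.values())
    return max(stats, key=lambda m: move_index(stats[m][0], stats[m][1],
                                               total_visits, legal_moves))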
mvk wrote:...Therefore I consider searches more valuable near the tips. And closer to the root negamax does the job. Looking at statistics of arbitrary games feels like a very slow Monte Carlo experiment with an embedded data quality problem, especially from human games.
This is precisely why blindly following book moves should be avoided, why third party statistics should be heavily discounted so they only have influence early in the learning process, and why they should be stored separately from engine results so they don't contaminate them.
mvk wrote:...It might be better to generate many fast games instead of relying on external PGNs for that.
I thought that's what I was proposing! :P

The problem is getting the positions into the book FIRST so they can collect the needed data. It's much faster to do this with existing PGNs than through game play. Even if some of the positions will never be played by an engine, it's better to get all the positions you can from PGNs during creation. This solves the problem of data being missed during game play because the positions needed to record it haven't been added to the book yet. I.e. it saves a disproportionate amount of time compared to the small amount of disk space wasted.

It makes little difference if there are positions in the book that won't ever be played. If they are never played they will also never be searched, and no game statistics will ever be gathered, so they will require no resources beyond the disk space they occupy. They will just sit in the book unused. I don't see this as much of a problem, and it's considerably better than wasting a bunch of time getting the positions into the book through game play IMO.

There are other ways to get around this problem, but they require a lot of extra time to be spent on it. Disk space is cheap and my time is very expensive, so I think this is a good trade-off.

Regards,

Zen
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Position learning and opening books

Post by Zenmastur »

Ozymandias wrote:
Zenmastur wrote:There are many games that have results that aren't congruent with the terminal position. I've seen a lot of games that were drawn, but when the terminal position is analyzed it's winning for one side or the other.
It's worse with correspondence games. About 10% have a wrong result, although we aren't talking about draws here, but rather the winning side resigning, for any number of outside-the-board reasons.
I agree. I have seen too many types of problems with games to list them all; I pointed out only one of them. Somewhere on my computer I compiled a list of the different types of problems I have found, and I started to compile possible solutions as well. Then I realized how much time these would take to implement and quickly gave up on the project.

What is needed is one solution that deals with all the problems. I have been toying with a couple of ideas, but I'm not quite ready to commit to any particular approach just yet.

Regards,

Zen
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Position learning and opening books

Post by Zenmastur »

mvk wrote:
Zenmastur wrote:There are many games that have results that aren't congruent with the terminal position. I've seen a lot of games that were drawn, but when the terminal position is analyzed it's winning for one side or the other. Another problem is when one side has a clear advantage out of the opening but blunders and loses the game. But I don't have a good solution that is also quick and easy to implement.
I prescan foreign games to combat this. Scan the game, starting from the start, and perform at each game position both a shallow and a slightly deeper search, until at both depths abs(score) exceeds 2.5. For most positions the deeper search can be skipped thanks to short-circuit evaluation of the && condition, until near the end of the prospect line. The resulting line is then my line of interest, and what happens after that I don't care about, including the game result. Secondly, there is a blunder filter in that screening, needed for PGNs of human games. If any side makes a "too big and unneeded error" (in quotes because there are some details here to get right), I throw away the line anyway.

This is only a vetting process, before a game can even be considered for further processing.
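A rough sketch of this kind of prescan, using python-chess with a UCI engine (my own reading of the description above, not the actual code; the depths and the blunder threshold are placeholder values):

Code: Select all

import chess
import chess.engine
import chess.pgn

SHALLOW_DEPTH, DEEPER_DEPTH = 8, 14   # placeholder depths
DECIDED_CP = 250                      # abs(score) > 2.5 pawns, in centipawns
BLUNDER_CP = 200                      # placeholder "too big an error" threshold

def white_score_cp(engine, board, depth):
    info = engine.analyse(board, chess.engine.Limit(depth=depth))
    return info["score"].pov(chess.WHITE).score(mate_score=100000)

def line_of_interest(game, engine):
    # Returns the vetted opening line, or None if the blunder filter rejects the game.
    board = game.board()
    keep, prev = [], 0
    for move in game.mainline_moves():
        board.push(move)
        shallow = white_score_cp(engine, board, SHALLOW_DEPTH)
        # The deeper search only runs when the shallow one is already decisive
        # (the short-circuit && mentioned above).
        if abs(shallow) > DECIDED_CP and \
           abs(white_score_cp(engine, board, DEEPER_DEPTH)) > DECIDED_CP:
            break
        if abs(shallow - prev) > BLUNDER_CP:   # crude big-swing test
            return None
        prev = shallow
        keep.append(move)
    return keep

# usage sketch:
#   engine = chess.engine.SimpleEngine.popen_uci("/path/to/uci-engine")
#   line = line_of_interest(chess.pgn.read_game(open("games.pgn")), engine)
#   engine.quit()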
I have tried various ad hoc methods similar to yours; they work more or less, but I want something I can use for very large databases (5+million games) that can be applied once. Then I could just publish them for everyone to use.

Game collection and publishing are much better now than they used to be, so I expect far fewer issues from games published in the future.

Regards,

Zen
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
syzygy
Posts: 5647
Joined: Tue Feb 28, 2012 11:56 pm

Re: Position learning and opening books

Post by syzygy »

Zenmastur wrote:1.) I already pointed out that playing games to collect statistical information is more time consuming than using an engine search. They can both be used to uncover poor play. The difference is that to do it statistically many games (10 to 100s or more, depending on various factors) must be played, each of which requires many engine searches (more than 110 on average for computer games). So, of course, it will be orders of magnitude faster to do things with an engine search than by collecting game statistics.
Collecting game statistics to do what? Add positions to a book? Not sure what you mean.
2.) Adding a large number of positions to the book when it's created is much faster than adding them through game play.
What do you mean? Adding large numbers of positions is always very easy. Just take any position in the book and add its children. Chess has a near endless supply of positions. Of course that is not helpful.
The rate at which positions are added during creation is well over 100K positions per second. When any "reasonable" criterion is used for adding positions from engine play, the rate is about 1 position per game.
"The rate positions are added during creation is well over 100K positions per second." What does that sentence refer to? Is someone adding positions "during creation"? Is something already being created? What is he doing to add 100K positions per second? (Running perft or so?)

I'm not sure what you are talking about.

Are you thinking of creating a book from an existing collection of games? Then you are comparing apples to oranges. And I'm not sure why you're comparing these two in the first place.
Even if only a small percentage of the positions that a book is created with are useful it's still orders of magnitude faster than adding those positions by playing games.
Yes, 100K > 1.

Making a high-quality book is not about collecting as many positions as you can, but about figuring out which moves lead to good positions.

If you want an engine to do that, which seemed to be the point of your opening post, then the logical thing to do would seem to be somehow using the engine to figure out which moves lead to good positions. That is the IDEA approach and that is Marcel's approach. Basically evaluate the leaves using a search of a particular depth and back up the scores to the root.

It might certainly be useful to have more information per node than just the backed up evaluation score, but going only on statistics is, in my view, insufficient. Only knowing the moves of a game and its outcome is not of much value (unless you know it was played by two top engines at long time control).
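A bare-bones sketch of that back-up step, assuming a hypothetical book represented as a mapping from a position key to its candidate (move, child_key) pairs, and a separate table of searched leaf evaluations, both from the side to move's point of view:

Code: Select all

def backed_up_score(book, leaf_scores, key, cache=None):
    # Negamax over the book graph: a leaf keeps its searched score, an interior
    # node takes the best negated child score.  The cache matters because a
    # real book is a DAG full of transpositions.
    cache = {} if cache is None else cache
    if key in cache:
        return cache[key]
    children = book.get(key)
    if not children:                      # book leaf: use its search result
        score = leaf_scores[key]
    else:
        score = max(-backed_up_score(book, leaf_scores, child, cache)
                    for _move, child in children)
    cache[key] = score
    return score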
It takes a lot of good decisions to make a project work. A single poorly thought-out decision can ruin a project. Ill-considered comments like the following:
Ill-considered! Heh.
syzygy wrote: ... Adding "random" games, even if they are of high quality, will not have much of an effect, as it would likely only add some moves to a line that you won't be playing anyway (or only with extremely small probability).
can be taken incorrectly. This could easily happen due to its dubious wording: e.g., what is a high-quality "random" game, and what relationship do such games have to making an opening book?
A random game is any game you get from somewhere and just add blindly to the book. You gave a list of examples. Such games generally won't help your book. See what I wrote, no need to repeat myself.
This could mislead someone into using overly restrictive rules for a position being included in the book. What harm will be caused by having unused positions in an opening book? It isn't likely to slow down access to the good data, so what does it matter?
I did not say it causes harm, I said it "will not have much of an effect". It is not going to help your book. And therefore it is a waste of resources that could have been used to actually improve the book.
On the other hand, not including 100,000 positions that the engine will use can cost 1,000 hours or more in additional playing time to get these positions added to the book.
Just run perft?
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: Position learning and opening books

Post by mvk »

Zenmastur wrote:but I want something I can use for very large databases (5+million games) that can be applied once.
That is indeed how I apply it. For >5M PGN collections, once.
[Account deleted]
syzygy
Posts: 5647
Joined: Tue Feb 28, 2012 11:56 pm

Re: Position learning and opening books

Post by syzygy »

From your other posts in this thread I understand that you're indeed starting with a large collection of games (no matter the quality, it seems). From this collection you create a book. Then you start playing games, keep track of statistics, and try to find the better lines in this book.

You find it important that the book has as many positions as possible. You don't want to run the risk of missing some 100,000 positions here and there.

Why not simply start from a virtual book that has "all" positions? Then you won't miss any position and there is no need to actually store this book on disk.

Now just start playing games and do as you like. Collect statistics on the positions that actually occur. Those positions will have to be stored on disk, of course. The positions you do not see will not get statistics, so never need to be stored. That should save lots of disk space, you can skip the initial "creation" step, and you won't miss *any* position.
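A minimal sketch of that lazy scheme in Python (an illustration of the idea, not an actual book format; it uses python-chess's polyglot Zobrist hash as the position key):

Code: Select all

import collections
import chess
import chess.polyglot

# The "virtual book" contains every position implicitly; an entry is only
# materialised, and ever written to disk, once the position actually occurs.
MoveStats = collections.namedtuple("MoveStats", "games score_sum")
book = {}   # zobrist key -> {uci move: MoveStats}, created on first visit

def record_game(moves, result_for_white):
    # moves: list of chess.Move from the initial position; result_for_white: 1, 0.5 or 0.
    board = chess.Board()
    for move in moves:
        key = chess.polyglot.zobrist_hash(board)
        per_move = book.setdefault(key, {})
        reward = result_for_white if board.turn == chess.WHITE else 1 - result_for_white
        old = per_move.get(move.uci(), MoveStats(0, 0.0))
        per_move[move.uci()] = MoveStats(old.games + 1, old.score_sum + reward)
        board.push(move)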
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: Position learning and opening books

Post by mvk »

Zenmastur wrote:The problem is getting the position into the book FIRST so they can collect the needed data.
Maybe this is the core of what I don't understand. I don't see how this follows. There is a hidden assumption somewhere in your story? Maybe that a book file must be of a fixed size forever, or something like that.
[Account deleted]
Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Position learning and opening books

Post by Zenmastur »

I haven't had time to write a response to your questions just yet; maybe later tonight or tomorrow.

Regards,

Zen
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Position learning and opening books

Post by Zenmastur »

mvk wrote:
Zenmastur wrote:The problem is getting the position into the book FIRST so they can collect the needed data.
Maybe this is the core of what I don't understand. I don't see how this follows. There is a hidden assumption somewhere in your story? Maybe that a book file must be of a fixed size forever, or something like that.
OK, we're obviously having a communication problem. I thought I explained this in sufficient detail in one of my previous posts, but apparently not, so let's try this again.

I'm not trying to collect as many positions as I can. If I wanted to do that I would use perft. I'm trying to collect as many positions as I can that are likely to be seen again. The history of what has already been played is the precursor to what will be played; i.e. we learn by studying the mistakes and practices of our predecessors. Engines don't do this, but humans do. It's not an accident that many of the moves found in the openings of strong players are judged to be best by engines.

Let's keep it simple and start with the rules for adding a position to the book during creation.

In order for a position to be added to the book it must have been seen "X" number of times. If "X" is set to one then all moves are immediately added to the book. I have actually used this rule with "X" set to one and it isn't very useful. It does, however, produce extremely large books. But most of the positions aren't ever seen again, so you end up with a book so large as to be unwieldy, with little or no benefit.

Let's say you have a collection of 100 million games with which to build your book. If "X" is set to 100, you will include all positions that occur with a frequency of at least 1 game in a million. The quality of the games isn't too important, because for a blunder to be included in the book it must have been played at least 100 times. Most blunders aren't repeated at all; few will be seen even 10 times. The ones that are repeated 100 times, assuming there are some that fall into this category, will be so few as to be inconsequential, because there won't be enough of them to use large quantities of disk space. Furthermore, unlike most current books, moves are searched before they are played the first time, so the engine will ferret out the few remaining tactical blunders before they have a chance of being played.

You can, if it pleases you, remove games by low-rated players, as most don't know the openings well enough to contribute significantly to the book. But I doubt that this is essential to producing a good set of starting positions.

After the book is created and games are being played, you still need rules about which positions should be added to the book. These rules don't have to be the same as the rules for book creation. Whatever rules are adopted should try to include positions that will be seen again. If a move is seen more than once there is a good chance that it'll be seen repeatedly. This is true because engines are highly tactical in nature and the available tactics in a position don't ever change. Engines are highly deterministic anyway, so if a position is seen twice it's VERY likely to be seen again.

Since this is the case, "X" is set to two for game play. Setting it higher than two will simply slow the process of adding new positions. The problem is that this is still too slow at adding positions IMO. It's not unusable, but it would be better if positions could be added faster than this rule allows. This is one reason why starting with a large set of opening positions should be used with this method.
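A sketch of that frequency threshold over a PGN collection, again using python-chess (my own illustration; the ply cut-off is an added assumption just to keep the sketch bounded, and "X" would be something like 100 at creation time and 2 during play):

Code: Select all

import collections
import chess.pgn
import chess.polyglot

def positions_seen_at_least(pgn_path, x=100, max_plies=60):
    # Count how often each position occurs in the collection and keep only
    # those seen at least x times (the "seen X times" rule described above).
    counts = collections.Counter()
    with open(pgn_path, encoding="utf-8", errors="replace") as pgn:
        while True:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break
            board = game.board()
            for ply, move in enumerate(game.mainline_moves()):
                if ply >= max_plies:
                    break
                counts[chess.polyglot.zobrist_hash(board)] += 1
                board.push(move)
    return {key for key, n in counts.items() if n >= x}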

I have looked at various ways to increase the rate at which positions are added. I haven't decided on a particular solution yet.

One remedy I considered is to do this off-line, by saving all games played and processing them after the fact, or by using saved games to augment what was learned in "real time". But if it's done this way, the search information must be stored in the PGN file or it will be lost. Another disadvantage of doing this off-line, which is far more important IMO, is that learning isn't done in real time and post-game processing must be done often and religiously. This is a lot of work, and work that most people won't do. They won't take the time to perform the required tasks day after day and year after year, and then the system won't work as intended. The requirement for off-line post-processing of games makes this option a non-starter in my estimation. I want something that will do its job with VERY little intervention by the end user. I.e. you create the book and play games; those are the only requirements for the system to work properly.

Hopefully this explanation will shed enough light on the process to explain why I want to start the book with a large number of positions.

Regards,

Zen
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: Position learning and opening books

Post by mvk »

mvk wrote:I'm currently in the midst of quantifying some of the book improvement results. One coarse measurement I did last month is where I pitted my engine with different versions of the book (from 4 different years) against each other, and one without book. No opponent learning was active in these test matches, just a static book. 'b<year>' means that a book from that year was used. Results are:

Code: Select all

Rank Name                Elo    +    - games score oppo. draws 
   1 rookie3.9x1-b2015     0   12   12  2000   54%   -22   48% 
   2 rookie3.9x1-b2014   -10   10   10  3000   55%   -41   47% 
   3 rookie3.9x1-b2013   -34    9    9  4000   57%   -79   44% 
   4 rookie3.9x1-b2012   -88   10   10  3000   50%   -88   43% 
   5 rookie3.9x1-bnone  -219   13   13  2000   26%   -61   36% 
This encourages me to believe that there is some kind of progression in the books, but also that that effect starts to taper off.
Returning to the above results once more: I have recently completed a higher resolution repeat of the above experiment. For that I picked 1 book version from every month in the past 4 years (namely, the last version of each month). Then I pitted them against each other with cutechess-cli, always using the same version of my program but letting that play with a different book version. Maximum allowable path error for the move selection is 0.1p. Each match had 1,000 games at 90s+0.5s time control for a total of 159,000 games. There are too many entrants for a full round robin test, therefore each program/book combo 'N' played against version N-1, N-2, N-4, N-8, N-16, etc (and, consequently, also against N+1, N+2, N+4, etc). Unlike the earlier test, I didn't enter a no-book variant.

The result is depicted in the plot below. The ratings and margins were calculated by bayeselo with default settings.

[Image: plot of Elo for the monthly book versions over the past four years, with error margins from bayeselo]

Both the books' elo increase over the years and the recent taper-off effect are confirmed by the second experiment. I tried to find a correlation between the delta-elo and the expansion methods I used in each book version (e.g. learning, dropout expansion, importing playchess games, etc.), but it is not so clear what helps most, unfortunately. Maybe more on that later.

In the meantime I have streamlined my original set of scripts and I'm deepening the dropout searches of the repertoire. There is now a live visualisation of this update process running here, where I sometimes go watch when I'm bored :-). The yellow arrows represent moves that stay in the book (without indication if they are good or bad btw) and the red arrow is the current dropout move for that position (that is, the best move leaving book).

Finally, I did a test to see if there is really an impact on allowing for a certain path error, and corresponding score-wise position degradation, in exchange for opening line variability. For this I played the same program with the same, most recent, book version against itself while applying different maximum allowable path errors in book move selection: I tested for 0.10 pawn, 0.08 pawn, 0.06 pawn, 0.04 pawn and 0.02 pawn maximum path error. On the servers I normally use 0.1 pawn, which is very wide. During tournaments I normally use a much smaller value. There was a claim before that doing this weakens the program in theory and practice. The latter can be tested, which I did here. See results below. My conclusion is that, within a reasonable number of games, there is no observable correlation between performance and allowable path error in this range. (Note that a 0.1 pawn disadvantage would normally correspond to a 10 elo loss)

Code: Select all

Rank Name               Elo    +    - games score oppo. draws 
   1 rookie3.9x1-m040     1    3    3 34000   50%     0   32% 
   2 rookie3.9x1-m020     0    4    4 24000   50%     0   34% 
   3 rookie3.9x1-m100     0    4    4 24000   50%    -1   33% 
   4 rookie3.9x1-m080    -1    3    3 34000   50%     0   32% 
   5 rookie3.9x1-m060    -1    3    3 44000   50%     0   32% 
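For readers unfamiliar with the path-error idea, here is a toy illustration (not Rookie's actual selection code): each book move's error is its score deficit against the best book move, the deficits accumulate along the chosen path, and a move remains playable only while the accumulated error stays within the budget.

Code: Select all

import random

def choose_book_move(candidates, path_error, max_path_error=0.10):
    # candidates: dict move -> backed-up score in pawns (side to move's view).
    # Returns (move, updated path error), choosing at random among the moves
    # whose added deficit keeps the accumulated path error within the budget.
    best = max(candidates.values())
    budget = max_path_error - path_error
    playable = [m for m, s in candidates.items() if best - s <= budget]
    move = random.choice(playable)
    return move, path_error + (best - candidates[move])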
[Account deleted]