Diminishing returns of increasing search depth

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

Uri Blass
Posts: 10282
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Diminishing returns of increasing search depth

Post by Uri Blass »

bob wrote:Until we reach the point where a program can discover the game theoretic value from any starting position, it can always be drawn or beaten.

I do not agree with it.

I believe that if the search depth is big enough the program is unbeatable from the starting position, but cannot discover the game-theoretic value from any starting position.

The depth may be big enough to find moves that are good enough to get at least a draw from the opening position against every opponent, but not good enough to prove it or to beat every opponent that makes a losing mistake.

I suspect that programs with the right opening book (not an opening book of more than 10^9 moves), searching 40 plies forward in the middlegame and more than that in the endgame, may be practically unbeatable with white and black.

Uri
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Diminishing returns of increasing search depth

Post by Laskos »

Don't scare me, I made a bet with Eelco on a bottle of good Champagne for the 1st of August 2017. I did two Padé fits, each with 5 parameters, one with asymptotic behaviour 1/x, the second tending to a constant, throwing away the first 3 results (the data are from ply 4 up).

The data of the first (1/x) fit up to ply 80 are

{152.026, 224.031, 163.896, 151.125, 142.293, 135.014, 128.673, \
123.011, 117.885, 113.205, 108.904, 104.933, 101.251, 97.8252, \
94.629, 91.6388, 88.8346, 86.1992, 83.7173, 81.3756, 79.1625, \
77.0674, 75.0811, 73.1952, 71.4022, 69.6953, 68.0685, 66.5161, \
65.0333, 63.6153, 62.258, 60.9576, 59.7105, 58.5136, 57.3637, \
56.2583, 55.1948, 54.1708, 53.1842, 52.2329, 51.3151, 50.4291, \
49.5732, 48.7458, 47.9457, 47.1714, 46.4218, 45.6957, 44.9919, \
44.3095, 43.6475, 43.005, 42.3812, 41.7752, 41.1864, 40.6139, \
40.0571, 39.5154, 38.9881, 38.4748, 37.9748, 37.4876, 37.0128, \
36.5498, 36.0983, 35.6579, 35.228, 34.8084, 34.3987, 33.9985, \
33.6075, 33.2255, 32.852, 32.4868, 32.1296, 31.7803, 31.4384, \
31.1038, 30.7763, 30.4556}


Correlation 0.928473



The data for the second fit (constant asymptote, i.e. linear Elo growth) up to ply 80 are

{152.028, 223.501, 168.821, 149.012, 138.214, 131.337, 126.555, \
123.031, 120.324, 118.178, 116.434, 114.99, 113.773, 112.734, \
111.837, 111.054, 110.364, 109.753, 109.207, 108.716, 108.273, \
107.87, 107.503, 107.167, 106.858, 106.573, 106.31, 106.065, 105.838, \
105.625, 105.427, 105.241, 105.067, 104.902, 104.748, 104.602, \
104.463, 104.333, 104.209, 104.091, 103.979, 103.872, 103.77, \
103.673, 103.581, 103.492, 103.407, 103.326, 103.248, 103.173, \
103.101, 103.032, 102.966, 102.902, 102.84, 102.781, 102.723, \
102.668, 102.614, 102.563, 102.513, 102.464, 102.418, 102.372, \
102.328, 102.286, 102.244, 102.204, 102.165, 102.127, 102.091, \
102.055, 102.02, 101.986, 101.953, 101.921, 101.89, 101.86, 101.83, \
101.801}


Correlation 0.91354


So, diminishing returns for the additional ply are a little more plausible. There is much noise in the "experimental" results; I would like someone to play 1000 games with Crafty or Toga up to ply 14, to have less noise. My prediction, as of now, is that a doubling of time is worth around 70 Elo points and that in 2017 it will be worth 50 Elo points (my bet was 45 points, but whoever is closer wins the bet).
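
For anyone who wants to play with this kind of fit, here is a rough Python sketch comparing a rational (Padé-type) curve with a 1/x tail to one with a constant tail. It is only an illustration: the per-ply gains in it are made-up placeholder numbers, not my measured results, and scipy stands in for the tool I actually used.

# Illustration only: compare two rational (Pade-type) fits to an
# Elo-gain-per-extra-ply series, one decaying like 1/x, one tending
# to a constant.  The data points are placeholders, not the values above.
import numpy as np
from scipy.optimize import curve_fit

plies = np.arange(4, 21)
gain = np.array([220., 195., 175., 158., 144., 132., 122., 113., 106.,
                 99., 93., 88., 84., 80., 76., 73., 70.])

def decay_model(x, a, b, c):
    # behaves like a/x for large x -> diminishing returns
    return (a * x + b) / (x**2 + c)

def plateau_model(x, a, b, c):
    # tends to the constant a for large x -> constant gain per ply
    return (a * x + b) / (x + c)

for name, model, p0 in [("1/x asymptote", decay_model, (1000., 0., 10.)),
                        ("constant asymptote", plateau_model, (60., 600., 1.))]:
    popt, _ = curve_fit(model, plies, gain, p0=p0, maxfev=20000)
    ss_res = np.sum((gain - model(plies, *popt)) ** 2)
    ss_tot = np.sum((gain - gain.mean()) ** 2)
    print(name, np.round(popt, 3), "R^2 =", round(1 - ss_res / ss_tot, 4))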

Kai
towforce
Posts: 11575
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Diminishing returns of increasing search depth

Post by towforce »

Laskos wrote:So, diminishing returns for the additional ply are a little more plausible.
There is a point I have to clarify - if only for my own benefit - and that is the very real possibility that there may be one or more fundamental problems with the Elo rating system. In the past, the SSDF found a problem with chess computers' Elo ratings getting closer together as they did ever-larger numbers of tests, and they put forward the possibility of a flaw in the rating system as an explanation.

More fundamentally, if you think about "simpler" games than chess, there comes a time when either they are "solved" or they become unbeatable by computers searching to a certain depth. There is no fundamental reason why this shouldn't apply to chess as well - but if it is the case, then Elo ratings will have an upper limit.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
plattyaj

Re: Diminishing returns of increasing search depth

Post by plattyaj »

Interesting question - do we see diminishing returns because we've built in increasing levels of pruning as we descend the tree or have we built in increasing levels of pruning because we know there will be diminishing returns?!

Andy.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Diminishing returns of increasing search depth

Post by bob »

Uri Blass wrote:
bob wrote:Until we reach the point where a program can discover the game theoretic value from any starting position, it can always be drawn or beaten.

I do not agree with it.

I believe that if the search depth is big enough the program is unbeatable from the starting position, but cannot discover the game-theoretic value from any starting position.

The depth may be big enough to find moves that are good enough to get at least a draw from the opening position against every opponent, but not good enough to prove it or to beat every opponent that makes a losing mistake.

I suspect that programs with the right opening book (not an opening book of more than 10^9 moves), searching 40 plies forward in the middlegame and more than that in the endgame, may be practically unbeatable with white and black.

Uri
Believe what you want. I believe that if I can go one ply deeper than you, or even just as deep (since I will effectively see deeper after you make your move), I am going to win games until you reach the point where you can see to the end of the tree. Until that point, the deeper search is going to win. There are too many very difficult-to-win games, KRP vs KR being a common one, where depth is the key.

They may be unbeatable by humans, but not by other computers that can go deeper.
jarkkop
Posts: 198
Joined: Thu Mar 09, 2006 2:44 am
Location: Helsinki, Finland

Re: Diminishing returns of increasing search depth

Post by jarkkop »

This is what I used:

General model Rat01:
f(x) = (p1) / (x + q1)
Coefficients (with 95% confidence bounds):
p1 = 1462 (1084, 1840)
q1 = 2.649 (1.354, 3.944)

Goodness of fit:
SSE: 1.023e+004
R-square: 0.8993
Adjusted R-square: 0.8902
RMSE: 30.5
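
For anyone without the Curve Fitting Toolbox, the same two-parameter model is easy to evaluate elsewhere. A small Python sketch using only the coefficients reported above (no refit, since the raw per-ply data are not included here):

# Evaluate the reported model f(x) = p1 / (x + q1) at a few depths.
# Coefficients are the ones quoted above; this is not a refit.
p1, q1 = 1462.0, 2.649

def gain_per_ply(ply):
    # predicted Elo gain from searching one extra ply at this depth
    return p1 / (ply + q1)

for d in (5, 10, 15, 20, 30, 40):
    print("ply %2d: about %5.1f Elo for one more ply" % (d, gain_per_ply(d)))
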
jarkkop
Posts: 198
Joined: Thu Mar 09, 2006 2:44 am
Location: Helsinki, Finland

Re: Diminishing returns of increasing search depth

Post by jarkkop »

Could you use your cluster to test with Crafty what kind of results you get with it?

You can get so many games played so quickly that it would give us more reliable information. A humble request.



- Jarkko
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Diminishing returns of increasing search depth

Post by Laskos »

jarkkop wrote:This is what I used:

General model Rat01:
f(x) = (p1) / (x + q1)
Coefficients (with 95% confidence bounds):
p1 = 1462 (1084, 1840)
q1 = 2.649 (1.354, 3.944)

Goodness of fit:
SSE: 1.023e+004
R-square: 0.8993
Adjusted R-square: 0.8902
RMSE: 30.5
This is a Padé fit with 2 parameters, with the same 1/x asymptote; that's why our results agreed so well. R-square looks very good for only two parameters.

Kai
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Diminishing returns of increasing search depth

Post by bob »

jarkkop wrote:Could you use your cluster to test with Crafty what kind of results you get with it?

You can get so many games played so quickly that it would give us more reliable information. A humble request.



- Jarkko
The question is, what to test? Crafty (N) vs Crafty (N+1) is not, IMHO, going to say much about what the additional ply does. It could easily overstate or understate the improvement.

But tell me how you want the test to run and I can certainly run it with some big numbers of games to at least cut the error bar down to something like +/-4 or less.
jarkkop
Posts: 198
Joined: Thu Mar 09, 2006 2:44 am
Location: Helsinki, Finland

Re: Diminishing returns of increasing search depth

Post by jarkkop »

Nothing fancy, just the thing you suggested.

Crafty (N) vs. Crafty (N+1), with N+1 going from 2 to 15.

so

Crafty (1 ply depth) vs Crafty (2 ply depth)
Crafty (2 ply depth) vs Crafty (3 ply depth)
Crafty (3 ply depth) vs Crafty (4 ply depth)
...
Crafty (14 ply depth) vs Crafty (15 ply depth)

Run each match until the error bar is +/-4.

Even if it overstates or understates the improvement, it would be nice to see how the values behave in Crafty's case. It would also be nice if you could show the draw percentage for each match in the results. Rémi Coulom's bayeselo.exe can give all the values. The draw rate should increase as depth increases.
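
As a rough idea of what "+/-4" means in games, here is a simple normal-approximation check on the score in Python. It is not what BayesElo actually computes, and the W/D/L counts in the example are invented.

# Rough normal-approximation error bar for an Elo estimate from W/D/L.
# BayesElo uses a more sophisticated model; this is only a sanity check,
# and the example counts are invented.
import math

def elo_and_error(wins, draws, losses, z=1.96):
    n = wins + draws + losses
    score = (wins + 0.5 * draws) / n                 # average score per game
    elo = 400.0 * math.log10(score / (1.0 - score))  # logistic Elo from score
    var = (wins * (1.0 - score) ** 2 +
           draws * (0.5 - score) ** 2 +
           losses * (0.0 - score) ** 2) / n          # per-game score variance
    se_score = math.sqrt(var / n)                    # std. error of the mean score
    slope = 400.0 / (math.log(10.0) * score * (1.0 - score))
    return elo, z * se_score * slope                 # Elo and 95% error bar

print(elo_and_error(5200, 3100, 1700))   # roughly (+127, +/-5.8) for 10000 games

At that draw rate roughly 20,000 games per match would be needed to get the error bar down to +/-4; a higher draw rate shrinks the score variance and so the required number of games.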

If you have time you can add 15 ply vs. 16 ply or go even deeper.

Thanks

Jarkko