Why Chess Might Be Almost "Solved" IMO

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

User avatar
towforce
Posts: 12653
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Why Chess Might Be Almost "Solved" IMO

Post by towforce »

Ultimately, chess is about avoiding mistakes. I am defining a mistake as a move that has one of the following effects:

* turns a winning position into a drawn position

* turns a winning position into a losing position

* turns a drawn position into a losing position

A computer that plays perfectly never makes a mistake - and, by my personal definition (which is clearly different from the formal definition), also resolves chess. My definition of "near-perfect" play is making mistakes only very rarely (e.g. 1 mistake per 10,000 moves). The potential size of a chess game is very large (many thousands of ply deep) - but it may be that near-perfect play can be achieved with a relatively short ply depth (the exact depth required will, of course, depend on the quality of the eval - but I would be surprised if the difference between poor eval and good eval was more than the equivalent of an extra 5-10 ply of full-width search).
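The definition above can be made concrete with a tiny sketch, assuming a hypothetical oracle (such as a tablebase) that reports the game-theoretic value of any position from the side to move's point of view - the function and value encoding here are illustrative, not from any real engine:

```python
# Game-theoretic values, always from the perspective of the side to move.
WIN, DRAW, LOSS = 1, 0, -1

def is_mistake(value_before, value_after_opponent_view):
    """Classify a move as a mistake, per the definition in the post.

    value_before: oracle value of the position before the move (mover's view).
    value_after_opponent_view: oracle value after the move, reported from
    the opponent's perspective (it is now their turn).
    """
    # Convert the post-move value back to the mover's perspective.
    value_after = -value_after_opponent_view
    # A mistake strictly worsens the game-theoretic result:
    # win -> draw, win -> loss, or draw -> loss.
    return value_after < value_before
```

Note that by this encoding a move from an already lost position can never be a mistake, matching the definition.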

Whether we are close to near-perfect play (by my definition) depends critically on the shape of the graph that plots ply depth against probability of making a mistake - and it is this I would like to read people's opinions about. Here are some possibilities:

1. Diminishing returns

Code: Select all

Probability |*
Of Mistake  |*
            |*
            | *
            |  *
            |   *
            |    *
            |      *
            |        *
            |          *
            |             *
            |                *
            |                   *
            |                       *
            |                          *
            |                              *
            |                                   *
            |                                         *
            |                                                  *      *
            |
            |
            |
            |
            -----------------------------------------------------------
                 Depth of search
If the above is correct, then it is very surprising indeed that faster computers lead to significantly better play - and that faster computers let programmers gain further speed by removing knowledge from the evaluation. It just doesn't match what has actually happened - so I don't think that it is correct.

Note: IMO, the above graph also covers the "S" shape (a backward "S" in this case), where you get poor returns at first, then good returns (the steep part of the "S"), then poor returns beyond a certain point.

There are two other possibilities - both of which imply that we are actually close to resolving chess (by my definition):

2. Steady returns

Code: Select all

Probability |*
Of Mistake  |  *
            |    *
            |      *
            |        *
            |          * 
            |            *
            |              *
            |                *
            |                  *
            |                    *
            |                      *
            |                        *
            |                          *
            |                            *
            |                              *
            |                                *
            |                                  *
            |                                    *
            |                                      *
            |                                        *
            |                                          *
            |                                            *
            -----------------------------------------------------------
                 Depth of search
3. Increasing returns

Code: Select all

Probability |*   *
Of Mistake  |          *
            |               *
            |                   *
            |                      *
            |                        * 
            |                          *
            |                           *
            |                            *
            |                             *
            |                              *
            |                               *
            |                               *
            |                                *
            |                                *
            |                                *
            |                                 *
            |                                 *
            |                                 *
            |                                 *
            |                                  *
            |                                  *
            |                                  *
            -----------------------------------------------------------
                 Depth of search
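For concreteness, the three candidate shapes could be modelled with simple functional forms - the formulas and constants below are illustrative guesses, not measured data:

```python
import math

def diminishing(depth, scale=5.0):
    # Shape 1: exponential decay - big early gains, then a long flat tail.
    return math.exp(-depth / scale)

def steady(depth, max_depth=60):
    # Shape 2: linear decline - each extra ply buys the same reduction.
    return max(0.0, 1.0 - depth / max_depth)

def increasing(depth, knee=40.0, power=3.0):
    # Shape 3: little progress at first, then a steep late fall
    # once the search reaches some critical horizon.
    return 1.0 / (1.0 + (depth / knee) ** power)
```

Under the third form, nearly all of the improvement arrives in a narrow band of depths, which is what would make near-perfect play suddenly reachable.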
Could you all share your thoughts as to what the true shape of this graph is, please?

Thanks for your input!
Human chess is partly about tactics and strategy, but mostly about memory
Uri Blass
Posts: 11042
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Why Chess Might Be Almost "Solved" IMO

Post by Uri Blass »

towforce wrote:Ultimately, chess is about avoiding mistakes. I am defining a mistake as a move that has one of the following effects:

* turns a winning position into a drawn position

* turns a winning position into a losing position

* turns a drawn position into a losing position

A computer that plays perfectly never makes a mistake - and, by my personal definition (which is clearly different from the formal definition), also resolves chess. My definition of "near-perfect" play is making mistakes only very rarely (e.g. 1 mistake per 10,000 moves). The potential size of a chess game is very large (many thousands of ply deep) - but it may be that near-perfect play can be achieved with a relatively short ply depth (the exact depth required will, of course, depend on the quality of the eval - but I would be surprised if the difference between poor eval and good eval was more than the equivalent of an extra 5-10 ply of full-width search).

Whether we are close to near-perfect play (by my definition) depends critically on the shape of the graph that plots ply depth against probability of making a mistake - and it is this I would like to read people's opinions about. Here are some possibilities:

1. Diminishing returns

Code: Select all

Probability |*
Of Mistake  |*
            |*
            | *
            |  *
            |   *
            |    *
            |      *
            |        *
            |          *
            |             *
            |                *
            |                   *
            |                       *
            |                          *
            |                              *
            |                                   *
            |                                         *
            |                                                  *      *
            |
            |
            |
            |
            -----------------------------------------------------------
                 Depth of search
If the above is correct, then it is very surprising indeed that faster computers lead to significantly better play - and that faster computers let programmers gain further speed by removing knowledge from the evaluation. It just doesn't match what has actually happened - so I don't think that it is correct.
I do not think it is correct that programmers remove knowledge from the evaluation because of faster computers.

Programmers may remove knowledge because the knowledge is counterproductive, but I would need to see cases where they removed knowledge that is counterproductive only on faster computers and not on slower ones.

Uri
User avatar
towforce
Posts: 12653
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: Why Chess Might Be Almost "Solved" IMO

Post by towforce »

Uri Blass wrote:I do not think it is correct that programmers remove knowledge from the evaluation because of faster computers. Programmers may remove knowledge because the knowledge is counterproductive, but I would need to see cases where they removed knowledge that is counterproductive only on faster computers and not on slower ones.
I am surprised that you would say this. :?

Surely it is obvious that the deeper a computer can search, the less knowledge it will need? Many pieces of knowledge that are very important to evaluate at a shallow search depth simply won't be needed at a deeper search depth - and hence they can be removed, and the evaluation can run faster - itself increasing the search depth!

You ask for an example - well the obvious one is the human player. They have only a shallow full-width search, and hence require an enormous amount of knowledge. Computers are now better players than humans - and they achieve this with only a tiny fraction of the knowledge that a human has.

Anyway - do you have an opinion as to what the shape of the graph plotting search depth against probability of making a mistake is, please?
Human chess is partly about tactics and strategy, but mostly about memory
Rob

Re: Why Chess Might Be Almost "Solved" IMO

Post by Rob »

Code: Select all

Prob.of
mistake
|******************************
|                        
|                        
|                        
|                        
|                        
|                        
|                              *
 -------------------------------
                depth


Many endgames need hundreds of ply to solve.
User avatar
towforce
Posts: 12653
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: Why Chess Might Be Almost "Solved" IMO

Post by towforce »

Rob wrote:

Code: Select all

Prob.of
mistake
|******************************
|                        
|                        
|                        
|                        
|                        
|                        
|                              *
 -------------------------------
                depth


Many endgames need hundreds of ply to solve.
Are you saying that unless an end game is fully resolved, there is a high (> 0.5) probability that a computer with a weak eval will make a mistake (as defined by me in the first post in this thread) in each move in which it is possible to make a mistake (e.g. not possible if already in a losing position)?

This would strike me as being an over-confident claim.

In any case - you are implying that the correct graph shape is "increasing returns" (the last graph in this thread's initial post).
Human chess is partly about tactics and strategy, but mostly about memory
User avatar
Marek Soszynski
Posts: 587
Joined: Wed May 10, 2006 7:28 pm
Location: Birmingham, England

Re: Why Chess Might Be Almost "Solved" IMO

Post by Marek Soszynski »

My definition of "near-perfect" play is making mistakes only very rarely (e.g. 1 mistake per 10,000 moves).
Note that by your definition missing consecutive one-move mates still counts as "near-perfect" play if the final result of the game is unchanged. So the near-perfection could actually look pretty stupid.
it may be that near-perfect play can be achieved with a relatively short ply depth
What is ply depth? Engines do not all search in the same way or report their searches in the same way. The engines that report the greatest depth of search are not necessarily the strongest, and that is not a problem of evaluation. An engine may look deeply but miss something shallow. Possibly your ideas could apply to brute-force bean counters, but that is not what modern engines are.
Surely it is obvious that the deeper a computer can search, the less knowledge it will need?
No. Here's one example. If an engine doesn't know about "wrong" bishops in endings with rook-pawns, then one extra ply of search depth will make no difference, nor will two, three, four... Suddenly with, say, thirty extra ply the engine will realise what is going on, but not at plies twenty-nine, twenty-eight... All those many extra plies couldn't substitute for that particular piece of knowledge. At the least, your graphs will be more rectilinear than parabolic.
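The point amounts to a step function: for a position decided by a single piece of missing knowledge, extra plies buy nothing until the horizon reaches some critical depth. A toy sketch (the default of 30 is just the illustrative number from the example, not a measured value):

```python
def mistake_prob_single_position(depth, critical_depth=30, has_knowledge=False):
    """Toy model: one position decided by a single fact (e.g. a "wrong" bishop).

    With the fact coded into the eval, the engine plays correctly at any
    depth. Without it, only a search reaching the critical depth discovers
    the truth - plies 1..critical_depth-1 buy nothing at all.
    """
    if has_knowledge:
        return 0.0
    return 1.0 if depth < critical_depth else 0.0
```

A curve built from positions like this is rectilinear, not a smooth parabola - exactly the objection being made here.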
Marek Soszynski
User avatar
towforce
Posts: 12653
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: Why Chess Might Be Almost "Solved" IMO

Post by towforce »

Marek Soszynski wrote:
My definition of "near-perfect" play is making mistakes only very rarely (e.g. 1 mistake per 10,000 moves).
Note that by your definition missing consecutive one-move mates still counts as "near-perfect" play if the final result of the game is unchanged. So the near-perfection could actually look pretty stupid.
Agreed. If you're unhappy about this, then you may expand the definition of a mistake to include a move that isn't the fastest way to a win.
it may be that near-perfect play can be achieved with a relatively short ply depth
What is ply depth? Engines do not all search in the same way or report their searches in the same way. The engines that report the greatest depth of search are not necessarily the strongest, and that is not a problem of evaluation. An engine may look deeply but miss something shallow. Possibly your ideas could apply to brute-force bean counters, but that is not what modern engines are.
I agree that I have simplified the truth to try to get people to focus on a narrow point.

If you're unhappy with "ply depth", then feel free to think in terms of either full-width ply depth, or, if you wish to focus on "modern engines", substitute "positions evaluated per second" for "ply depth".
Surely it is obvious that the deeper a computer can search, the less knowledge it will need?
No. Here's one example. If an engine doesn't know about "wrong" bishops in endings with rook-pawns, then one extra ply of search depth will make no difference, nor will two, three, four... Suddenly with, say, thirty extra ply the engine will realise what is going on, but not at plies twenty-nine, twenty-eight... All those many extra plies couldn't substitute for that particular piece of knowledge. At the least, your graphs will be more rectilinear than parabolic.
Agreed. The graphs represent what would be broad averages over large numbers of positions. Also, they would only be meaningful in positions where mistakes (as defined by myself) are possible.

I also believe, though, that with the higher standard of play that results from deeper searches, positions where particular pieces of knowledge matter are simply less likely to be encountered.
Human chess is partly about tactics and strategy, but mostly about memory
Uri Blass
Posts: 11042
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Why Chess Might Be Almost "Solved" IMO

Post by Uri Blass »

towforce wrote:
Uri Blass wrote:I do not think it is correct that programmers remove knowledge from the evaluation because of faster computers. Programmers may remove knowledge because the knowledge is counterproductive, but I would need to see cases where they removed knowledge that is counterproductive only on faster computers and not on slower ones.
I am surprised that you would say this. :?

Surely it is obvious that the deeper a computer can search, the less knowledge it will need? Many pieces of knowledge that are very important to evaluate at a shallow search depth simply won't be needed at a deeper search depth - and hence they can be removed, and the evaluation can run faster - itself increasing the search depth!

You ask for an example - well the obvious one is the human player. They have only a shallow full-width search, and hence require an enormous amount of knowledge. Computers are now better players than humans - and they achieve this with only a tiny fraction of the knowledge that a human has.

Anyway - do you have an opinion as to what the shape of the graph plotting search depth against probability of making a mistake is, please?
Your example is bad.
Computers clearly have more than a tiny fraction of the knowledge that humans have.


Computers evaluate every piece on the board when they evaluate chess positions.

Mobility functions evaluate the exact number of squares that a bishop or rook can go to.

Humans do not do this: when I reach a position at a leaf of my search during a game, I do not stop to count the number of squares every piece can go to.
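A minimal sketch of the kind of mobility term described here, counting the squares a sliding piece can reach on an 8x8 board - toy code, not taken from any real engine (real engines would use bitboards):

```python
# Sliding directions as (file, rank) deltas.
BISHOP_DIRS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
ROOK_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def mobility(board, square, directions, own_colour):
    """Count squares a sliding piece on `square` can move to.

    board maps (file, rank) -> 'w' or 'b' for occupied squares.
    A capture of an enemy piece counts as a move; an own piece blocks.
    """
    count = 0
    for df, dr in directions:
        f, r = square
        while True:
            f, r = f + df, r + dr
            if not (0 <= f < 8 and 0 <= r < 8):
                break  # ran off the board
            if (f, r) in board:
                if board[(f, r)] != own_colour:
                    count += 1  # capture counts as a reachable square
                break  # blocked either way
            count += 1
    return count

# A rook alone on a1 sees the whole rank and file:
# mobility({}, (0, 0), ROOK_DIRS, 'w') == 14
```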

For your question: I believe in diminishing returns, but I believe that they diminish slowly.

I think that graphs about errors are meaningless, because the number of errors depends on the opponent - a strong opponent makes you play more errors.

I cannot claim that some level makes X errors during a game.

Uri
Rob

Re: Why Chess Might Be Almost "Solved" IMO

Post by Rob »

towforce wrote:
Rob wrote:

Code: Select all

Prob.of
mistake
|******************************
|                        
|                        
|                        
|                        
|                        
|                        
|                              *
 -------------------------------
                depth


Many endgames need hundreds of ply to solve.
Are you saying that unless an end game is fully resolved, there is a high (> 0.5) probability that a computer with a weak eval will make a mistake (as defined by me in the first post in this thread) in each move in which it is possible to make a mistake (e.g. not possible if already in a losing position)?

This would strike me as being an over-confident claim.

In any case - you are implying that the correct graph shape is "increasing returns" (the last graph in this thread's initial post).
My graph is really for a single position. My point is that you need a very high depth to avoid mistakes.

Over all positions it will average out. The resulting graph could still be your first one (the backward S-shape) because of its 'statistical' nature. I mean we could end up with something like the central limit theorem, which says that, whatever the distribution for individual positions, the average over many positions tends towards a Gaussian. Of course I could be very wrong here.
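The averaging claim can be illustrated with a quick Monte Carlo sketch: give each position a step-function mistake curve with its own critical depth drawn at random, and the average over many positions comes out as a smooth decline - the distribution and its parameters below are arbitrary choices for illustration only:

```python
import random

def average_mistake_curve(n_positions=10_000, max_depth=60, seed=0):
    """Average per-depth mistake probability over many toy positions.

    Each position is a step function: certain mistake below its own
    critical depth, no mistake at or above it. Critical depths are
    drawn from a Gaussian purely for illustration.
    """
    rng = random.Random(seed)
    critical = [rng.gauss(30, 10) for _ in range(n_positions)]
    # Fraction of positions still unsolved at each search depth.
    return [
        sum(1 for c in critical if d < c) / n_positions
        for d in range(max_depth + 1)
    ]
```

Individually the curves are hard steps, but the aggregate is a smooth S-shaped fall, so a smooth published graph would not contradict the per-position step behaviour.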
Dann Corbit
Posts: 12803
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Why Chess Might Be Almost "Solved" IMO

Post by Dann Corbit »

towforce wrote:Ultimately, chess is about avoiding mistakes. I am defining a mistake as a move that has one of the following effects:

* turns a winning position into a drawn position

* turns a winning position into a losing position

* turns a drawn position into a losing position

A computer that plays perfectly never makes a mistake - and, by my personal definition (which is clearly different from the formal definition), also resolves chess. My definition of "near-perfect" play is making mistakes only very rarely (e.g. 1 mistake per 10,000 moves). The potential size of a chess game is very large (many thousands of ply deep) - but it may be that near-perfect play can be achieved with a relatively short ply depth (the exact depth required will, of course, depend on the quality of the eval - but I would be surprised if the difference between poor eval and good eval was more than the equivalent of an extra 5-10 ply of full-width search).

Whether we are close to near-perfect play (by my definition) depends critically on the shape of the graph that plots ply depth against probability of making a mistake - and it is this I would like to read people's opinions about. Here are some possibilities:

1. Diminishing returns

Code: Select all

Probability |*
Of Mistake  |*
            |*
            | *
            |  *
            |   *
            |    *
            |      *
            |        *
            |          *
            |             *
            |                *
            |                   *
            |                       *
            |                          *
            |                              *
            |                                   *
            |                                         *
            |                                                  *      *
            |
            |
            |
            |
            -----------------------------------------------------------
                 Depth of search
If the above is correct, then it is very surprising indeed that faster computers lead to significantly better play - and that faster computers let programmers gain further speed by removing knowledge from the evaluation. It just doesn't match what has actually happened - so I don't think that it is correct.

Note: IMO, the above graph also covers the "S" shape (a backward "S" in this case), where you get poor returns at first, then good returns (the steep part of the "S"), then poor returns beyond a certain point.

There are two other possibilities - both of which imply that we are actually close to resolving chess (by my definition):

2. Steady returns

Code: Select all

Probability |*
Of Mistake  |  *
            |    *
            |      *
            |        *
            |          * 
            |            *
            |              *
            |                *
            |                  *
            |                    *
            |                      *
            |                        *
            |                          *
            |                            *
            |                              *
            |                                *
            |                                  *
            |                                    *
            |                                      *
            |                                        *
            |                                          *
            |                                            *
            -----------------------------------------------------------
                 Depth of search
3. Increasing returns

Code: Select all

Probability |*   *
Of Mistake  |          *
            |               *
            |                   *
            |                      *
            |                        * 
            |                          *
            |                           *
            |                            *
            |                             *
            |                              *
            |                               *
            |                               *
            |                                *
            |                                *
            |                                *
            |                                 *
            |                                 *
            |                                 *
            |                                 *
            |                                  *
            |                                  *
            |                                  *
            -----------------------------------------------------------
                 Depth of search
Could you all share your thoughts as to what the true shape of this graph is, please?

Thanks for your input!
Even the best chess engines make mistakes all the time. I guess that if two strong chess engines play a game of 100 moves, there are at least 30 mistakes in it. Consider this analysis by Rybka:

Code: Select all

[D]8/4p1p1/4P1P1/1p1p4/pP1k1Pp1/P4prb/PK1P1rp1/1BR3Bn w - - dm 12; id "ChestDB.4787"; bm Bc2;

48) Bf5;                
    Avoid move: 
    Best move (Rybkav2.3.2a.w32): Bb1-c2
    Not found in: 06:25
      5	00:00	          43	44.032	 0.00	Bb1f5
      6	00:00	          53	54.272	 0.00	Bb1f5
      7	00:00	          67	68.608	 0.00	Bb1f5
      8	00:00	          92	94.208	 0.00	Bb1f5
      9	00:00	         176	180.224	 0.00	Bb1f5
     10	00:00	         281	287.744	 0.00	Bb1f5
     11	00:00	         477	488.448	 0.00	Bb1f5
     12	00:00	         745	762.880	 0.00	Bb1f5
     13	00:00	       1.075	1.100.800	 0.00	Bb1f5
     14	00:00	       1.885	120.640	 0.00	Bb1f5
     15	00:00	       3.647	116.704	 0.00	Bb1f5
     16	00:00	       5.568	118.784	 0.00	Bb1f5
     17	00:00	       8.867	114.934	 0.00	Bb1f5
     18	00:00	      14.299	116.207	 0.00	Bb1f5
     19	00:00	      23.360	101.789	 0.00	Bb1f5
     20	00:01	      46.764	98.734	 0.00	Bb1f5
     21	00:01	      90.533	98.833	 0.00	Bb1f5
     22	00:04	     307.672	95.529	 0.00	Bb1f5
     23	00:07	     619.780	97.624	 0.00	Bb1f5
     24	00:12	   1.149.224	98.966	 0.00	Bb1f5
     25	00:21	   2.069.888	99.739	 0.00	Bb1f5
     26	00:40	   3.822.880	99.260	 0.00	Bb1f5
     27	01:14	   7.027.719	98.222	 0.00	Bb1f5
     27	01:14	   7.096.841	98.307	+2.18	Bb1c2 Kd4c4 Bc2f5+ Kc4d4 Kb2c2 Kd4c4 Kc2d1+ Kc4d4 Bf5c2 Kd4c4 Bc2b1+ Kc4d4 Rc1c2
     27	03:24	  17.274.772	86.772	+M12	Bb1c2 Kd4c4 Bc2f5+ Kc4d4 Kb2c2 Kd4c4 Kc2d1+ Kc4d4 Bf5c2 Kd4c4 Bc2b1+ Kc4d4 Rc1c2 Kd4d3 Rc2c5+ Kd3d4 Kd1c2 Kd4e4 Kc2c3+ Ke4xf4 Rc5xd5 Rf2f1 Bg1e3+
   1/1/2008 7:07:31 AM, Time for this analysis: 00:06:25, Rated time: 5:08:00

It took 1:14 to find the right move.  So for 74 seconds, the idea was the wrong one.  Now back up 20 plies from this position.  Will the engine see it?  Almost for sure, the answer is no.

In order to solve chess, there is no such thing as almost.  A solution to a game is a formal proof of the outcome.  Solving chess and playing perfect chess are also not the same thing.  It might be possible to prove that chess is a draw and yet not be able to provide a move sequence to demonstrate it.

Here is another position, and in this case, playing at the classical 40 moves in 2 hours, it would be likely to miss it:
[d]8/8/6p1/7b/7R/8/p1p5/k1K5 w - - dm 12; id "ChestDB.4698"; bm Rh1;

85) ;                   
    Avoid move: 
    Best move (Rybkav2.3.2a.w32): Rh4-h1
    Not found in: 06:25
      5	00:00	         658	673.792	-1.02	Kc1xc2 Bh5f3
      6	00:00	       1.245	1.274.880	-0.97	Kc1xc2 Bh5f3 Rh4f4
      7	00:00	       2.067	124.506	-0.98	Kc1xc2 Bh5d1+ Kc2c1 Bd1f3 Rh4f4
      8	00:00	       2.842	88.188	-0.47	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Be4b1
      9	00:00	       3.226	100.103	-0.47	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Be4b1 Rh2f2
     10	00:00	       3.850	119.466	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Be4b1 Rh2f2 g6g5
     11	00:00	       4.417	94.229	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Be4b1 Rh2f2 g6g5 Rf2g2
     12	00:00	       7.005	64.622	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2
     13	00:00	       9.088	65.536	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2
     14	00:00	      12.598	51.395	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2
     15	00:01	      18.129	56.425	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2
     16	00:01	      29.203	53.021	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2
     17	00:01	      41.201	51.830	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2
     18	00:01	      58.731	53.410	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5
     19	00:02	      85.807	53.511	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5
     20	00:02	     118.394	53.478	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5
     21	00:03	     165.140	54.637	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5
     22	00:05	     244.423	52.848	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5
     23	00:06	     316.574	54.019	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     24	00:07	     397.652	56.398	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     25	00:09	     511.882	58.638	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     26	00:11	     663.806	60.750	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     27	00:15	     857.989	60.004	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     28	00:18	   1.060.241	61.376	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     29	00:21	   1.296.507	61.925	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     30	00:26	   1.607.878	63.898	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     31	00:31	   2.027.925	66.051	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     32	00:39	   2.589.804	67.162	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     33	00:47	   3.137.658	68.065	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     34	00:55	   3.792.785	69.566	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     35	01:05	   4.575.989	71.248	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     36	01:17	   5.584.875	73.303	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     37	01:31	   6.798.009	75.561	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     38	01:49	   8.346.199	77.584	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     39	02:08	  10.059.642	79.640	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     40	02:46	  13.348.007	81.685	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     41	03:24	  16.721.936	83.292	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     42	03:55	  19.808.866	85.491	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     43	04:33	  23.501.612	87.411	 0.00	Kc1xc2 Bh5f3 Rh4h2 Bf3e4+ Kc2b3 Ka1b1 Rh2b2+ Kb1c1 Kb3xa2 g6g5 Rb2b4 Be4d5+ Ka2a3 Bd5f3
     43	04:33	  23.521.532	87.426	+2.50	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3
     43	04:34	  23.563.016	87.411	+M12	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3 Re1g1 g3g2 Rg1e1 g2g1Q Re1xg1 Bd1e2 Kc1xc2+ Be2d1+ Rg1xd1+
     44	04:35	  23.750.929	87.632	+M12	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3 Re1g1 g3g2 Rg1e1 g2g1Q Re1xg1 Bd1e2 Kc1xc2+ Be2d1+ Rg1xd1+
     45	04:38	  24.123.835	88.056	+M12	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3 Re1g1 g3g2 Rg1e1 g2g1Q Re1xg1 Bd1e2 Kc1xc2+ Be2d1+ Rg1xd1+
     46	04:42	  24.625.447	88.794	+M12	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3 Re1g1 g3g2 Rg1e1 g2g1Q Re1xg1 Bd1e2 Kc1xc2+ Be2d1+ Rg1xd1+
     47	04:49	  25.554.759	89.794	+M12	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3 Re1g1 g3g2 Rg1e1 g2g1Q Re1xg1 Bd1e2 Kc1xc2+ Be2d1+ Rg1xd1+
     48	04:56	  26.698.799	91.464	+M12	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3 Re1g1 g3g2 Rg1e1 g2g1Q Re1xg1 Bd1e2 Kc1xc2+ Be2d1+ Rg1xd1+
     49	05:39	  30.920.651	92.631	+M12	Rh4h1 Bh5f3 Rh1e1 Bf3d1 Re1g1 g6g5 Rg1h1 Bd1f3 Rh1e1 Bf3d1 Re1g1 g5g4 Rg1e1 g4g3 Re1g1 g3g2 Rg1e1 g2g1Q Re1xg1 Bd1e2 Kc1xc2+ Be2d1+ Rg1xd1+
   1/1/2008 11:05:21 AM, Time for this analysis: 00:06:22, Rated time: 9:05:25

Of course, a 7 man tablebase file destroys this one (and a 6 man tablebase set would help immensely  -- I used only 5 man tablebase files in analyzing).  But the point is that for quite a while Rybka was willing to make what is clearly a wrong move.

I think that we are about as close to solving chess as having pulled an eyedropper of water from the ocean, we claim to have drained it dry.

On the other hand, there are always alternatives.  The main difficulty is the assumption that the solution to chess lies deep in the tree (and there is no proof of that).