Othello is solved (weakly)

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

ernest
Posts: 2053
Joined: Wed Mar 08, 2006 8:30 pm

Othello is solved (weakly)

Post by ernest »

Just seen in the German CSS-Forum:

https://arxiv.org/pdf/2310.19387.pdf
Jouni
Posts: 3666
Joined: Wed Mar 08, 2006 8:15 pm
Full name: Jouni Uski

Re: Othello is solved (weakly)

Post by Jouni »

Chess is solved (practically). :)
Jouni
abulmo2
Posts: 479
Joined: Fri Dec 16, 2016 11:04 am
Location: France
Full name: Richard Delorme

Re: Othello is solved (weakly)

Post by abulmo2 »

And they use my engine (Edax) to solve it :-)
Richard Delorme
Jouni
Posts: 3666
Joined: Wed Mar 08, 2006 8:15 pm
Full name: Jouni Uski

Re: Othello is solved (weakly)

Post by Jouni »

Yes, really stunning, because Othello has about 10^28 positions! But I don't understand the term "weakly". Either it's solved or it's not.
Jouni
User avatar
Ras
Posts: 2701
Joined: Tue Aug 30, 2016 8:19 pm
Full name: Rasmus Althoff

Re: Othello is solved (weakly)

Post by Ras »

Jouni wrote: Mon Nov 06, 2023 10:39 am But I don't understand the term "weakly". Either it's solved or it's not.
A two-player game can be solved on several levels:
Ultra-weak solution
Prove whether the first player will win, lose or draw from the initial position, given perfect play on both sides. This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any moves of the perfect play.
Weak solution
Provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game.
Strong solution
Provide an algorithm that can produce perfect moves from any position, even if mistakes have already been made on one or both sides.

Source: https://en.wikipedia.org/wiki/Solved_game#Overview
Rasmus Althoff
https://www.ct800.net
jkominek
Posts: 72
Joined: Tue Sep 04, 2018 5:33 am
Full name: John Kominek

Re: Othello is solved (weakly)

Post by jkominek »

abulmo2 wrote: Sun Nov 05, 2023 9:35 pm And they use my engine (Edax) to solve it :-)
The part of the paper that causes me pause is the first bullet point of section 3.3 Modifications to Edax.
We disabled aspiration search during iterative deepening. In other words, when narrowing down alpha and
beta for a solving, we modified it to take wider alpha and beta values during the shallow iterations of iterative
deepening. This is because there is a tendency for the computation time to increase when the principal variation
is updated. Therefore, during shallow iterations, even if a move results in a fail-high, the search should not be
terminated, and instead, the best move should be sought.
The author is stating that the selective pruning techniques that break the pure alpha-beta search required of a proving attempt, aspiration search in particular, have been disabled. To me the description does not read that way. But I'm not an expert. How does widening the aspiration schedule eliminate all potential oversights? This is a sticking point, and the entire computational proof hinges on it. I anticipate that this is the crucial part of the paper the author will have to clarify in order to get past the reviewers.

Congratulations on Edax by the way. The program itself and its usage in this solving attempt is impressive.
abulmo2
Posts: 479
Joined: Fri Dec 16, 2016 11:04 am
Location: France
Full name: Richard Delorme

Re: Othello is solved (weakly)

Post by abulmo2 »

jkominek wrote: Mon Nov 20, 2023 12:37 am
abulmo2 wrote: Sun Nov 05, 2023 9:35 pm And they use my engine (Edax) to solve it :-)
The part of the paper that causes me pause is the first bullet point of section 3.3 Modifications to Edax.
We disabled aspiration search during iterative deepening. In other words, when narrowing down alpha and
beta for a solving, we modified it to take wider alpha and beta values during the shallow iterations of iterative
deepening. This is because there is a tendency for the computation time to increase when the principal variation
is updated. Therefore, during shallow iterations, even if a move results in a fail-high, the search should not be
terminated, and instead, the best move should be sought.
The author is stating that the selective pruning techniques that break the pure alpha-beta search required of a proving attempt, aspiration search in particular, have been disabled. To me the description does not read that way. But I'm not an expert. How does widening the aspiration schedule eliminate all potential oversights? This is a sticking point, and the entire computational proof hinges on it. I anticipate that this is the crucial part of the paper the author will have to clarify in order to get past the reviewers.

Congratulations on Edax by the way. The program itself and its usage in this solving attempt is impressive.
Thanks for the congratulations.

Aspiration search is not a selective pruning technique. I have not looked at the code as modified by the author, but he just claims it makes the engine faster for its particular purpose.
Stock Edax disables its selective pruning technique (i.e. ProbCut) during its last stage, when it is solving the position. Edax can already solve any position if enough time is allotted. I have tested it on thousands of positions (from WTHOR files) and I am pretty sure its endgame solver is correct.
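For readers unfamiliar with ProbCut: it prunes on a statistical prediction, so it can be wrong by design, which is exactly why it must be off in an exact solving search. A toy sketch of the idea (hypothetical constants, not Edax's actual code):

```python
# Hypothetical ProbCut-style check (illustration only, not Edax's code).
# A shallow search result v_shallow predicts the deep search result with
# some standard error SIGMA; if the prediction is confidently outside
# (alpha, beta), the expensive deep search is skipped.
SIGMA = 3.0   # assumed model error of the shallow search, in discs
T = 1.5       # assumed confidence threshold, in standard deviations

def probcut_prune(v_shallow, alpha, beta):
    """Return the predicted cut type if pruning is statistically
    justified, or None to fall through to the full-depth search."""
    if v_shallow >= beta + T * SIGMA:
        return "fail-high"   # deep search would almost surely fail high
    if v_shallow <= alpha - T * SIGMA:
        return "fail-low"    # deep search would almost surely fail low
    return None              # too close to call: search at full depth
```

Because the cut is only probabilistic, a solver that keeps ProbCut enabled would not constitute a proof; disabling it in the exact stage restores pure alpha-beta.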
Richard Delorme
abulmo2
Posts: 479
Joined: Fri Dec 16, 2016 11:04 am
Location: France
Full name: Richard Delorme

Re: Othello is solved (weakly)

Post by abulmo2 »

abulmo2 wrote: Mon Nov 20, 2023 3:30 am Aspiration search is not a selective pruning technique. I have not looked at the code as modified by the author, but he just claims it makes the engine faster for its particular purpose.
I have now looked at the modified code, and I approve of Takizawa's modification.
In 8x8 Othello the score ranges from -64 to +64 (counting the difference in discs and giving the remaining empty squares to the winner). So at the start of the search the usual alpha-beta bounds are -64, +64. Some people just want to know whether the position is won, drawn or lost. It is then enough to call the search with bounds -1, +1 or -3, +3 (I guess this is what Takizawa did at some stage of his proof). Reducing the alpha-beta bounds reduces the tree complexity and makes the search faster. So Edax has an option to change the alpha-beta bounds at the start of the search.
During iterative deepening, Edax uses these alpha-beta bounds, whereas Takizawa's modification uses -64, +64. At the final depth (equal to the number of empty squares), both programs use the alpha-beta bounds provided by the optional settings. Aspiration search is never disabled in Takizawa's version. The disadvantage of using a small alpha-beta window like -3, +3 is that if, for instance, your move is losing below alpha, a completely crap random move can be returned by the search. If at the final depth the score is within the alpha-beta window, the previous shallow iterations could have been useless and only crap is in the hash table, so the final search can take much longer to terminate.
Takizawa's modification makes the search faster when using a reduced alpha-beta window and changes nothing in the general case.
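To illustrate the effect of the root window (a toy negamax sketch on a made-up tree, not Edax code): the full window -64, +64 yields the exact disc difference, while a narrowed window like -1, +1 only answers the win/draw/loss question, cutting off more of the tree.

```python
def alphabeta(node, alpha, beta):
    # Toy negamax alpha-beta: a node is either a leaf score (an int,
    # from the side to move's point of view) or a list of child nodes.
    if isinstance(node, int):
        return node
    best = -64                        # worst possible Othello score
    for child in node:
        score = -alphabeta(child, -beta, -alpha)
        if score > best:
            best = score
            alpha = max(alpha, best)
        if alpha >= beta:             # cutoff: remaining moves are skipped
            break
    return best

tree = [[3, -2], [5, 1]]              # made-up two-ply game tree
exact = alphabeta(tree, -64, +64)     # exact score
sign = alphabeta(tree, -1, +1)        # only win (>0), draw (0) or loss (<0)
```

With the narrow window the returned value outside (alpha, beta) is only a bound, not the true score, which is why a fail-low can leave a useless "best" move behind, as described above.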

I tried it:

Code: Select all

From the following command:
$ edax --alpha -1 --beta 1 -n 1
[...]
> bench

I get

before:
37 positions solved: 2954875019 nodes in  1:09.356 (42604461 nodes/s).

after:
37 positions solved: 2941556212 nodes in  1:08.206 (43127529 nodes/s).

Richard Delorme
syzygy
Posts: 5743
Joined: Tue Feb 28, 2012 11:56 pm

Re: Othello is solved (weakly)

Post by syzygy »

jkominek wrote: Mon Nov 20, 2023 12:37 am
abulmo2 wrote: Sun Nov 05, 2023 9:35 pm And they use my engine (Edax) to solve it :-)
The part of the paper that causes me pause is the first bullet point of section 3.3 Modifications to Edax.
We disabled aspiration search during iterative deepening. In other words, when narrowing down alpha and
beta for a solving, we modified it to take wider alpha and beta values during the shallow iterations of iterative
deepening. This is because there is a tendency for the computation time to increase when the principal variation
is updated. Therefore, during shallow iterations, even if a move results in a fail-high, the search should not be
terminated, and instead, the best move should be sought.
The author is stating that the selective pruning techniques that break the pure alpha-beta search required of a proving attempt, aspiration search in particular, have been disabled. To me the description does not read that way. But I'm not an expert. How does widening the aspiration schedule eliminate all potential oversights? This is a sticking point, and the entire computational proof hinges on it. I anticipate that this is the crucial part of the paper the author will have to clarify in order to get past the reviewers.
The way I read it, he lets Edax search with a wider window because that was found to reduce the number of fail highs, which were found to increase total search time.

So basically the author states that well-known techniques for speeding up the search are not working. I wonder if he has really tested this other than by trying a dozen test positions and drawing the wrong conclusion from insufficient evidence.
syzygy
Posts: 5743
Joined: Tue Feb 28, 2012 11:56 pm

Re: Othello is solved (weakly)

Post by syzygy »

abulmo2 wrote: Mon Nov 20, 2023 3:16 pm The disadvantage of using a small alpha-beta window like -3, +3 is that if, for instance, your move is losing below alpha, a completely crap random move can be returned by the search. If at the final depth the score is within the alpha-beta window, the previous shallow iterations could have been useless and only crap is in the hash table, so the final search can take much longer to terminate.
It wouldn't be a crap move but just potentially not the best move that would have been found with a full window. I suppose your engine uses zero-window searches, so this is only about the PV. If a change in the PV would indeed slow down the search too much (but your numbers on 37 positions do not really prove this), perhaps this could be addressed by something like internal iterative deepening (if you're not doing that already).

But why isn't it sufficient to just show that the result is >= 0 (or <= 0, as needed)?
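For reference, the zero-window idea mentioned above is the heart of principal variation search: the first move gets the full window, every later move gets a minimal probe window and is re-searched only if the probe fails high. A toy sketch on the same kind of nested-list tree (illustration only, not Edax code):

```python
def pvs(node, alpha, beta):
    # Toy principal variation search: a node is either a leaf score
    # (an int, from the side to move's point of view) or a child list.
    if isinstance(node, int):
        return node
    first = True
    for child in node:
        if first:
            score = -pvs(child, -beta, -alpha)   # full window for PV move
            first = False
        else:
            # zero-window probe: can this move beat alpha at all?
            score = -pvs(child, -alpha - 1, -alpha)
            if alpha < score < beta:
                # probe failed high: re-search with a real window
                score = -pvs(child, -beta, -score)
        if score > alpha:
            alpha = score
        if alpha >= beta:                        # beta cutoff
            break
    return alpha

tree = [[3, -2], [5, 1]]   # made-up two-ply game tree
```

Since non-PV moves are already searched with zero windows, the choice of root window mostly affects how the PV itself is established, which matches the observation that this is "only about the PV".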

Code: Select all

before:
37 positions solved: 2954875019 nodes in  1:09.356 (42604461 nodes/s).

after:
37 positions solved: 2941556212 nodes in  1:08.206 (43127529 nodes/s).
That is very close. But it does suggest that the modification at least does not hurt the search.