Proving the correctness of the algorithm

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Bill Rogers
Posts: 3562
Joined: Thu Mar 09, 2006 3:54 am
Location: San Jose, California

Re: Proving the correctness of the algorithm

Post by Bill Rogers »

Fred
As long as you don't use any random number generators within your program and search to the same fixed depth every time, then nothing should change. It makes no difference whether you use minimax, negamax, or alpha-beta; the results should remain the same, except alpha-beta should find the answer quicker.
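That equivalence can be sketched on a toy tree (illustrative Python, not engine code; the tree encoding is made up for the example). All three searches return the same root value; alpha-beta just visits fewer nodes:

```python
# Toy game tree: internal nodes are lists of children, leaves are ints
# (scores from the first player's point of view).
def minimax(node, maximizing):
    if isinstance(node, int):
        return node
    if maximizing:
        return max(minimax(c, False) for c in node)
    return min(minimax(c, True) for c in node)

def negamax(node, color):
    # color is +1 for the maximizing side, -1 for the opponent
    if isinstance(node, int):
        return color * node
    return max(-negamax(c, -color) for c in node)

def alphabeta(node, color, alpha=-10**9, beta=10**9):
    # Negamax formulation of alpha-beta; same root value, fewer nodes.
    if isinstance(node, int):
        return color * node
    best = -10**9
    for c in node:
        best = max(best, -alphabeta(c, -color, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:
            break  # cutoff: the rest of this node cannot matter
    return best

tree = [[3, [5, 1]], [6, [0, 9]], [-2, 4]]
assert minimax(tree, True) == negamax(tree, 1) == alphabeta(tree, 1)
```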
Bill
JVMerlino
Posts: 1357
Joined: Wed Mar 08, 2006 10:15 pm
Location: San Francisco, California

Re: Proving the correctness of the algorithm

Post by JVMerlino »

Aaron Becker wrote:You really don't need a sophisticated evaluation function to make a reasonably strong engine. I'm working on an engine that currently uses only material and piece square tables in its evaluation, and it scores 291/300 on WAC. Of course this is with a hash table and a more sophisticated search than plain alpha-beta, but these problems should all be solvable if your implementation is correct.
Hi Aaron,

May I ask how long you tested each WAC position for, and on what hardware? My engine definitely has more than just material and PSTs, but it gets only 271/300 at 10 seconds per move on a P4-3.0GHz. If I test at one minute per move it gets 286/300.

jm


Aaron Becker
Posts: 292
Joined: Tue Jul 07, 2009 4:56 am

Re: Proving the correctness of the algorithm

Post by Aaron Becker »

The 291/300 score was 30s per move on a 1.8 GHz Core Duo chip (the older 32 bit variety). With 5s per move it scored 276/300.

edit: I would speculate that a sophisticated eval might actually hurt you on WAC. The problems are tactical, so the extra time spent in evaluation might be profitably spent searching more nodes.
plattyaj

Re: Proving the correctness of the algorithm

Post by plattyaj »

Aaron Becker wrote:The 291/300 score was 30s per move on a 1.8 GHz Core Duo chip (the older 32 bit variety). With 5s per move it scored 276/300.

edit: I would speculate that a sophisticated eval might actually hurt you on WAC. The problems are tactical, so the extra time spent in evaluation might be profitably spent searching more nodes.
True - the other day I ripped all eval except material out of Schola to start from scratch and it did better on WAC than ever before. The few that it failed to get were the only ones where it didn't mate or win material quickly.

Andy.
Fguy64
Posts: 814
Joined: Sat May 09, 2009 4:51 pm
Location: Toronto

Re: Proving the correctness of the algorithm

Post by Fguy64 »

I find a lot of information on the net about the WAC suite of test positions, but what is the source? Is there a definitive source or origin for this?
JVMerlino
Posts: 1357
Joined: Wed Mar 08, 2006 10:15 pm
Location: San Francisco, California

Re: Proving the correctness of the algorithm

Post by JVMerlino »

plattyaj wrote:
Aaron Becker wrote:The 291/300 score was 30s per move on a 1.8 GHz Core Duo chip (the older 32 bit variety). With 5s per move it scored 276/300.

edit: I would speculate that a sophisticated eval might actually hurt you on WAC. The problems are tactical, so the extra time spent in evaluation might be profitably spent searching more nodes.
True - the other day I ripped all eval except material out of Schola to start from scratch and it did better on WAC than ever before. The few that it failed to get were the only ones where it didn't mate or win material quickly.

Andy.
Amusing. I tried that, with the 29 positions that it didn't find in 10 seconds before, and it found another 11.

I had no idea what a love/hate relationship writing a chess engine would be.... :roll:

jm
JVMerlino
Posts: 1357
Joined: Wed Mar 08, 2006 10:15 pm
Location: San Francisco, California

Re: Proving the correctness of the algorithm

Post by JVMerlino »

Fguy64 wrote:I find a lot of information on the net about the WAC suite of test positions, but what is the source? Is there a definitive source or origin for this?
I have links to several test suites, including WAC, on my engine's site:

http://computer-chess.org/doku.php?id=c ... ddin:index

jm
Fguy64
Posts: 814
Joined: Sat May 09, 2009 4:51 pm
Location: Toronto

Re: Proving the correctness of the algorithm

Post by Fguy64 »

Well, I went through a dozen or so of the WAC positions, and another dozen or so mates and combinations from a Reinfeld book, and did well. fchess even identified a couple of errors in the Reinfeld book.

And spotted a few obscure bugs in my program in the process. In particular there was one where my isSquareAttacked method didn't spot an attack from a8 to h1 and vice versa, because it couldn't tell whether it was a top-left-to-bottom-right attack or a top-right-to-bottom-left attack, since 63 is divisible by both 7 and 9. :o
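A sketch of one way around that ambiguity, assuming the usual 0..63 indexing with a8 = 0 and h1 = 63 (the function name and encoding here are illustrative, not fchess's actual code): since the square difference 63 is divisible by both 7 and 9, the difference alone can't identify the diagonal, but comparing file and rank deltas can:

```python
# Squares are 0..63, rank-major from the top: a8 = 0, h8 = 7, a1 = 56, h1 = 63.
def same_diagonal(sq1, sq2):
    # The index difference alone is ambiguous (63 % 7 == 63 % 9 == 0), so
    # decompose into file and rank: two distinct squares share a diagonal
    # exactly when |file delta| == |rank delta|.
    file1, rank1 = sq1 % 8, sq1 // 8
    file2, rank2 = sq2 % 8, sq2 // 8
    return sq1 != sq2 and abs(file1 - file2) == abs(rank1 - rank2)

assert same_diagonal(0, 63)      # a8 and h1: the long diagonal
assert not same_diagonal(0, 7)   # a8 and h8: difference 7, but same rank
assert not same_diagonal(0, 56)  # a8 and a1: difference 56, but same file
```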

So, I guess I am in pretty good shape then. Pleased actually, considering just a few months ago even basic recursion made my head hurt. Thanks again to all. Onwards and upwards.
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: Proving the correctness of the algorithm

Post by sje »

Fguy64 wrote:I find a lot of information on the net about the WAC suite of test positions, but what is the source? Is there a definitive source or origin for this?
Turn on Java/Javascript and go to:

http://idisk.mac.com/chessnotation-Public?view=web

The EPD test suites are in the EPD folder.
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Proving the correctness of the algorithm

Post by diep »

Bill Rogers wrote:Fred
As long as you don't use any random number generators within your program and search to the same fixed depth every time, then nothing should change. It makes no difference whether you use minimax, negamax, or alpha-beta; the results should remain the same, except alpha-beta should find the answer quicker.
Bill
Hi,

Such theoretic models are not correct in practice.

In computer chess we use a quiescence search which prunes on beta, so that is asymmetric pruning, so to speak.
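A minimal sketch of that asymmetric pruning (toy Python, not Diep's code; a "position" here is simply a static eval for the side to move plus a list of capture successors, each scored from the new mover's point of view): the stand-pat score alone can already cut on beta before a single capture is searched.

```python
# pos = (static_eval_for_side_to_move, [child_positions_after_each_capture])
def qsearch(pos, alpha, beta):
    stand_pat, captures = pos
    if stand_pat >= beta:
        return beta                 # beta cutoff on the static eval alone
    alpha = max(alpha, stand_pat)   # standing pat is always an option
    for child in captures:
        score = -qsearch(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Quiet start, one losing and one winning capture for the opponent to answer:
print(qsearch((0, [(-5, []), (3, [])]), -100, 100))
# A huge static eval fails high immediately, no captures examined:
print(qsearch((50, []), -100, 10))
```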

Further, plain alpha-beta without a hashtable can't get very deep.

A hashtable, however, causes massive problems.

In combination with nullmove, that alone already CAN cause a problem for getting the exact mainline, and therefore the same root score.

In practical tests with Diep, using only a controlled parallel search with nullmove, hashtables, alpha-beta and a good move ordering, we got nearly always (but not in 100% of the cases) the same line as alpha-beta without nullmove (but with hashtable, of course).

That has, however, to do with the fact that the search depths reached at the time of those tests were not big (not more than 10 ply).

At today's search depths the effects are far bigger, so if we redid the same tests, it would most likely be very complicated to get the same root result each time as a plain alpha-beta search without nullmove and without hashtable would give.

A completely theoretic approach is simply not practical, as it would eat too many nodes.

Determinism can, however, be achieved to a very high degree at small search depths. At today's search depths such plans are totally unthinkable, though.

* with "controlled" I try to express that the processors listen to each other, instead of uncontrollably searching just *something* without looking at the other CPUs.

** note I skip the extension problem of what to do when in check; we could have transpositions.

For example, after e4 e5 Nc3 Bc5 Qf3 Nc6 Qxf7+, Diep can't evaluate when in check, so I need to extend that to Kxf7.

However, after e4 e5 Qf3 Bc5 Qxf7+ Kxf7 Nc3 we have also had 7 ply, with no need to extend later on.

I'm not extending in Diep upon GIVING a check by default, of course. Too expensive. I extend upon walking away out of check.

You see, you can theoretically forever raise some tiny issue for which there is a solution, but one that cripples a program so fully that it's not even interesting to do experiments then.