Repetition detection structure.

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

bob
Posts: 20636
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Repetition detection structure.

Post by bob » Mon Mar 03, 2008 8:46 pm

michiguel wrote:
wgarvin wrote:
hristo wrote:Here I disagree with you. The "other things" that can be improved are never-ending and it makes perfect sense to improve the little things that you are certain of, while still dwelling on the heavy/important optimization issues. This is a practical approach to improving software and it makes good sense, IMO.
It's important not to lose sight of the fact that every optimization has a cost. There's a tension between "optimization" and all the other factors that lead to good code. Optimizing tends to make your code harder to read, harder to debug, harder to maintain and harder to change.

Sometimes, the piece of code affected by the optimization is very small and localized and doesn't mess up the program very much, so it's worth doing even if the gain is small. Or maybe it affects a lot of code, but you can get big performance gains by doing the optimization, and so it's worth dealing with the decreased readability and maintainability. (Just remember to put comments in the optimized code so that when you come back to fix a bug in it 6 months later you can figure out wtf it does!) And still other times, the gain from the optimization is small (like, less than 0.51% ?) and you are better off just writing the simple code that you will be able to maintain more easily later.

Simplicity is a very valuable thing when programming. A very valuable thing. Many people overlook it. Just remember that when you're optimizing some code and it's growing to be 2x as long and 3x as complex as the original! Optimization has costs, and you only want to pay those costs if they are justified by real and significant performance gains.
If I were a faculty at a computer science dept., I would print this message and hand it to the students.

Putting things in terms of "costs" is an interesting way to look at it. Some costs are short term (programming time spent + debugging), but some are long term (readability, maintainability, time spent fixing bugs etc.).

Miguel
That is the primary reason I have always recommended "write first, optimize later. Way later." I am just starting to optimize some parts of Crafty now that they have "withstood the test of time" and seem to be functional as planned.

Nothing wrong with considering speed when designing things, but readability comes first for a long time.

hgm
Posts: 23772
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Repetition detection structure.

Post by hgm » Wed Mar 05, 2008 7:04 pm

hristo wrote:This is fascinating!
I don't understand how you are going to encode the 'messages' so that they are thread safe, but that is probably because I'm missing something.

I did go back and reread all of the posts related to SMP (within this thread) and you outline a simple strategy that might be beneficial when searching for and setting up a split point.
Well, the scheme was originally discussed here some time ago in a thread called "reinventing the SMP wheel". If we really want to elaborate on that, it deserves a separate thread.
What bothers me, however, are the following aspects:
1) Some data must be shared between the threads.
2) The shared data must be protected by a mutex, or semaphore, if it is used in non-atomic operations.
3) What is the data that will have to be protected and how invasive is such protection? (!!)

In a general sense it is much better to reduce the contention for singular resources in an SMP environment. IOW, "lock -> divorce -> unlock -> split and after that the 'divorce' gets to run wild" is better than "checking for chastity too often".
If you don't copy any data during a split then it might become expensive to ensure that the split occurs without errors, unless you introduce lock/unlock in places that are not part of the split.
The basic idea was to protect the code for a thread claiming ownership of a node (on a hash miss, or on an unsatisfactory hit on a node that is currently not being searched) by a semaphore residing in the hash bucket. Hash probing is not critical. Replacement is (so as not to interfere with ownership claims), but writing the semaphore does not cause any overhead, as the replacement would have to write the cache line anyway.
One more thing,
I watched quite a few more games on your server (thank you) and ... well, Joker8014K (or something like that) made an illegal move this morning -- it moved a pawn from c6 to d5 without anything being on d5 to capture.
Then another program (not sure which one) made a move with the rook a1-c1 while there was a Knight on b1. (?)
Hmm, the viewer sometimes misses the end of a game (because the file is already overwritten by the time it does the next probe), and then continues to apply moves of the next game to the position of the old game. (The viewer is completely stupid: it just copies the from-square to the to-square, no matter what is there or if it is empty.) I pause between games in order to give all viewers an opportunity to poll, but if one of the viewers is occupied for very long by a backlog of moves (which typically happens if you enter a long game near the end) the pause is not long enough.

I cannot imagine that illegal moves would really be played: I have WinBoard_F set for legality checking, and most opponents would not accept illegal moves either...

hristo

Re: Repetition detection structure.

Post by hristo » Thu Mar 06, 2008 3:28 am

wgarvin wrote:
hristo wrote:Here I disagree with you. The "other things" that can be improved are never-ending and it makes perfect sense to improve the little things that you are certain of, while still dwelling on the heavy/important optimization issues. This is a practical approach to improving software and it makes good sense, IMO.
It's important not to lose sight of the fact that every optimization has a cost. There's a tension between "optimization" and all the other factors that lead to good code. Optimizing tends to make your code harder to read, harder to debug, harder to maintain and harder to change.
As a general rule the above is true.
There are, however, a few issues of a development cycle that must be considered, IMO.
First, you are putting way too much blame on optimizations, painting an inaccurate picture. One shouldn't be discouraged from looking for optimizations. In fact, 'optimizations' should be considered a natural aspect of a proper development cycle.

The major issue is that people often try to optimize too early and that causes the low level optimizations (the ones that would yield minor improvements) to dictate higher level design decisions. This is not a problem of 'optimizations' but is instead a problem of improperly set goals and lack of experience.

As a piece of code becomes mature it also becomes more complicated -- and this is expressed in the general interdependence of the code.

The code can get harder to debug for a myriad of reasons; extreme optimization is just one of them.

The code can become harder to maintain and change because of its complexity and design, and this is completely independent of any optimizations.

The code is hard to understand if it lacks proper documentation, regardless of whether there are explicit optimizations or not.

Putting undue emphasis on the dangers of optimization doesn't establish a proper attitude for those who want to (and in some cases must) optimize their software.
Optimization is no different than debugging your code ... it is just a part of the development cycle and the important aspect here is to outline when (and where) optimizations should occur.
wgarvin wrote: Sometimes, the piece of code affected by the optimization is very small and localized and doesn't mess up the program very much, so it's worth doing even if the gain is small.

Regression testing!
Opaque changes that enhance the software and can be regression tested are always a good candidate. (The situation becomes more intricate when such changes are in a piece of code that is a candidate for purge, but that is a different situation.)

In this sense, one must have a good idea (plan) of where the software is going, and then make sure that the software's foundations are solid.

The problem, often, is that the developer has no clear vision (plan, design), and without a clear design any optimization becomes an ad-hoc exercise.
wgarvin wrote: Or maybe it affects a lot of code, but you can get big performance gains by doing the optimization, and so it's worth dealing with the decreased readability and maintainability.
The decrease in maintainability and readability is not a function of optimizations alone. Assigning such a relation is misleading, IMO, and puts an unreasonable burden (cost) on optimizations.

The interfaces (internal or external) that a program uses can (and should) hide any implementation details from the overall application complexity. It doesn't matter how "foo" is implemented so long as it behaves properly and doesn't violate the designed interface.

Before thinking about optimizations of any sort, one should concentrate on the overall design and define the proper interfaces that will be used within the application.
wgarvin wrote: (Just remember to put comments everywhere in the code so that when you come back to fix a bug in it 6 months later you can figure out wtf it does!)
Here ... fixed the above, for you! (in bold) ;-)
wgarvin wrote: And still other times, the gain from the optimization is small (like, less than 0.51% ?) and you are better off just writing the simple code that you will be able to maintain more easily later.
"Simple" being a relative term, in some cases, but otherwise we agree.
wgarvin wrote: Simplicity is a very valuable thing when programming. A very valuable thing. Many people overlook it. Just remember that when you're optimizing some code and it's growing to be 2x as long and 3x as complex as the original! Optimization has costs, and you only want to pay those costs if they are justified by real and significant performance gains.
Simplicity goes out of the window if you enhance the software in any way, even if you don't optimize it at all. Soon enough you are at a point where the software is way too complicated and there is no easy way out of that. So, again, "optimization" is not the root of the problem.

Regards,
Hristo

Graham Banks
Posts: 33216
Joined: Sun Feb 26, 2006 9:52 am
Location: Auckland, NZ

Re: Repetition detection structure.

Post by Graham Banks » Fri Mar 07, 2008 5:59 am

The first game cited should have been a draw, but Joker kept avoiding it for some reason.
I don't think that the GUI can be blamed for that.

I can't see anything wrong with the second game cited.

I can't see a problem with the third game either.

There is nothing wrong with Buzz's decision in the extra game cited.

Regards, Graham.
My email addresses:
gbanksnz at gmail.com
gbanksnz at yahoo.co.nz

hgm
Posts: 23772
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Repetition detection structure.

Post by hgm » Fri Mar 07, 2008 7:10 am

The first game is a theoretical draw in the position where it is adjudicated!

In the third game Joker is not checkmated, and you declare a loss. Why are you doing that? How can that be no problem? Aren't we playing Chess here, according to FIDE rules? If we are playing another game, why aren't the rules of this other game made public? If you would continue the game it would be a draw, as Garbochess seems to have no way to avoid the perpetual checking by white's queen. How can it be no problem that you change draw results into wins?

Forget the Buzz game, it was posted here for another reason.

Graham Banks
Posts: 33216
Joined: Sun Feb 26, 2006 9:52 am
Location: Auckland, NZ

Re: Repetition detection structure.

Post by Graham Banks » Fri Mar 07, 2008 7:27 am

hgm wrote:The first game is a theoretical draw in the position where it is adjudicated!

Is it?

In the third game Joker is not checkmated and you declare a loss. Why are you doing that? How can that be no problem? Aren't we playing Chess, here, according to FIDE rules? If we are playing another game, why aren't the rules of this other game made public? If you would continue the game it would be draw, as Garbochess seems to have no way to avoid the perpetual checking by white's queen. Are you sure? How can it be no problem that you change draw results into wins?

The GUI options are set to allow resigning. Tester's choice. Same as for CEGT testers unless they've changed since I was a member.

Forget the Buzz game, it was posted here for another reason.
Please post the diagrams of the positions in which you're disputing the results so that others can see and decide.

Regards, Graham.

Uri Blass
Posts: 8598
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: Repetition detection structure.

Post by Uri Blass » Fri Mar 07, 2008 8:04 am

Graham Banks wrote:
hgm wrote:The first game is a theoretical draw in the position where it is adjudicated!

Is it?

In the third game Joker is not checkmated and you declare a loss. Why are you doing that? How can that be no problem? Aren't we playing Chess, here, according to FIDE rules? If we are playing another game, why aren't the rules of this other game made public? If you would continue the game it would be draw, as Garbochess seems to have no way to avoid the perpetual checking by white's queen. Are you sure? How can it be no problem that you change draw results into wins?

The GUI options are set to allow resigning. Tester's choice. Same as for CEGT testers unless they've changed since I was a member.

Forget the Buzz game, it was posted here for another reason.
Please post the diagrams of the positions in which you're disputing the results so that others can see and decide.

Regards, Graham.
I am not sure about the Garbochess result in case of continuing the game.

Here is the diagram when the game was adjudicated, and the diagram from 39 moves earlier.
Maybe the 50-move rule could convince Garbochess to push a pawn, but I am unsure about it, and even assuming Garbo pushes the h-pawn, I am not sure Garbo could win it.

[D]2KQ2k1/2R2p2/6p1/8/8/1q6/7P/8 b - - 0 102

[D]8/4qpk1/6p1/Q7/8/8/R5KP/8 b - - 0 63

Note that the first game is a drawn position that both engines evaluated wrongly, and Arena adjudicated based on the evaluation. Here is the diagram of the position of the first game that was adjudicated against Joker.

[D]6QQ/8/8/8/1k1K4/8/5q2/8 w - - 0 182

Uri

hgm
Posts: 23772
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Repetition detection structure.

Post by hgm » Fri Mar 07, 2008 8:32 am

The last diagram Uri gives is a theoretical draw, as black can continue checking along the h-file when the white King is on the h-file (so that Qg8 cannot be interposed), or diagonally (in the direction a1-h8) when the King is on any other file (so that neither Queen can be interposed).

In the other position it is more difficult to give hard proof of a draw. But so what? The rules of Chess are not such that you have to prove you can reach a draw after 50 moves or a 3-fold repetition before you are allowed to claim one! And the only acceptable proof that you can win should be to actually do it!

I don't understand the remark about "GUI options set to allow resigning". That seems irrelevant. Joker certainly did not resign in this situation, so whether the GUI allows resigning or not cannot play a role. Resigning has nothing to do with adjudication. Except that you normally don't do it in drawn positions either!

Graham Banks
Posts: 33216
Joined: Sun Feb 26, 2006 9:52 am
Location: Auckland, NZ

Re: Repetition detection structure.

Post by Graham Banks » Fri Mar 07, 2008 8:44 am

hgm wrote: I don't understand the remark about "GUI options set to allow resigning". That seems irrelevant. Joker certainly did not resign in this situation, so whether the GUI allows resigning or not can not play a role. Resigning has nothing to do with adjudication. Except that you normally don't do it in draw positions either!
If you can prove it's a draw, I'll accept your argument.
I do have some experience as a chessplayer. I'm not in the habit of allowing a dead drawn position to be claimed as a win.

Regards, Graham.

hgm
Posts: 23772
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Repetition detection structure.

Post by hgm » Fri Mar 07, 2008 8:55 am

Well, I just gave the proof above, didn't I?

But this now starts to take a very strange twist: you awarded a zero to Joker when it was not checkmated, and when it did not resign. So how come I now have to prove anything? Seems to me you would have to prove not only that the position is lost for Joker, but that Amateur or Garbochess would actually have been able to win it!
