maksimKorzh wrote: ↑
Thu Sep 10, 2020 10:12 am
Your proofs are unarguable but let me ask one more clarification question:
So what if we call your checkup() NOT on EVERY move within the movegen, but ON DEMAND.
Now, assuming good move ordering and LOTS of nodes pruned, we would have, say, 100,000 nodes to traverse at depth 7.
That 100,000 is MUCH LESS than the number of nodes traversed in PERFT at depth 8.
So we would call checkup() (or ZobristKey(), per Bruce Moreland) only 100,000 times instead of 100,000,000,000 times.
This would be slower, but would it be 60% slower? Please clarify.
As Andreas says below, you can't compare these directly. Obviously doing a full Zobrist calculation in an alpha-beta search has less of an impact than doing it in perft, because alpha-beta does not visit all positions. So overall, the engine will not become 60% slower, but there will be an impact.
This is true for anything you calculate on the fly:
- material count
- not keeping a piece list but finding each piece every time you need to move it
- pawn locations, even...
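To make the cost difference concrete, here is a minimal Rust sketch (all names hypothetical, not taken from any of the engines discussed) contrasting a full Zobrist recomputation, which walks every occupied square on each call, with an incremental update, which XORs only the squares a move touches. The key table is filled with a deterministic mixer purely so the example is self-contained; a real engine would seed it with a good PRNG.

```rust
const PIECES: usize = 12; // 6 piece types x 2 colors
const SQUARES: usize = 64;

// Hypothetical key table. A xorshift-style mixer is used here only to
// produce deterministic, distinct keys; a real engine uses random u64s.
fn zobrist_table() -> [[u64; SQUARES]; PIECES] {
    let mut table = [[0u64; SQUARES]; PIECES];
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
    for p in 0..PIECES {
        for s in 0..SQUARES {
            state ^= state << 13;
            state ^= state >> 7;
            state ^= state << 17;
            table[p][s] = state;
        }
    }
    table
}

// Full recomputation: fold over every (piece, square) pair on the board.
// This is what an on-demand hashing scheme pays on every call.
fn full_hash(board: &[(usize, usize)], table: &[[u64; SQUARES]; PIECES]) -> u64 {
    board.iter().fold(0, |h, &(piece, sq)| h ^ table[piece][sq])
}

fn main() {
    let table = zobrist_table();
    // Toy position: piece 0 on square 12, piece 6 on square 52.
    let mut board = vec![(0usize, 12usize), (6usize, 52usize)];
    let mut hash = full_hash(&board, &table);

    // Incremental update for moving piece 0 from square 12 to square 28:
    // two XORs instead of a full board scan.
    hash ^= table[0][12]; // XOR the piece out of its old square
    hash ^= table[0][28]; // XOR it onto the new square
    board[0] = (0, 28);

    // The incrementally updated hash matches a full recomputation.
    assert_eq!(hash, full_hash(&board, &table));
    println!("ok");
}
```

The point of the sketch: because XOR is its own inverse, the incremental path does constant work per move, while the full recomputation scales with the number of pieces, which is exactly the cost being multiplied by the node count in the perft comparison above.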
Now let me explain why I want to avoid incremental updates (I know nobody cares, but still):
I'm writing a didactic bitboard chess engine and covering it on YouTube for a novice audience.
Incremental updates of hash keys (even though I have a general idea of how to do this, based on the VICE implementation) might lead to bugs if one is not careful, and I'm trying to AVOID every possible pitfall where I can, so the viewers can catch the gist without getting bogged down in implementation details.
In my honest opinion, creating a chess engine is one huge implementation detail; get ONE thing wrong, and it either doesn't work at all, plays poorly, or plays incorrectly.
Now, if my engine gives up some Elo to otherwise IDENTICAL engines that do use incremental updates, THIS IS GOOD,
because it gives my viewers a prospect of improving the engine on their own. So what I'm trying to build can be described as:
THE MINIMAL WORKING IMPLEMENTATION THAT PROVES THE CONCEPT OR IDEA.
And I do sacrifice a little performance from time to time just to make the code EASIER TO UNDERSTAND and to POINT AT THE GIST of a technique WITHOUT GETTING DISTRACTED by IMPLEMENTATION DETAILS.
The concepts of an alpha/beta chess engine are quite easy to grasp... it's the implementation details that make creating a good chess engine difficult. Even your use of a 0x88 board is an implementation detail; you don't -need- to check for off-board squares if your move generator takes moving off the board into account; but then you have to count squares to see whether a piece reaches the edge or goes past it, instead of just moving it and then noticing it's off the board.
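For readers who haven't seen the 0x88 trick: on a conceptual 16x8 board indexed as rank*16 + file, every invalid square index sets a bit in the 0x88 mask, so one AND replaces the edge-counting described above. A small illustrative Rust sketch (function names are mine, not from any engine):

```rust
// 0x88 square index: rank * 16 + file, on a 16-file-wide board where
// only files 0..=7 are real.
fn square(rank: i32, file: i32) -> i32 {
    rank * 16 + file
}

// A square is on the board iff neither the 0x08 bit (file >= 8) nor
// the 0x80 bit (rank >= 8) is set. This also works for any index
// reached by adding a move offset to a valid square: negative results
// set high bits too, so they fail the test as well.
fn on_board(sq: i32) -> bool {
    sq & 0x88 == 0
}

fn main() {
    assert!(on_board(square(0, 0)));       // a1
    assert!(on_board(square(7, 7)));       // h8
    assert!(!on_board(square(3, 8)));      // one file past the h-file
    assert!(!on_board(square(0, 4) - 17)); // diagonal step off the bottom edge
    println!("ok");
}
```

So the move generator can just add the offset and test the result, with no per-direction square counting; the trade-off is a 128-entry board array instead of a 64-entry one.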
I understand you want to keep this as simple as possible, but as Einstein said: "Make it as simple as possible... but not simpler." You can leave out things such as incremental hashing, piece lists, material count, etc., in the first versions (if you want to), but you should at least mention that it is better to have incremental updates.
I'm leaving out the transposition table in my first version. Heck... maybe I'll even leave out quiescence search. Because I don't have a transposition table, I could have even left out zobrist hashing entirely for now. But I'm of the opinion that there are two kinds of implementations:
1: implement a working routine as fast as possible, as a prototype dummy, to be replaced by 2.
2: implement a routine as best as you can.
Even "the best you can do" might not be the absolute best yet (it probably isn't in my engine), but using a sub-par implementation if that is not absolutely necessary is not the right way to go. (In my opinion.)
I wish I could have an engine fitting the goals described above to learn from, because the open source engines we have depend on their particular implementations SO MUCH that the implementation itself distracts from the gist of the techniques.
What you mean is that many open source engines implement LOTS of features, so it becomes very hard to distill the absolute basics from them. I agree. It is not trivial to read the code of an open source chess engine if you have no experience. Therefore I'm creating my engine in stages.
The very first working versions will only have these parts:
- Board (Bitboard representation)
- Move Generator (Magic Bitboard version)
- Clean Alpha/Beta search with NO features except MAYBE quiescence search
- Zobrist hashing (because it's in there already and I'm not going to remove it now)
- Time management (needed for actually playing, and not losing on time)
- UCI interface (and a simple console interface for testing)
So the one and only thing it's going to do is play (almost) brute-force chess, and it'll be very weak. But the things it can do are implemented in the fastest/cleanest way I know in Rust at the time of writing.
Then, with each new version, I can add a single function, and describe how it is added, and what effect it will have. That version can test against the previous version, and in a gauntlet tournament with 9 other engines. That way, I can gauge the impact of each added function.
Since I haven't yet seen an engine like the one described, I decided to make my own, so that NOOBS LIKE ME who have BIG ISSUES UNDERSTANDING the "best practices" of implementation ACTUALLY BECOME ABLE to pick up COMPLICATED TOPICS from scratch; later on they can come here with reasonable questions (not noob questions like the ones I ask) so you guys can then help them IMPLEMENT THINGS THE RIGHT WAY.
I understand your idea; I try to do exactly the same thing with regard to my engine. Still, I think that building an overly simple implementation is not the right thing to do, because such an engine will become very hard to maintain in the future.
One thing I've seen is that you sometimes have very big functions with many indentation levels; in one of your videos I noticed clashing global variables. Those things come from trying to keep the implementation as straightforward as possible, but in the end they make the code harder to reason about.