Chess variant engine questions

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

No4b
Posts: 105
Joined: Thu Jun 18, 2020 3:21 pm
Location: Moscow
Full name: Alexander Litov

Chess variant engine questions

Post by No4b »

Hello all!
Recently I have been working on the AI for my Unity game, which can be reduced to a chess variant.
I am completely new to computer chess, and mostly new to programming in general, so I have some questions.

1) How do people do "levels" in chess engines?
As my engine is first and foremost a game AI, some sort of level system is clearly needed.
Currently, at its maximum strength, my program manages to lose or narrowly draw 20-game matches against some weak chess engines (I tested against EnkoChess and ShallowBlue), so I guess its maximum strength would be entertaining enough for most people (please correct me if I'm mistaken). But lower difficulties are obviously needed for people who have not played chess or any similar games.

For now I go about it by restricting search depth: 5-7 plies for the "hardest" level (not counting extensions and QSearch; maxDepth depends on the game phase and is higher in the endgame), 3-5 for "medium" and 1-3 for the "weakest", with difficulty "0" being a random mover. But I feel this curve is not smooth: I am a somewhat decent chess player, and I feel almost no difference between the 5-7 ply and 3-5 ply levels (meaning I can beat both, but I have to think for a while before making a move). The random mover is far weaker than a 1-ply search, so the current difficulty curve is clearly far from ideal.
One of my thoughts was to turn off QSearch for some of the difficulties, but as that is my only idea, I hope to find help here.


2) I currently use Zobrist hashing only for 50-move and threefold-repetition detection, but I have thought of adding a small transposition table to help speed up the search (my goal is a depth-6 search done in under 1 second on most machines). But I am somewhat worried about bugs with it, because my game already has ~30 different piece types and can be (and most likely will be) scaled up to 127. So I am worried about hash index clashes.
To be on the safe side I could check how many piece types are on the board in the current game before generating the Zobrist keys (currently they are all generated at app startup, for all piece types), but then again, how many different piece types can I have on the board without clashes? I am completely clueless here.
On the bright side, I never plan for my engine to search deeper than depth 6-7, maybe 8 in very simple positions (not counting my single in-check extension), so maybe there is simply no way for keys to clash in such a shallow search?

3) What would be a way (if there even is one) for such a simple program to improve its play in closed positions? Right now it is clearly really bad at them, and by the nature of my variant such positions will arise quite often (for instance, there is a piece that can block a square from play for the rest of the game, and some squares can be blocked from play from the starting position, and so on).
For now I have a really simple eval: higher scores for pieces closer to the center (no PSQTs, because there are too many piece types and the board size can vary from 4x4 to 10x11), an open-file bonus for rooks (and all rook-like pieces), a doubled-pawn penalty (also applied to a pawn with a "blocked" square on its path), and pawn values that grow as the pawn approaches its promotion square.
The next logical step would be to value bishop-like pieces higher on free diagonals (I was too lazy to do that before), value knights higher in cramped positions, and maybe somehow encourage a breakthrough in a winning but cramped position. Beyond that I don't know what to do (and even these ideas may be bad?)

Thanks in advance, Alexander
Angrim
Posts: 97
Joined: Mon Jun 25, 2012 10:16 pm
Location: Forks, WA
Full name: Ben Nye

Re: Chess variant engine questions

Post by Angrim »

One method to get an in-between strength is to do a shallow search with a wide window, then pick at random from the moves that score within that window.
For example, if your window is 20 centipawns, and you search 4 plies deep and get a score of 50, then search all moves with a window of 30 to 50, and any move that scores more than 30 is given a random chance to be chosen. If you have much in the way of knowledge in your eval, then this will hurt your engine's strategic ability without hurting its tactics, which already got axed by the shallow search depth.

As an example, 1 ply deep with qsearch and a window of 210 points would play pretty crazily, but it wouldn't miss a mate in 1 or a free queen or rook, though it might trade a knight for a pawn or a rook for a knight. Reasonable for an extreme newbie.
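The selection step Angrim describes can be sketched as follows (Python; `pick_weakened_move`, the move names, and the scores are all made-up illustrations — in a real engine the scores would come from a fixed shallow search plus qsearch, as described above):

```python
import random

def pick_weakened_move(scored_moves, window=20, rng=random):
    """Given {move: centipawn score from a shallow search}, return a move
    chosen uniformly at random among those scoring within `window` of the
    best move.  Only the selection step is shown; producing the scores is
    the engine's own shallow search."""
    best = max(scored_moves.values())
    candidates = [m for m, s in scored_moves.items() if s >= best - window]
    return rng.choice(candidates)

# Hypothetical scores: with a 20 cp window the engine may play e4, d4 or
# Nf3, but it will never play the piece-losing Qh5 line scored at -250.
scores = {"e4": 50, "d4": 45, "Nf3": 32, "a4": 5, "Qh5": -250}
move = pick_weakened_move(scores, window=20)
```

Widening `window` weakens the play smoothly, which is exactly the knob the level system needs.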
Angrim
Posts: 97
Joined: Mon Jun 25, 2012 10:16 pm
Location: Forks, WA
Full name: Ben Nye

Re: Chess variant engine questions

Post by Angrim »

2) The risk of hash collisions should only be affected by speed and the number of hash table entries, not the number of piece types, at least if you do it right. Just have an array of random 64-bit numbers indexed by piece type and location on the board, so as you get more piece types your array gets bigger.
3) Having a hash table is a great way to help with closed positions, as long as you aren't using a fixed depth. With fewer open squares to move to, the odds of a transposition go up, and so will your search depth for a given amount of time.
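The point about piece types can be illustrated with a minimal sketch (Python; the board size and piece-type count are placeholders for the variant's own numbers, and a real engine would update the hash incrementally on make/unmake rather than recompute it):

```python
import random

NUM_SQUARES = 110          # e.g. a 10x11 board; adjust to the variant
NUM_PIECE_TYPES = 127      # growing this only grows the table below
rng = random.Random(2020)  # fixed seed so keys are reproducible

# One random 64-bit key per (piece type, colour, square).  More piece
# types just mean a bigger table; the collision risk is governed by the
# 64-bit key length, not by how many keys exist.
ZOBRIST = [[[rng.getrandbits(64) for _ in range(NUM_SQUARES)]
            for _ in range(2)]
           for _ in range(NUM_PIECE_TYPES)]

def hash_position(pieces):
    """pieces: iterable of (piece_type, colour, square) tuples."""
    h = 0
    for ptype, colour, square in pieces:
        h ^= ZOBRIST[ptype][colour][square]
    return h
```

Because XOR is its own inverse, moving a piece only needs `h ^= key(from) ^ key(to)`, so the table size never affects per-move cost.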
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Chess variant engine questions

Post by hgm »

The largest variant I have built an engine for so far is Tenjiku Shogi (16x16 board, 78 pieces per player). Zobrist hashing is not a problem; as Angrim mentioned, key collisions depend only on key length and the filling fraction of the hash table. I use a 64-bit key. The table of Zobrist basis keys would get uncomfortably large (since most types can promote to another type, there are about 64 piece types), so I generate them on the fly as a product of a (32-bit) piece key and a square key.
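A sketch of this on-the-fly key generation (Python; the table sizes are illustrative, and forcing the 32-bit keys to be odd is my own addition so the 64-bit product can never be zero — the post itself only specifies the product of a piece key and a square key):

```python
import random

rng = random.Random(1)
MASK64 = (1 << 64) - 1

# One 32-bit random key per piece type and per square; the 64-bit basis
# key is their product, computed on demand, so the stored tables stay
# tiny even with ~64 promotable piece types on a 16x16 board.
piece_key  = [rng.getrandbits(32) | 1 for _ in range(64)]   # forced odd
square_key = [rng.getrandbits(32) | 1 for _ in range(256)]  # 16x16 board

def zobrist(ptype, square):
    """64-bit basis key for a piece of type `ptype` on `square`."""
    return (piece_key[ptype] * square_key[square]) & MASK64
```

This trades a multiply per key lookup for a table that is 64 + 256 entries instead of 64 × 256.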

Having all pieces share the same centralization table (even when weighting it with a per-type factor) did not work very well; it gave rise to pointless moves where a slider (repeatedly) stepped one square closer to the center as soon as a square became available, in a direction in which it could also slide. Tenjiku Shogi has quasi-one-dimensional pieces, which slide in one direction but step in the perpendicular one; centralizing in the stepping direction then makes some sense, but in the sliding direction not at all. So I diversified the PSTs a bit to at least account for that; many types still share the same PST, but I divided the types into groups with similar behavior and tailored a table for each group. E.g. there are many 'generals', which all move as a King with some moves missing, and they need a completely different table from the pieces that move as Queens with a range restriction in some directions. My conclusion was that for pieces without diagonal slides centralizing makes very little sense. Short-range leapers mainly suffer from being close to the edge (which limits their 'future mobility'), but as soon as they are well away from it, it doesn't really matter where in the central area they are. For diagonal sliders you want both their forward moves to hit the enemy camp, and for that they have to be centralized much more. Purely orthogonal sliders can best use a completely neutral table, their positioning being driven only by mobility considerations (half-open files and such).

In very large games it helps to make your search spend more on 'coherent' sequences of moves: the war often 'factorizes' into a number of independent battles occurring in different places, like in a combinatorial game, and if the search haphazardly moves now here, then there, most branches will not reach a useful depth in any of the battles, even when you do take account of transpositions. So you would want moves that 'switch attention' from one battle to another to be reduced.
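One very simple proxy for "switching attention" is the distance between the previous move's destination and the next move's origin; the sketch below reduces distant moves by one ply (Python; the names, the Chebyshev metric, and the threshold are all my own assumptions — the post does not prescribe a concrete criterion):

```python
def chebyshev(a, b):
    """Distance in king moves between squares given as (file, rank)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def reduction(prev_to, move_from, threshold=4):
    """Extra depth reduction for moves that jump to a distant part of
    the board, i.e. that 'switch attention' away from the battle the
    previous move was fighting.  Root moves (no previous move) are
    never reduced."""
    if prev_to is None:
        return 0
    return 1 if chebyshev(prev_to, move_from) > threshold else 0
```

Concentrating depth on locally coherent lines this way is the point: each battle then gets searched deeply instead of every battle shallowly.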

Having an opponent that occasionally donates a Knight for a Pawn without reason doesn't seem very satisfactory. A more natural way to build in tactical blunders would be to consistently prune one particular move throughout the entire tree, with a probability that increases with the distance to the root of the position where this move first became possible. You could implement this by maintaining a table of all possible moves (like for the history heuristic), in which you record whether the search has already used that move. And when you haven't, you could decide whether you will allow this move in the current search or not, with a probability that depends on the ply level of the current node, and write that decision in the table.
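The move-blindness table outlined above might look like this (Python sketch; the class and parameter names are my own, and a real engine would index by a packed move and clear the table between searches):

```python
import random

class Blunderer:
    """Per-search table deciding once, for each (from, to) move, whether
    the search is allowed to use it at all.  The blindness probability
    grows with the ply at which the move is first considered, so deep
    tactics are overlooked more often than immediate ones, and the
    decision is consistent throughout the whole tree."""

    def __init__(self, p_per_ply=0.05, rng=random):
        self.p_per_ply = p_per_ply
        self.rng = rng
        self.decision = {}           # (from_sq, to_sq) -> allowed?

    def allowed(self, from_sq, to_sq, ply):
        key = (from_sq, to_sq)
        if key not in self.decision:            # first time move is seen
            p_blind = min(1.0, self.p_per_ply * ply)
            self.decision[key] = self.rng.random() >= p_blind
        return self.decision[key]
```

Because `p_blind` is zero at the root, the engine never overlooks a move it could play immediately; it only fails to foresee it further down the tree, which is what a human blunder looks like.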

As for closed positions; the easiest way is to just have the engine try to avoid reaching such positions. For this you would need an asymmetric evaluation. E.g. add a penalty for the side the AI is playing when there are many Pawns (especially locked head-to-head) or inaccessible squares. You could do this in the Pawn evaluation, and then hash that, so that it isn't really relevant how time-consuming the determination of closedness is, as you will have 99% hit rate to get it from the Pawn hash. This solves the problem in a natural way, because in positions that are nearly closed, you provide the engine with a goal to prevent further closure, or try to re-open it, at a time where it otherwise would just indecisively shuffle its pieces because it cannot see any tactical gains.
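A minimal sketch of such an asymmetric closedness term (Python; the weight, the coordinate convention, and lumping locked pawns together with blocked squares are all assumptions — the result would be cached in the pawn hash as described, and subtracted only from the AI's side of the evaluation):

```python
def closedness_penalty(pawns_white, pawns_black, blocked, weight=6):
    """Centipawn penalty charged to the AI side, growing with the number
    of head-to-head pawn locks and permanently blocked squares.

    Squares are (file, rank) tuples with White moving toward higher
    ranks, so a white pawn on (f, r) is locked iff a black pawn sits
    directly ahead on (f, r + 1)."""
    locks = sum(1 for (f, r) in pawns_white if (f, r + 1) in pawns_black)
    return weight * (locks + len(blocked))
```

Since the term depends only on the pawn structure and the blocked squares, it fits naturally into a pawn-hash entry and costs almost nothing per node.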
No4b
Posts: 105
Joined: Thu Jun 18, 2020 3:21 pm
Location: Moscow
Full name: Alexander Litov

Re: Chess variant engine questions

Post by No4b »

Thank you for your suggestions.
Angrim wrote: 2) The risk of hash collisions should only be affected by speed and the number of hash table entries, not the number of piece types, at least if you do it right. Just have an array of random 64-bit numbers indexed by piece type and location on the board, so as you get more piece types your array gets bigger.
3) Having a hash table is a great way to help with closed positions, as long as you aren't using a fixed depth. With fewer open squares to move to, the odds of a transposition go up, and so will your search depth for a given amount of time.
I am happy that I can just go ahead and implement a hash table. I started implementing it yesterday; it still seems to have some bugs, but with the help of previous topics on this forum I am getting through it step by step.

As for AI leveling systems, I think both approaches can be implemented and tested, but I feel Angrim's variant is better suited for low levels (especially as a replacement for the random mover), while H.G. Muller's may be used for higher levels. Still, I'll try to test both approaches with as many people as I can.
hgm wrote: Having all pieces share the same centralization table (even when weighting it with a per-type factor) did not work very well; it gave rise to pointless moves where a slider (repeatedly) stepped one square closer to the center as soon as a square became available, in a direction in which it could also slide. Tenjiku Shogi has quasi-one-dimensional pieces, which slide in one direction but step in the perpendicular one; centralizing in the stepping direction then makes some sense, but in the sliding direction not at all. So I diversified the PSTs a bit to at least account for that; many types still share the same PST, but I divided the types into groups with similar behavior and tailored a table for each group. E.g. there are many 'generals', which all move as a King with some moves missing, and they need a completely different table from the pieces that move as Queens with a range restriction in some directions. My conclusion was that for pieces without diagonal slides centralizing makes very little sense. Short-range leapers mainly suffer from being close to the edge (which limits their 'future mobility'), but as soon as they are well away from it, it doesn't really matter where in the central area they are. For diagonal sliders you want both their forward moves to hit the enemy camp, and for that they have to be centralized much more. Purely orthogonal sliders can best use a completely neutral table, their positioning being driven only by mobility considerations (half-open files and such).
Thank you for sharing this info!
Based on observing games played by my engine, I had thought of turning off the centralization table for rooks, but the suggestion to have a limited number of tables divided by mobility type never crossed my mind. I will definitely implement it, and I am really curious how much of an impact it will have.
hgm wrote: As for closed positions; the easiest way is to just have the engine try to avoid reaching such positions. For this you would need an asymmetric evaluation. E.g. add a penalty for the side the AI is playing when there are many Pawns (especially locked head-to-head) or inaccessible squares. You could do this in the Pawn evaluation, and then hash that, so that it isn't really relevant how time-consuming the determination of closedness is, as you will have 99% hit rate to get it from the Pawn hash. This solves the problem in a natural way, because in positions that are nearly closed, you provide the engine with a goal to prevent further closure, or try to re-open it, at a time where it otherwise would just indecisively shuffle its pieces because it cannot see any tactical gains.
Thank you! I'll try to implement it.

By the way, how many entries should a pawn hash table hold?
(I know people usually just allocate a fixed amount of memory for it, but as I am not really experienced in programming, it would be simpler for me to just know approximately how many entries I would need.)
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Chess variant engine questions

Post by hgm »

A megabyte of Pawn hash will get you a long way. It depends a bit on how large your entries are, of course; the Pawn hash can be used for many otherwise hard-to-calculate pieces of information about the Pawn structure besides the overall evaluation, such as which files are (half-)open, what your most advanced passer is, what its promotion square is and how far it still has to go, how good the Pawn shield is on each wing, pawn counts per square shade... And it can also depend on how many Pawns participate in your variant.

The best way is to just maintain counters for hits and misses in the probing code, and print them after the search. Then you can set the size to something that gives you a hit rate you consider good enough.
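A probe with the suggested hit/miss counters might look like this (Python sketch of an always-replace table; the class and method names are illustrative, not anyone's actual code):

```python
class PawnHash:
    """Minimal always-replace hash table with hit/miss instrumentation."""

    def __init__(self, size=1 << 16):    # power of two, so we can mask
        self.table = [None] * size
        self.mask = size - 1
        self.hits = self.misses = 0

    def probe(self, key):
        """Return the stored value for `key`, or None on a miss."""
        entry = self.table[key & self.mask]
        if entry is not None and entry[0] == key:   # verify full key
            self.hits += 1
            return entry[1]
        self.misses += 1
        return None

    def store(self, key, value):
        self.table[key & self.mask] = (key, value)  # always replace

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Printing `hit_rate()` after each search gives exactly the number needed to size the table.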
No4b
Posts: 105
Joined: Thu Jun 18, 2020 3:21 pm
Location: Moscow
Full name: Alexander Litov

Re: Chess variant engine questions

Post by No4b »

hgm wrote: Sun Jun 28, 2020 4:27 pm A megabyte of Pawn hash will get you a long way. It depends a bit on how large your entries are, of course; the Pawn hash can be used for many otherwise hard-to-calculate pieces of information about the Pawn structure besides the overall evaluation, such as which files are (half-)open, what your most advanced passer is, what its promotion square is and how far it still has to go, how good the Pawn shield is on each wing, pawn counts per square shade... And it can also depend on how many Pawns participate in your variant.
Got it. Everything should be tested.
Thank you for the suggestions about what to store in it. I will try to implement it next, along with the eval features you suggested earlier (and I hope to eventually be able to stop improving the engine, because it's too addictive :) )

I should be aiming at something like a ~95% hit rate in the pawn hash table, right (I think I read that somewhere)?

I just finished a regular TT implementation and was really happy with the search speed-up (especially in the endgame, just wow). After reading your comment I got curious about the hit rate of the main TT, so I ran a couple of games with a hit-rate report after each move.

It is usually ~50-70%, with occasional drops to ~40% or rises to 90%. Is this an OK hit rate for the main TT (I use always-replace, if that matters)?
Sorry if this is a stupid question, but I couldn't find an exact answer on the forum after some (admittedly perhaps not very deep) research.