To summarize, playing strength is reduced by:

- Removing positional knowledge.
- Reducing search speed.
- Performing a MultiPV search and selecting a weaker move, within a configurable error range.

What do you think of the technique? How have you implemented UCI_LimitStrength?
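The third step could be sketched like this (a minimal illustration, not MadChess's actual code; the function name and centipawn scoring convention are my assumptions):

```python
import random

def pick_weakened_move(multipv, error_range_cp, rng=None):
    """Choose a move from MultiPV results whose score falls within
    error_range_cp centipawns of the best move's score.

    multipv: list of (move, score_cp) pairs, sorted best-first.
    """
    rng = rng or random.Random()
    best_score = multipv[0][1]
    # Candidates: any move not worse than the best by more than the error range.
    candidates = [move for move, score in multipv
                  if best_score - score <= error_range_cp]
    return rng.choice(candidates)


# With a zero error range, only the best move qualifies.
lines = [("e2e4", 30), ("d2d4", 22), ("g1f3", -45)]
print(pick_weakened_move(lines, 0))   # e2e4
```

Widening `error_range_cp` widens the pool of playable moves, which is how the error range maps onto weaker play.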
hgm wrote:
> I once proposed to consistently prune moves from the search, to simulate human oversight. So basically make a history table that keeps track of moves that have already occurred in the tree at previous iterations, and then, for each new move that appears in the current iteration, decide whether you will overlook that move, with a probability that increases with depth. When you decide to overlook a move, mark it as such in the table, and suppress thus-marked moves in future move generations (or discard them during move sorting).

Make the decision for pruning a function of the position's hash key and you avoid the history table altogether.
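Edmund's hash-key suggestion could look roughly like this (a sketch; the mixing constant and permille scale are my assumptions, and a real engine would mix the move as a fixed integer encoding rather than Python's salted string hash):

```python
def overlooked(zobrist_key, move, depth, base_permille=5):
    """Decide deterministically whether to overlook `move` in the position
    identified by `zobrist_key`. No history table is needed: the same
    (position, move) pair always yields the same decision.

    The overlook probability grows with depth, as hgm suggested.
    """
    # Cheap pseudo-random mix of position key and move. Note: Python's
    # str hash is salted per process; within one search it is stable,
    # which is all this sketch needs.
    h = (zobrist_key ^ hash(move)) * 0x9E3779B97F4A7C15 & 0xFFFFFFFFFFFFFFFF
    threshold = base_permille * depth   # permille chance at this depth
    return (h % 1000) < threshold
```

Because the decision is a pure function of (position key, move, depth), repeated visits to a node re-derive the same pruning set without any table lookup.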
emadsen wrote:
> I spent time writing an algorithm to reduce the playing strength of my chess engine. Thought I'd share it here and ask my fellow programmers what they think. To summarize, playing strength is reduced by...

Thank you for sharing.
ELO calibration is done by interpolating engine parameters between the two personalities that bound the given ELO. Details are on my website. See The MadChess UCI_LimitStrength Algorithm.
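The interpolation between bounding personalities could be sketched as follows (my own illustration; the parameter names are invented, not MadChess's actual settings):

```python
def interpolate_parameters(elo, lo_elo, lo_params, hi_elo, hi_params):
    """Linearly interpolate engine parameters between the two
    personalities bounding the requested ELO."""
    t = (elo - lo_elo) / (hi_elo - lo_elo)
    return {name: lo_params[name] + t * (hi_params[name] - lo_params[name])
            for name in lo_params}


# Hypothetical personalities calibrated at 1600 and 2000 ELO.
p1600 = {"error_range_cp": 200, "nps_limit": 50_000}
p2000 = {"error_range_cp": 50,  "nps_limit": 250_000}
print(interpolate_parameters(1800, 1600, p1600, 2000, p2000))
```

At 1800 ELO, halfway between the two personalities, each parameter lands halfway between its calibrated endpoints.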
Edmund wrote:
> Make the decision for pruning a function of the position's hash key and you avoid the history table altogether.

That is not what I mean by consistent. It would prune different moves in each node. I want it to prune the same move in the entire tree. When humans overlook a move, they overlook it anywhere.
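Pruning the same move everywhere just means dropping the position key from the decision (again a sketch; the per-game `seed` and constants are my assumptions):

```python
def overlooked_everywhere(move, depth, seed, base_permille=5):
    """Overlook decision that depends on the move's identity alone (plus
    a per-game seed), never on the position, so an overlooked move is
    overlooked in every node of the tree.

    The threshold grows with depth, so once a move is overlooked at some
    depth it stays overlooked at all greater depths: oversight is
    consistent both across nodes and down the tree.
    """
    h = (hash(move) ^ seed) * 0x9E3779B97F4A7C15 & 0xFFFFFFFFFFFFFFFF
    return (h % 1000) < base_permille * depth
```

Reseeding per game varies which moves a given "player" tends to miss, while keeping each game internally consistent.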
PK wrote:
> my approach is to prune random moves in the search

Thanks Pawel. I like the idea of overlooking moves. It seems like a good simulation of human limitations. I was concerned, though, about making a mess of the hash table. To simulate higher ELO levels I need a coherent hash table, so the engine can search deep enough to see tactics. If the engine overlooks random moves, different moves will be overlooked at depth 1, depth 2, depth 3, etc. Scores returned by the hash table are then meaningless and will mislead the search, slowing it down. Does the hash score represent searching 30 moves on ply 1 and 36 on ply 2... or does it represent searching 28 moves, then 34 moves? Which moves?
hgm wrote:
> I want it to prune the same move in the entire tree. When humans overlook a move, they overlook it anywhere.

I agree. This addresses my concern about hash table coherence. If the same moves are always overlooked, scores from the hash table are valid (for a given ELO).
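hgm's history-table formulation could be sketched roughly like this (my reading of the proposal; the class structure and names are assumptions):

```python
import random

class OversightTable:
    """Tracks moves seen in earlier iterations. The overlook decision is
    made once, the first time a move is encountered, then reused for the
    rest of the game, so the same move is overlooked in the entire tree
    and hash-table scores stay coherent for a given strength setting."""

    def __init__(self, base_permille=5, rng=None):
        self.base_permille = base_permille
        self.rng = rng or random.Random()
        self.decisions = {}   # move -> True if overlooked

    def overlooked(self, move, depth):
        if move not in self.decisions:
            # Probability of oversight increases with depth.
            p = min(self.base_permille * depth, 1000)
            self.decisions[move] = self.rng.random() * 1000 < p
        return self.decisions[move]


# Move sorting can then simply discard marked moves:
def filter_moves(moves, table, depth):
    return [m for m in moves if not table.overlooked(m, depth)]
```

Suppressing marked moves during move sorting, as hgm suggests, avoids touching the move generator at all.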