Progress on Loki


niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Progress on Loki

Post by niel5946 »

Hi :)

Following a request in the general subforum, I am now creating a thread concerning the development of Loki. The reasons for creating it are twofold: 1) to - perhaps somewhat sporadically - document the development of Loki, and 2) to have a place where I can gather my thoughts and ideas during development, which will hopefully also be useful for others.
As a starting point, I have collected some information about the engine below.

A quick introduction to Loki: Loki is a UCI-compatible chess engine written in C++17. It is the second engine I have written, the first one being Copper. Copper was a mess and had a lot of features either (too) heavily influenced by or taken directly from Vice (author: BluefeverSoft on YouTube). It was written because I needed a project to work on in order to learn C++. Secondly, chess is an interest of mine, and I had for a long time been wondering how on earth chess engines worked (I started out with the classic: "Don't they just look ahead? That sounds pretty easy... Wonder why it took a supercomputer, IBM Deep Blue, to surpass human chess knowledge").
After getting stuck with the development of Copper (primarily due to many severe bugs), wanting a multi-threaded engine, and wanting to create an engine that I could actually call my own, I started working on Loki. It is a work in progress and will probably not become a top-tier chess engine. But that is fine, because making a 3400+ rated engine wasn't my goal to start with. Although it would be lovely to one day make it strong enough to beat Magnus Carlsen :D

Features and their implementations: Loki uses bitboards and a piece list as its board representation.
Move generation:
  • Magic Bitboards, as implemented by maksimKorzh, for generation of sliding piece attacks. This gives a perft time (without bulk counting) of 282 ms at depth 5 from the initial position.
Move ordering:
  • MVV/LVA for the initial sorting of captures, with SEE applied only when they are fetched from the move list (a sketch follows just below).
  • Killer moves.
  • Countermoves are implemented, but they are currently disabled due to an elo loss and a crash bug.
  • History heuristic.
With this ordering, roughly 95% of beta cutoffs occur on the first move searched.
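
To make this capture-ordering scheme concrete, below is a minimal, generic sketch of MVV/LVA scoring with SEE deferred until a capture is fetched. It is not Loki's actual code: the Move struct, the piece values and the see() callback are placeholders for the example.

Code: Select all

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical move representation for the sketch; Loki's real move type differs.
enum Piece { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };
struct Move { Piece attacker, victim; int score; };

// MVV/LVA: prefer the most valuable victim, break ties with the least valuable attacker.
int mvv_lva(const Move& m) {
	static const int value[6] = { 100, 320, 330, 500, 900, 20000 };
	return value[m.victim] * 100 - value[m.attacker];
}

// Captures are scored cheaply up front...
void score_captures(std::vector<Move>& captures) {
	for (Move& m : captures)
		m.score = mvv_lva(m);
}

// ...and the (more expensive) SEE only runs when a capture is actually fetched.
// Assumes index < captures.size(); see() returns the expected material gain.
Move fetch_next_capture(std::vector<Move>& captures, std::size_t index, int (*see)(const Move&)) {
	auto best = std::max_element(captures.begin() + index, captures.end(),
		[](const Move& a, const Move& b) { return a.score < b.score; });
	std::iter_swap(captures.begin() + index, best);
	if (see(captures[index]) < 0)
		captures[index].score = -1; // flag as a losing capture, to be searched later
	return captures[index];
}
The point of deferring SEE is that many captures never get fetched before a cutoff occurs, so the expensive exchange evaluation is only paid for moves that are actually searched.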

Search:
  • Minimax with alpha beta pruning in a negamax framework (fail-hard).
  • Lazy SMP supporting up to 8 threads.
  • A two-bucket transposition table, configurable from 1 MB to 1000 MB. The first bucket uses a depth- and age-preferred replacement scheme, and the second is always-replace (a sketch follows this list).
  • Iterative deepening.
  • Mate distance pruning.
  • Adaptive null move pruning.
  • Enhanced futility pruning.
  • Reverse futility pruning.
  • Razoring.
  • Quiescence search. Here, moves with SEE < 0 are pruned.
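
As a rough illustration of the two-bucket replacement scheme mentioned in the list above, here is a generic sketch with a made-up entry layout (not Loki's actual table):

Code: Select all

#include <cstdint>

// Hypothetical entry and slot layout, for illustration only.
struct TTEntry {
	std::uint64_t key = 0;   // Zobrist key (or the upper part of it)
	std::int16_t  score = 0;
	std::uint8_t  depth = 0;
	std::uint8_t  age = 0;   // search generation in which the entry was stored
};

// One table slot holds two entries: a depth/age-preferred bucket and an
// always-replace bucket.
struct TTSlot {
	TTEntry depth_preferred;
	TTEntry always_replace;
};

void store(TTSlot& slot, const TTEntry& e, std::uint8_t current_age) {
	TTEntry& dp = slot.depth_preferred;
	// Overwrite the first bucket if it is empty, comes from an older search,
	// or the new entry was searched at least as deeply; otherwise fall back
	// to the always-replace bucket.
	if (dp.key == 0 || dp.age != current_age || e.depth >= dp.depth)
		dp = e;
	else
		slot.always_replace = e;
}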
Evaluation:
  • Material and (simple) material imbalances.
  • Piece-square tables.
  • Pawn structure and passed pawns.
  • Space in the middlegame.
  • Safe mobility evaluation of pieces.
  • King safety evaluation.
  • Evaluation of pieces. This is disabled since all tests of it have shown a decrease in elo, but I will hopefully make it work some day.
  • Tapered eval.
I have implemented a rather simple SPSA/Texel tuning algorithm to optimize all evaluation terms. It is multithreaded and usually converges (given relatively good starting values) in under 300 iterations (~1 second per iteration). I hope to one day determine the hyper-parameters of the algorithm automatically, since it is quite a hassle to always have to choose the step size and perturbation size for each variable.
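
For readers unfamiliar with Texel-style tuning, the core idea is to minimize the mean squared error between a sigmoid of the static evaluation and the actual game results. Below is a minimal, single-threaded sketch of the plain local-search variant, not Loki's multithreaded SPSA implementation; the Sample type and the linear evaluate() are placeholders for the example.

Code: Select all

#include <cmath>
#include <cstddef>
#include <vector>

// A "sample" is a position (reduced here to a feature vector) labelled with
// the game result: 1.0 for a white win, 0.5 for a draw, 0.0 for a black win.
struct Sample {
	std::vector<double> features;
	double result;
};

// Placeholder evaluation: a linear combination of features and parameters.
double evaluate(const Sample& s, const std::vector<double>& params) {
	double eval = 0.0;
	for (std::size_t i = 0; i < params.size() && i < s.features.size(); ++i)
		eval += params[i] * s.features[i];
	return eval;
}

// Map a centipawn score to an expected game result; K is fitted to the data set.
double sigmoid(double eval, double K = 1.13) {
	return 1.0 / (1.0 + std::pow(10.0, -K * eval / 400.0));
}

// Mean squared error over the data set: the quantity the tuner minimizes.
double error(const std::vector<Sample>& data, const std::vector<double>& params) {
	double e = 0.0;
	for (const Sample& s : data) {
		const double diff = s.result - sigmoid(evaluate(s, params));
		e += diff * diff;
	}
	return e / double(data.size());
}

// One pass of the classic local search: nudge each parameter up or down by
// one step and keep the change only if it lowers the error.
void tune_pass(const std::vector<Sample>& data, std::vector<double>& params, double step = 1.0) {
	double best = error(data, params);
	for (double& p : params) {
		p += step;
		double e = error(data, params);
		if (e < best) { best = e; continue; }
		p -= 2.0 * step;
		e = error(data, params);
		if (e < best) { best = e; continue; }
		p += step; // neither direction improved: restore the original value
	}
}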

Future improvements:
This list is not exhaustive, and a good deal of features and optimizations will probably be added along the way, but the main improvements I want are the following:
  • Staged move generation, which I am working on right now.
  • Late move reductions.
  • Late move pruning.
  • Better time management. At the moment, unless told otherwise, Loki divides the remaining time by 30, the assumed number of moves left in the game (a tiny sketch follows this list). This makes Loki use far too much time in the opening and then suffer from time trouble in the late middlegame and endgame.
  • A working piece activity evaluation.
  • If Loki reaches a decent strength, I will look into some machine learning for the evaluation. This is probably pretty far in the future, but seeing the success Stockfish has had with NNUE, it seems like something to try out. However, I do not want to use Stockfish's implementation directly, since I think it would be far too much external code considering the enormous role evaluation plays in deciding the engine's strength and style.
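
For reference, the current scheme from the time-management bullet boils down to something like the following (a hypothetical helper, not Loki's actual code; movestogo is the UCI field that would override the default):

Code: Select all

// Divide the remaining time evenly over an assumed 30 remaining moves,
// unless the GUI supplied a "movestogo" value via UCI.
int allocate_time_ms(int time_left_ms, int movestogo = 0) {
	const int moves_left = (movestogo > 0) ? movestogo : 30;
	return time_left_ms / moves_left;
}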
Loki 1.2.0 has been tested at a rating of 1820 on the CCRL Blitz list, but a lot has changed since then. My own tests now estimate Loki 3.0 at around 2510 elo, which is hopefully also what CCRL will find :)

Link to the github repository: https://github.com/BimmerBass/Loki/tree/master

I remember reading a post on the forum when I had just started work on Copper, in which someone warned about how addictive chess programming is. I didn't think much of it back then, but to the surprise of probably no one here, he was right. And look at me now: an "nps-junkie" always anxiously looking for more node-count reductions :D
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
mvanthoor
Posts: 1784
Joined: Wed Jul 03, 2019 4:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Progress on Loki

Post by mvanthoor »

Good luck :)

Even though many chess engines have been written, it's just cool to have your own. (If you like chess and are a programmer, that is.) A chess engine is never done, so it's a project that can run for years and years, and which you can work on one feature at a time, if you have the time and the motivation.

Chess programming can be addictive, if only to see your engine become stronger and stronger. It's a bit like teaching a pupil to play chess, and see him/her go from a beginner to a super-grandmaster :)

Your path was similar to mine. I implemented my first chess engine in Borland Pascal in the 90's, using only some library books. That engine was great at playing mostly (but not always) legal chess between 2 and 4 ply deep, if it didn't crash somewhere along the way. I never got it playing to any useful degree; being a teenager without internet and with only some books written in the 80's to learn from, I don't think that's a surprise. I still maintained the wish to someday have a *good* chess engine (but not written in C or C++, because there are so many already), and when Rust hit Edition 2018, I decided to start on one.

Rustic is now at 1810 in the CCRL Blitz list.

It'll take quite some time before I'll reach 2500+ Elo though, because I'm extensively testing and documenting each feature I add (and I only work on this in the weekends, and sometimes an hour or two in the evening).

I'll certainly take a look at your engine's tuner, because after Killers, History, Aspiration Windows and PVS, that's the next thing on the list, and I have _no_ idea how to start with that.
Author of Rustic, an engine written in Rust.
Releases | Code | Docs | Progress | CCRL
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Progress on Loki

Post by niel5946 »

mvanthoor wrote: Sat Apr 17, 2021 2:37 am Chess programming can be addictive, if only to see your engine become stronger and stronger. It's a bit like teaching a pupil to play chess, and see him/her go from a beginner to a super-grandmaster :)
Apart from seeing your engine become stronger, chess programming is also addictive in the sense that there are nearly always bugs or values that need refinement. Additionally, I love the feeling of changing some seemingly minor feature and, even though it doesn't get used very often, still getting a significant elo gain. For example, at the moment I am working on staged move generation, and just by changing a ">=" to ">", the strength went from -38 to nearly equal (relative to version 3.0) :D
mvanthoor wrote: Sat Apr 17, 2021 2:37 am I'll certainly take a look at your engine's tuner, because after Killers, History, Aspiration Windows and PVS, that's the next thing on the list, and I have _no_ idea how to start with that.
I hope you will find it helpful! One of the primary goals when making it was for it to be readable and easy to integrate into other engines. Therefore, I have tried to make it as engine-agnostic as possible :)
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Progress on Loki

Post by maksimKorzh »

re: Magic Bitboards, as implemented by maksimKorzh, for generation of sliding piece attacks.
- thanks for the credit))) But just to be precise, this is not my implementation but a magic bitboard generator written by Tord Romstad.
see the source here: https://www.chessprogramming.org/Looking_for_Magics
I just adapted that code for slider attack generation and tried to make it as clear and as didactic as possible)

re: Loki
- Just had a look at your code and I have to say I love it, even though I consider C++ too overwhelming (hence I don't like it).
You write simple code, but in a very solid and clean way.
The VICE influence is also beneficial to me, so reading through it was a real pleasure)

Keep up the good work!
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Progress on Loki

Post by niel5946 »

maksimKorzh wrote: Sat Apr 17, 2021 6:37 pm re: Magic Bitboards, as implemented by maksimKorzh, for generation of sliding piece attacks.
- thanks for the credit))) But just to be precise, this is not my implementation but a magic bitboard generator written by Tord Romstad.
see the source here: https://www.chessprogramming.org/Looking_for_Magics
I just adapted that code for slider attack generation and tried to make it as clear and as didactic as possible)
I didn't know that, actually. I just saw your YouTube video and thought the implementation looked easier than many others I had tried previously. The next time I change the README.md file, I will also credit Tord Romstad. But considering that I, as said, found it in your YouTube video, you still deserve part of the credit IMO :)
maksimKorzh wrote: Sat Apr 17, 2021 6:37 pm re: Loki
- Just had a look at your code and I have to say I love it, even though I consider C++ too overwhelming (hence I don't like it).
You write simple code, but in a very solid and clean way.
The VICE influence is also beneficial to me, so reading through it was a real pleasure)
Thank you very much for the kind words :D . I am happy to hear that my code is clean, since I myself find it quite messy and disorganized...

Speaking of VICE influence: another point I would like to add to the to-do list is to reduce (maybe even remove) the amount of code inspired by or copied from VICE. This is especially true for the contents of the misc.h and misc.cpp files, and for the UCI implementation. I was happy to have a point of inspiration when I started, but the more I've worked on Loki, the more I have come to appreciate writing code that I can - at least partly - call my own.
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Progress on Loki

Post by niel5946 »

As previously stated, I am working on staged move generation these days. My goal is to get it working, implement LMR and LMP, and then - depending on the elo change - make a new release.

My implementation is primarily inspired by Stockfish and Ethereal, but I have also consulted the source code of Laser a little (a structural sketch of the resulting state machine follows the list below):
1. When in search, I instantiate an object of a MoveGenerator class that takes the board, the move ordering heuristics and the hash move as inputs to its constructor.
2. If a hash move has been given, it is assumed to be legal (FIXME: this should probably get a pseudo-legality test...) and is returned before any other moves are generated or scored.
3. If no hash move has been given, or it has already been searched, all pseudo-legal moves are generated (since I am using magic bitboards, it is not the move generation itself that is slow but rather the scoring) and the captures are scored using MVV/LVA.
4. When a capture is fetched, a static exchange evaluation is performed if it is HxL; if the SEE is < 0, the capture is flagged to be searched later. If we find a sufficient capture, we return it; otherwise we go on to the quiet moves.
5. After the good or equal captures, all quiet moves are generated and scored using killers and history (I am still experimenting with getting countermoves to work). If the quiescence-search flag has been set, the quiet scoring and searching stages are of course skipped.
6. Quiet moves are returned without any further analysis.
7. When there are no more quiets available, the captures that were deemed losing by SEE are searched.
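
To illustrate the staged approach, the generator can be written as a small state machine that only generates and scores moves when the search actually asks for them. The following is a simplified, hypothetical sketch of the stages listed above, not Loki's real class: the Move type is a placeholder, the generation/scoring calls are only indicated as comments, and the actual MoveGenerator also takes the board and ordering heuristics as constructor arguments.

Code: Select all

#include <algorithm>
#include <cstddef>
#include <vector>

struct Move { int data = 0; int score = 0; }; // placeholder move type

enum class Stage { TT_MOVE, CAPTURES, QUIETS, LOSING_CAPTURES, DONE };

class MoveGenerator {
public:
	MoveGenerator(Move ttMove, bool qsearch)
		: ttMove_(ttMove), qsearch_(qsearch),
		  stage_(ttMove.data ? Stage::TT_MOVE : Stage::CAPTURES) {}

	// Hand the search one move at a time; return false when exhausted.
	bool next(Move& out) {
		switch (stage_) {
		case Stage::TT_MOVE:
			stage_ = Stage::CAPTURES;
			out = ttMove_; // returned before anything is generated or scored
			return true;   // (a full version would also skip it if it reappears later)

		case Stage::CAPTURES:
			if (captures_.empty() && idx_ == 0) {
				// generate pseudo-legal captures and score them with MVV/LVA here
			}
			while (idx_ < captures_.size()) {
				Move m = pick_best(captures_, idx_++);
				// run SEE here; captures that appear to lose material are deferred
				if (m.score < 0) { losing_.push_back(m); continue; }
				out = m;
				return true;
			}
			stage_ = qsearch_ ? Stage::DONE : Stage::QUIETS;
			idx_ = 0;
			return next(out);

		case Stage::QUIETS:
			if (quiets_.empty() && idx_ == 0) {
				// generate quiet moves and score them with killers/history here
			}
			if (idx_ < quiets_.size()) { out = pick_best(quiets_, idx_++); return true; }
			stage_ = Stage::LOSING_CAPTURES;
			idx_ = 0;
			return next(out);

		case Stage::LOSING_CAPTURES:
			if (idx_ < losing_.size()) { out = losing_[idx_++]; return true; }
			stage_ = Stage::DONE;
			return false;

		case Stage::DONE:
			return false;
		}
		return false;
	}

private:
	// Selection-sort step: move the best-scored remaining move to position i.
	static Move pick_best(std::vector<Move>& list, std::size_t i) {
		auto best = std::max_element(list.begin() + i, list.end(),
			[](const Move& a, const Move& b) { return a.score < b.score; });
		std::iter_swap(list.begin() + i, best);
		return list[i];
	}

	Move ttMove_;
	bool qsearch_;
	Stage stage_;
	std::vector<Move> captures_, quiets_, losing_;
	std::size_t idx_ = 0;
};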

Yesterday, I ran a quick tournament between the version in development and version 3.0. Unfortunately, it showed no elo gain, for which I think there are two reasons:
1. In order to safely assume a legal hash move when running a multithreaded search, I changed the transposition table to use lockless hashing again (which had been removed in version 3.0; a generic sketch of the technique follows this list). This makes individual TT entries larger and thus results in fewer of them, which could be a source of error. I will experiment with an equal number of entries in the two versions, which should hopefully rule this out (although a larger table in the dev version risks making probing slower). If a tournament with equal numbers of entries shows a positive elo change, I will try making the entries smaller and hope this gives better results.
2. The implementation might be too slow. I have tried profiling it, but didn't really get any useful results. To get a more precise profile, I will have to use this implementation for move generation in perft.
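
For context, the usual lockless-hashing technique (the XOR scheme described by Hyatt and Mann; this is a generic sketch, not Loki's code) stores the key XORed with the data, so a write torn by another thread is detected at probe time rather than being prevented by a lock:

Code: Select all

#include <atomic>
#include <cstdint>

// Generic sketch of XOR-based lockless hashing; the entry layout is hypothetical.
struct TTEntry {
	std::atomic<std::uint64_t> key_xor_data{0};
	std::atomic<std::uint64_t> data{0};       // packed move/score/depth/flag
};

void store(TTEntry& e, std::uint64_t key, std::uint64_t data) {
	e.data.store(data, std::memory_order_relaxed);
	e.key_xor_data.store(key ^ data, std::memory_order_relaxed);
}

bool probe(const TTEntry& e, std::uint64_t key, std::uint64_t& data_out) {
	const std::uint64_t d   = e.data.load(std::memory_order_relaxed);
	const std::uint64_t kxd = e.key_xor_data.load(std::memory_order_relaxed);
	// If another thread tore the write, key ^ data no longer matches and the
	// probe is treated as a miss instead of returning corrupt data.
	if ((kxd ^ d) != key)
		return false;
	data_out = d;
	return true;
}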

Regarding the last point about perft: I have already had perft use this implementation to generate moves, but I reverted the change since I thought it wouldn't be useful. I used it for perft because I wanted to make sure it generated the right number of moves in all positions (neither repeating nor skipping moves).
It seems I will have to experiment with the staged move generation further :)

On another note: I would like to add a point to the to-do list: a tuner for the parameters in the search function. I am comfortable with the mathematical optimization algorithms, but I can't seem to think of a way of measuring the error between two values of the same variable in search. Maybe using self-play? Or using the number of correct guesses in a test suite like WAC as a sort of fitness function? I don't know yet...
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Progress on Loki

Post by niel5946 »

I just tried to create a method for determining the pseudo-legality of a move, in the hope of using a separate stage for the killer moves. Unfortunately, it didn't really work out, and Loki lost ~27 elo.

I guess engines are even more sensitive to move generation speed than I previously thought... I think it's necessary for me to re-think this whole staged move generation implementation :(

New to-do: This is quite far in the future, but some day it would be nice to port the tuner to Python. That way, the chess engine itself, and nothing more, would be in C++, and I would interact with it using other programs/scripts. Probably a cleaner way to do it IMO.
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Progress on Loki

Post by niel5946 »

While not working on the staged move generation, I have decided to create a new branch for LMR and LMP. My goal is to gain 100-200 elo points with these features and then merge the branch into master again.

Yesterday, I started with a very simple implementation from (https://web.archive.org/web/20150212051 ... m/lmr.html), which is the following:

Code: Select all

// Step 13. Principal variation search
			
new_depth = depth - 1 + extensions;

if (moves_searched == 0) { // Always search the first move at full depth with an open window
	score = -alphabeta(ss, new_depth, -beta, -alpha, true, &line);
}

else {
				
	// Step 13A. Late move reductions. If we haven't raised alpha yet, we're probably in an ALL-node,
	//	so we'll reduce the search depth and do a full-depth re-search if the score is (surprisingly) above alpha.
	if (moves_searched >= lmr_limit && depth > lmr_depth && !is_tactical && !is_pv && !root_node && extensions == 0) {
		int R = 1;

		score = -alphabeta(ss, depth - 1 - R, -(alpha + 1), -alpha, true, &line);
	}
	else { // Trick to search with regular depth in case LMR is not used.
		score = alpha + 1;
	}

	if (score > alpha) {
		score = -alphabeta(ss, new_depth, -(alpha + 1), -alpha, true, &line);

		if (score > alpha && score < beta) {
			score = -alphabeta(ss, new_depth, -beta, -alpha, true, &line);
		}
	}
}
This resulted in a big initial elo gain of around 68 points :D .
I have also tried to increase the reductions for moves with bad history:

Code: Select all

// Increase reduction for moves with history < 0 (~5 elo)
if (ss->stats.history[(ss->pos->side_to_move == WHITE) ? BLACK : WHITE][fromSq][toSq] < 0) {
	R += 1;
}
And now I am trying to decrease it for killers.

I have some other things I want to try:
  • Increase reduction for moves with SEE < 0
  • An idea from futility pruning: If the static evaluation is below alpha then, depending on the margin, increase reductions.
  • Increase reduction if we're not improving our static evaluation across plies (inspired by Ethereal)
  • Decrease depth and amount of moves searched before LMR kicks in.
  • If the transposition table retrieval indicated an ALL-node, but the depth wasn't sufficient for a cutoff, increase reduction.
  • Decrease reduction for moves with good history (idea from Stockfish).
  • Increase initial reduction. I already have a logarithmic function of two variables (depth, moveCount) which I've fitted to match other engines, but it didn't work out in the past. This is probably the last one I'll try.
I hope that the above methods will further increase the LMR elo gain. I will only begin working on LMP once I am satisfied with the elo gain from LMR.
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Progress on Loki

Post by niel5946 »

Alright, after testing some different ideas (especially for initial reduction values, which is still just one ply), my current LMR implementation inside PVS is:

Code: Select all


// Step 13. Principal variation search
			
new_depth = depth - 1 + extensions;

if (moves_searched == 0) { // Always search the first move at full depth with an open window
	score = -alphabeta(ss, new_depth, -beta, -alpha, true, &line);
}

else {
				
	// Step 13A. Late move reductions (~89 elo). If we haven't raised alpha yet, we're probably in an ALL-node,
	//	so we'll reduce the search depth and do a full-depth re-search if the score is (surprisingly) above alpha.
	if (moves_searched >= lmr_limit && depth > lmr_depth && !is_tactical && !is_pv && !root_node && extensions == 0) {
		int R = 1; // For now, we use an initial reduction of one ply only.

		// Increase reduction for moves with history < 0 (~5 elo)
		if (ss->stats.history[(ss->pos->side_to_move == WHITE) ? BLACK : WHITE][fromSq][toSq] < 0) {
			R += 1;
		}

		// Decrease reduction if the move is a killer (~4 elo)
		if (move == ss->stats.killers[ss->pos->ply - 1][0] || move == ss->stats.killers[ss->pos->ply - 1][1]) {
			R -= 1;
		}

		// Increase reduction for captures with SEE < 0. Decrease otherwise (~12 elo)
		if (capture) {
			if (moves[m]->score < 0) {
				R += 1;
			}
			else {
				R -= 2;
			}
		}

		// If the TT probe returned an ALL-entry, increase the reduction. (~25 elo)
		if (ttHit && entry->flag == ttFlag::ALPHA && ttScore <= alpha) {
			R += 1;
		}

		// Decrease reduction for moves with good history (~28 elo)
		if (ss->stats.history[(ss->pos->side_to_move == WHITE) ? BLACK : WHITE][fromSq][toSq] > std::min(200, (depth * depth) / 2)) {
			R -= 1;
		}

		// If the static evaluation is below alpha based on a futility margin, increase reduction (~23 elo)
		if (ss->stats.static_eval[ss->pos->ply - 1] + futility_margin(depth, improving) <= alpha) {
			R += futility_margin(depth, improving) / 220;
		}

		int d = std::max(1, std::min(depth - 1, depth - 1 - R));

		score = -alphabeta(ss, d, -(alpha + 1), -alpha, true, &line);
	}
	/* LMR end */
	else {
		score = alpha + 1; // Trick to make full-depth search if LMR doesn't kick in.
	}

	if (score > alpha) {
		score = -alphabeta(ss, new_depth, -(alpha + 1), -alpha, true, &line);

		if (score > alpha && score < beta) {
			score = -alphabeta(ss, new_depth, -beta, -alpha, true, &line);
		}
	}

}
Here, lmr_limit is 4 moves, lmr_depth is 2, and the is_tactical flag is given by:

Code: Select all

bool is_tactical = gives_check || in_check || SPECIAL(move) == PROMOTION || SPECIAL(move) == ENPASSANT;
Below are the results of a 400-game test I have run. Loki_dev has the above LMR implementation and plays against Loki 3.0:

Code: Select all

Score of Loki_dev vs Loki 3.0.0 64-bit: 181 - 81 - 138 [0.625]
...      Loki_dev playing White: 97 - 41 - 62  [0.640] 200
...      Loki_dev playing Black: 84 - 40 - 76  [0.610] 200
...      White vs Black: 137 - 125 - 138  [0.515] 400
Elo difference: 88.7 +/- 28.0, LOS: 100.0 %, DrawRatio: 34.5 %
400 of 400 games finished.
This means LMR gives an elo gain of ~89. I am satisfied with this for now, but I'll probably try to increase it later.

On to late move pruning...
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |
niel5946
Posts: 174
Joined: Thu Nov 26, 2020 10:06 am
Full name: Niels Abildskov

Re: Progress on Loki

Post by niel5946 »

I just experimented with late move pruning, but unfortunately I didn't see any elo gain :( . Luckily, there was no loss either.
My implementation is the following:

Code: Select all

// Step 13. Late move pruning (~19 elo). If we are near the horizon and the highest ordered moves haven't caused a beta cutoff, we're probably not 
//	going to get one, so we'll (depending on the depth) prune quiet moves
if (moves_searched > late_move_pruning(depth)
	&& !(is_tactical || capture)
	&& ss->pos->non_pawn_material()
	&& !is_pv
	&& !root_node
	&& extensions == 0) {
	
	ss->pos->undo_move();
	continue;
}
And the late_move_pruning function:

Code: Select all

// Here we use the formula 0.5*d^2 + 1.5*d + 1
int late_move_pruning(int d) {
	return (int)std::round(0.5 * std::pow(double(d), 2) + 1.5 * double(d) + 1.0);
}
I have tried making it more aggressive, but it only decreases the playing strength, and making it less aggressive doesn't do anything. I will have to look at it again.
Additionally, I tested the complete LMR and LMP implementation:

Code: Select all

Score of Loki_dev vs Loki 3.0.0 64-bit: 175 - 78 - 147 [0.621]
...      Loki_dev playing White: 97 - 38 - 65  [0.647] 200
...      Loki_dev playing Black: 78 - 40 - 82  [0.595] 200
...      White vs Black: 137 - 116 - 147  [0.526] 400
Elo difference: 86.0 +/- 27.4, LOS: 100.0 %, DrawRatio: 36.8 %
400 of 400 games finished.
I would ideally like to just get above 100 elo points, but it doesn't seem feasible... Perhaps I'll experiment with making the initial reduction in LMR dependent on depth and move ordering, and then focus on decreasing it depending on how good the move in question seems.
Author of Loki, a C++ work in progress.
Code | Releases | Progress Log |