Need a little help with improving search speed

Stan Arts
Posts: 179
Joined: Fri Feb 14, 2014 10:53 pm
Location: the Netherlands

Re: Need a little help with improving search speed

Post by Stan Arts »

So we are searching to depth 8.

At the root I have a score for the first move of +0.50

We search the second move.
My opponent, one ply deeper than the root, searches each of his replies (8-1=7 ply searches). His first two replies come back at +0.80 and +0.70, but his third reply scores only +0.30.
My opponent likes that +0.30 more than the initial +0.50, but I would never allow this continuation: seen from his side, the +0.30 is 0.20 "above" beta, and it already proves my 2nd move is worse than my first.
We can safely stop searching here, after the opponent's 3rd reply. A beta cutoff.
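
To put numbers on it, here is a tiny self-contained C sketch of exactly this situation (just an illustration, not code from any real engine; the reply scores are hard-coded in centipawns instead of coming from real 7-ply searches):

#include <stdio.h>

#define INF 100000

/* The opponent's three replies, scored from his own side in centipawns:
   -80, -70, -30, i.e. +0.80, +0.70 and +0.30 from our side. */
static const int replies[3] = { -80, -70, -30 };

/* The opponent's node, searched negamax-style with an alpha-beta window. */
static int search_opponent(int alpha, int beta)
{
    for (int i = 0; i < 3; i++) {
        int score = replies[i];   /* a real engine would recurse 7 plies here */
        printf("opponent reply %d scores %d for him\n", i + 1, score);
        if (score >= beta) {
            printf("%d >= beta (%d): beta cutoff, later replies are never searched\n",
                   score, beta);
            return beta;
        }
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

int main(void)
{
    int alpha = 50;               /* +0.50 from the first root move */
    int beta  = INF;

    /* Search the second root move; the window is passed down negated. */
    int score = -search_opponent(-beta, -alpha);
    printf("second root move is no better than %d, it cannot beat the first move\n",
           score);
    return 0;
}

Running it, the cutoff triggers on the third reply: the root's +0.50 becomes, negated, a beta of -0.50 at the opponent's node, his -0.30 (our +0.30) is already at least that good for him, and the rest of his replies are never searched.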
duncan
Posts: 12038
Joined: Mon Jul 07, 2008 10:50 pm

Re: Need a little help with improving search speed

Post by duncan »

Thecoolman5 wrote:Hi everybody. I have been developing a rather simple chess engine for a while now and I have hit a wall. I recently finished my searching algorithm and as of right now, it is in its most bald state meaning it searches every possible move and looks for what it can kill and what can be killed. Now, I am looking for ways to improve the search speed. Besides all the basics like improving my move generation and stuff, how should I go about this? I have read about alpha beta pruning but it makes absolutely no sense. How can you know if a tree is safe to be pruned when you don't search down it? Any insight would be great. Thanks!
https://web.archive.org/web/20071030084 ... habeta.htm


Alpha-beta and the bag example

Fortunately, there is a way to reduce the branching factor. Furthermore, it's perfectly safe, and in fact there is absolutely no down-side -- it's a pure win. It relies on the idea that if you know that you are guaranteed to get a score of X if you follow one path, you can stop examining any other particular path once it becomes obvious that you will score less than X -- you don't have to figure out exactly how much less than X you will score.

This may seem trivial, but the result is a reduction in the effective branching factor, from 35 to as low as 6 for chess.

I'll try an example. Let's say that your worst enemy has a bunch of bags sitting in front of him. The bags are there because he lost a bet, and now he has to give you something, but the rules about what he has to give you are very obtuse.

Each bag contains a few things. You are going to get one of the things. You get to pick which bag the thing will come out of, but he gets to pick what you get out of that bag. You want to get your thing quickly and leave, because you have better things to do than sit there digging through bags while your worst enemy glares at you.

You are going to look through one bag at a time, and you are going to look through the items in each bag one item at a time.

Obviously, what is going to happen is that your enemy is going to give you the worst thing in the bag you choose (he knows you well, so both of you will agree upon what the worst thing is), so your goal is to pick the bag that has the nicest worst thing.

It's easy to see how you'd apply min-max to this problem. You are the maximizing player, and you are going to take the best bag. Your enemy is the minimizing player. He's going to try to make sure that the best bag is as bad as possible, by giving you the worst thing out of it. All you have to do to use min-max is take the bag that has the best worst thing in it, if that makes any sense.

Min-max gets you a provably correct result, assuming that you are able to evaluate the items in the bags accurately. Accuracy of evaluation isn't important here, and it doesn't have anything to do with how either min-max or alpha-beta works. Let's stipulate that you can evaluate them correctly.

The problem with min-max, as has been described previously, is that it is inefficient. You have to look at everything in every bag, which takes a lot of time.
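
A minimal, self-contained C sketch of the min-max version, with made-up numeric scores for the items that turn up in the story below (sandwich 1, car keys 100, twenty-dollar bill 20, six-pack 5, rotten fish -10). Note that it has to look at every item in every bag:

#include <stdio.h>

/* Made-up item scores; higher is better for you. */
static const int bag1[] = { 1, 100 };          /* sandwich, car keys           */
static const int bag2[] = { 20, 5, -10, 100 }; /* twenty, pop, fish, more keys */

static const int *bags[]  = { bag1, bag2 };
static const int  sizes[] = { 2, 4 };

int main(void)
{
    int best_bag = -1, best_value = -1000000;

    for (int b = 0; b < 2; b++) {
        /* Minimizing player: your enemy hands you the worst item in the bag. */
        int worst = 1000000;
        for (int i = 0; i < sizes[b]; i++) {
            printf("looking at bag %d, item %d (score %d)\n",
                   b + 1, i + 1, bags[b][i]);
            if (bags[b][i] < worst)
                worst = bags[b][i];
        }
        /* Maximizing player: keep the bag whose worst item is best. */
        if (worst > best_value) {
            best_value = worst;
            best_bag = b;
        }
    }
    printf("min-max: take bag %d (expected score %d) -- every item was examined\n",
           best_bag + 1, best_value);
    return 0;
}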

But how can we do this more efficiently than min-max?

Let's start with the first bag, and look at everything, and score the bag. Let's say that this bag contained a peanut butter sandwich and the keys to a new car. You figure that the sandwich is worth less (and since your opponent knows you, he will agree), so if you take this bag you will get a sandwich. The fact that there are also car keys in the bag doesn't matter, because you know that you won't get the car.

Now you start in on the next bag. The process you use is a little different than min-max this time. You look at items one at a time, and compare them against the best thing you know you can get (the sandwich). As long as items are better than the sandwich, you handle this as you do in min-max -- you just keep track of the worst thing in this bag. If you run out of things in this bag, without finding anything worse than a sandwich, this will become your bag of choice, and the worst thing in it will become your expected result.

But if you find another sandwich in this bag, or something you think is worse than the sandwich, you discard this bag without looking at any more items. The reason is that you know that if you take this bag, the absolute best you can do is no better than the sandwich (and might be worse).

Let's say that the first item in the bag is a twenty-dollar bill, which is better than the sandwich. If everything else in the bag is no worse than this, that's the item that your opponent will be forced to give you if you take this bag, and this becomes your bag of choice.

The next thing in the bag is a six-pack of pop. You think this is better than the sandwich, but it's worse than the twenty, so this bag is still your bag of choice, but you expect to get a six-pack of pop if you take this bag. The next thing is a rotten fish. This is worse than the sandwich. You say "no thanks," hand the bag back, and forget about this bag.

It doesn't matter what else is in the bag. There could be the keys to another car, which doesn't matter, since you are going to get the fish. There could be something much worse than the fish (I leave this to your imagination). This doesn't matter either, since the fish is already bad enough, and you know you can do better by taking the bag with the sandwich.
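
Here is the same bag search in C with the cutoff just described (same made-up scores as in the min-max sketch above). The only change is the comparison against the best bag found so far -- the value conventionally called alpha -- which lets the second bag be handed back the moment the rotten fish appears:

#include <stdio.h>

/* Same made-up item scores as before; higher is better for you. */
static const int bag1[] = { 1, 100 };          /* sandwich, car keys           */
static const int bag2[] = { 20, 5, -10, 100 }; /* twenty, pop, fish, more keys */

static const int *bags[]  = { bag1, bag2 };
static const int  sizes[] = { 2, 4 };

int main(void)
{
    int best_bag = -1, alpha = -1000000;       /* best "worst item" so far */

    for (int b = 0; b < 2; b++) {
        int worst = 1000000;
        for (int i = 0; i < sizes[b]; i++) {
            printf("looking at bag %d, item %d (score %d)\n",
                   b + 1, i + 1, bags[b][i]);
            if (bags[b][i] < worst)
                worst = bags[b][i];
            if (worst <= alpha) {
                /* This bag can do no better than the bag we already have:
                   hand it back without examining the rest of it. */
                printf("cutoff: bag %d can't beat the best bag, rest skipped\n",
                       b + 1);
                break;
            }
        }
        if (worst > alpha) {
            alpha = worst;
            best_bag = b;
        }
    }
    printf("alpha-beta: still take bag %d (expected score %d), but bag 2's last\n"
           "item was never looked at\n", best_bag + 1, alpha);
    return 0;
}

The answer is the same as with plain min-max, only with less work. In a real chess search the part that gets skipped is not a single item but an entire subtree, which is where the drop in effective branching factor mentioned above comes from.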
jdart
Posts: 4361
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Need a little help with improving search speed

Post by jdart »

You can take a look at a simple program like MSCP (https://marcelk.net/mscp/) and see how this is implemented. (MSCP is not a great model to follow in general, but it may help you understand some things.)

I think it is implied but maybe not spelled out by the other posters: if you have a very inferior algorithm (no alpha-beta), then optimizing parts of your search like movegen should not be a high priority. Implementing alpha-beta will give you a large performance gain, because even if your program is slow it will search the move tree much more efficiently by skipping unnecessary work.

--Jon
Henk
Posts: 7210
Joined: Mon May 27, 2013 10:31 am

Re: Need a little help with improving search speed

Post by Henk »

If you ask why LMR is valid, you can read all kinds of creative nonsense explanations.