Questions on SMP search
Moderator: Ras
-
Daniel Shawul
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Questions on SMP search
What do you mean by storing everything? Selective search is used in both depth-first and best-first search. You cannot and should not store the whole tree, only what is relevant; the pruned part is left out. Saying "depth-first search" says nothing about selectivity: depth-first search can be selective or brute-force, and so can best-first search...
-
Daniel Shawul
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Questions on SMP search
UCT with 0 exploration is a best first search. Here is a google search of "UCT best first search" with many publications http://www.google.com/#hl=en&sugexp=ldy ... 51646d98a3
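To make the zero-exploration claim concrete, here is a minimal Python sketch of UCB1 child selection, with made-up move statistics (the numbers and the `ucb_select` helper are illustrative, not from any engine). With the exploration constant set to 0 the bonus term vanishes and the descent is purely greedy on the mean value, i.e. best-first:

```python
import math

def ucb_select(children, total_visits, c):
    """Pick the child maximizing mean value plus the UCB1 exploration bonus."""
    return max(children,
               key=lambda ch: ch["wins"] / ch["visits"]
                              + c * math.sqrt(math.log(total_visits) / ch["visits"]))

# Hypothetical root statistics for illustration only.
children = [
    {"move": "e2e4", "wins": 900, "visits": 1200},
    {"move": "d2d4", "wins": 1,   "visits": 2},
    {"move": "a1a2", "wins": 0,   "visits": 1},
]
total = sum(ch["visits"] for ch in children)

# With c = 0 the bonus vanishes: selection is purely greedy on the mean
# value, i.e. best-first descent toward the most promising node.
print(ucb_select(children, total, 0.0)["move"])  # e2e4
# With c > 0, rarely visited moves get a large bonus and are explored first.
print(ucb_select(children, total, 2.0)["move"])  # a1a2
```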
Please listen again. For best-first you have to store _everything_. I am not talking about UCT, minimax, alpha/beta, or anything else. The question was about best-first. It is completely unrelated to depth-first search as we use in chess programs.
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Questions on SMP search
I agree with the comment. Although it is a best-first search of a different type than what is normally envisioned...
Daniel Shawul wrote: UCT with 0 exploration is effectively a best first search. Anyway let's not divert away from the discussion. With huge branching factor games, the widening effect becomes vivid, best first or not. I get the same effect with depth_first + LMR. How do you think I can search depth=7 in a game with a branching factor of 3000? Please address that first and we will talk about best first search later.
I do not understand your reference to "widening". That comment makes no sense in terms of normal tree search vocabulary. Clearly with a BF of 3000, you have to significantly _narrow_ the search to get around the 3000^d or 2 * 3000 ^ (d/2) with alpha/beta. But the way you are using "widening" doesn't say much to me...
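A quick back-of-the-envelope calculation, assuming a uniform tree and the Knuth-Moore minimal-tree formula (leaf count b^ceil(d/2) + b^floor(d/2) - 1, which the 2 * 3000^(d/2) figure approximates), shows the scale of the narrowing problem:

```python
bf, d = 3000, 7

minimax_leaves = bf ** d  # full minimax tree
# Knuth/Moore: leaves of the minimal alpha-beta tree with perfect move ordering
minimal_leaves = bf ** ((d + 1) // 2) + bf ** (d // 2) - 1

print(f"minimax: {minimax_leaves:.2e}")                        # 2.19e+24
print(f"alpha-beta, perfect ordering: {minimal_leaves:.2e}")   # 8.10e+13
```

Even the perfectly ordered alpha-beta tree is around 10^13 leaves at depth 7, which is why heavy selectivity on top of alpha-beta is unavoidable at BF = 3000.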
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Questions on SMP search
With best first you have to store the entire tree searched so far, except for those few positions you can classify as won or lost. Otherwise, how will you ever have the "best node" to search. At some depth, all the "best moves" can appear to suck, which leaves some very old and unexplored move as the next thing to search. Common terminology is an "open list" containing nodes that have not been completed so far.
Daniel Shawul wrote: What do you mean by storing everything? Selective search is used in both depth first and best first search. You can not and should not store the whole tree but only what is relevant. The pruned part is left out. When you say depth first search, it doesn't say anything about selectivity. Depth first search can be selective/brute so is best first search...
As far as your latter comment, it is best to first pick up an AI book and look at the classic definition of "best-first search." Then you see why you don't actually "prune" anything, you just keep expanding the most promising nodes and let the others sit and collect dust, unless everything else slowly drops enough to look worse than some of those dust collectors...
If you throw things away, it can't be true best-first, because you can't know what will be best at some point in the future, so you just keep expanding the best nodes until you have to make a move.
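The open-list mechanics described above can be sketched as follows. This is a toy Python illustration with a made-up `expand`/`score` pair, not any particular engine's search; the point is that every frontier node stays on a priority queue, so a long-ignored "dust collector" can still surface as best later:

```python
import heapq
import itertools

def best_first(start, expand, score):
    """Classic best-first search: the open list keeps every unexpanded
    frontier node, so the globally most promising one is always picked next.
    Nothing is pruned; unpromising nodes just wait on the heap."""
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    open_list = [(-score(start), next(counter), start)]  # max-heap via negation
    order = []                   # expansion order, recorded for illustration
    while open_list:
        _, _, node = heapq.heappop(open_list)
        order.append(node)
        for child in expand(node):
            heapq.heappush(open_list, (-score(child), next(counter), child))
    return order

# Toy tree: nodes are strings, children append one letter, the score
# favors nodes containing 'a'. Depth is capped at 2 to keep it finite.
def expand(n):
    return [n + c for c in "ab"] if len(n) < 2 else []

print(best_first("", expand, lambda n: n.count("a")))
# ['', 'a', 'aa', 'ab', 'b', 'ba', 'bb'] -- the 'a' branch is exhausted
# before 'b' is ever expanded, yet 'b' was never thrown away.
```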
-
Daniel Shawul
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Questions on SMP search
With best-first search the tree expands very asymmetrically along lines of _best_ play, unlike breadth-first search, where all the siblings and their children have to be expanded to the same depth. Let's say, for instance, the first move is always best; then the expansion is along the left line. The other parts of the tree never get expanded and we never _store_ their siblings in any way or form. So by your definition these moves must be stored, but they are not.
bob wrote: With best first you have to store the entire tree searched so far, except for those few positions you can classify as won or lost. Otherwise, how will you ever have the "best node" to search. At some depth, all the "best moves" can appear to suck, which leaves some very old and unexplored move as the next thing to search. Common terminology is an "open list" containing nodes that have not been completed so far.
As far as your latter comment, it is best to first pick up an AI book and look at the classic definition of "best-first search." Then you see why you don't actually "prune" anything, you just keep expanding the most promising nodes and let the others sit and collect dust, unless everything else slowly drops enough to look worse than some of those dust collectors...
UCT never has an explicit prune in it; it just expands the tree very asymmetrically, implicitly pruning most of it. That is what is meant by pruning in this discussion.
An example search :
Code:
e2e4 1200 (searched to depth 15 for example with a high winning score)
d2d4 2
a1a2 1
b2b3 1
So "everything" in this context is the relevant (i.e. unpruned) part of the tree...
For example, if more exploration reveals that e2e4 is actually a bad move, then the other moves start to get expanded. So effectively the tree grows along the lines of play where it thinks it has a higher winning chance at that moment. It is dynamic, so it is not something rigid where you can't undo pruned moves.
bob wrote: If you throw things away, it can't be true best-first, because you can't know what will be best at some point in the future, so you just keep expanding the best nodes until you have to make a move.
-
Gian-Carlo Pascutto
- Posts: 1260
- Joined: Sat Dec 13, 2008 7:00 pm
Re: Questions on SMP search
Bob, I already pointed out that there is a hard, mathematical proof that many best-first algorithms, including those relevant to chess, search an identical tree to alpha-beta with a sufficiently large hashtable. So what you say above is already proven wrong in the strictest way possible.
bob wrote: Please listen again. For best-first you have to store _everything_. I am not talking about UCT, minimax, alpha/beta, or anything else. The question was about best-first. It is completely unrelated to depth-first search as we use in chess programs.
Even if you fail to grasp this, saying that UCT is not best-first is insane. It has the exact start-at-root, pick-best-node, expand-best-node, backtrack-result-up-to-root cycle that characterizes best-first algorithms.
And you know what? It can be rewritten as depth-first!
Wait, let's take another example. Proof-number-search. Another classical example of a best-first algorithm.
You know what? It's been rewritten as depth-first, too!
Notice a pattern here?
Again, this is already proven wrong. Chinook uses MTD(f), which is best-first (again, a hard, mathematical proof is in the thesis of Aske Plaat). Cilkchess also used MTD(f). All the strongest suicide and losers chess engines (which, btw, have a very NARROW tree): best-first.
bob wrote: Best-first is more human like, to be sure. But it has never approached depth-first in chess or checkers. Perhaps in GO or another game, who knows.
Your "argument" about storing everything is a complete straw man.
From one side, we can make the hashtable large enough (or use a normal std::map) and just store everything, which works until we run out of memory. From another side, *practical implementations* of best-first searches have to deal with the possibility of memory running out and will do garbage-collection. There is no difference between the two.
The distinction between best-first and depth-first is an artifact of days long gone when it was not realized yet that they were one and the same.
But it was ended in 1995. It's time to move on.
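For readers who have not seen it, here is a compact sketch of the MTD(f) driver mentioned above, run on a tiny hand-made tree. The tree, the scores, and the bound-storing table scheme are illustrative only, not taken from any engine: the point is that a sequence of zero-window alpha-beta calls, with a table that remembers proven bounds, converges on the minimax value.

```python
INF = float("inf")

# Toy game tree: leaves are scores, nested lists alternate max/min to move.
TREE = [[3, [5, 1]], [6, [7, 4]], [1, 2]]

tt = {}  # "hashtable": (node id, side to move) -> (lower bound, upper bound)

def alpha_beta(node, alpha, beta, maximizing):
    """Fail-soft alpha-beta with a bound-storing transposition table."""
    if isinstance(node, int):
        return node
    key = (id(node), maximizing)
    lo, hi = tt.get(key, (-INF, INF))
    if lo >= beta:
        return lo
    if hi <= alpha:
        return hi
    alpha, beta = max(alpha, lo), min(beta, hi)
    if maximizing:
        g, a = -INF, alpha
        for child in node:
            g = max(g, alpha_beta(child, a, beta, False))
            a = max(a, g)
            if g >= beta:
                break
    else:
        g, b = INF, beta
        for child in node:
            g = min(g, alpha_beta(child, alpha, b, True))
            b = min(b, g)
            if g <= alpha:
                break
    # Store what this search actually proved: a bound or an exact value.
    if g <= alpha:
        tt[key] = (lo, g)
    elif g >= beta:
        tt[key] = (g, hi)
    else:
        tt[key] = (g, g)
    return g

def mtdf(first_guess=0):
    """MTD(f): home in on the minimax value using zero-window searches only."""
    g, lower, upper = first_guess, -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alpha_beta(TREE, beta - 1, beta, True)
        if g < beta:
            upper = g
        else:
            lower = g
    return g

print(mtdf())  # 6, the minimax value of TREE
```

Each zero-window pass re-descends from the root through the best-looking line recorded in the table, which is exactly the best-first behavior Plaat's equivalence proof formalizes.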
-
Daniel Shawul
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Questions on SMP search
bob wrote: I agree with the comment. Although it is a best-first search of a different type than what is normally envisioned...
I do not understand your reference to "widening". That comment makes no sense in terms of normal tree search vocabulary. Clearly with a BF of 3000, you have to significantly _narrow_ the search to get around the 3000^d or 2 * 3000 ^ (d/2) with alpha/beta. but the way you are using "widening" doesn't say much to me...
Now that we got the "best-first" terminology out of the way, let's define what you mean by width W and the associated widening or narrowing. For me, the width W at a node is the number of moves explored at that particular node... Let us fix the depth so that we avoid possible confusion from that. For a game with a high BF = 3000, we want to search to a depth of 7 in an iterative widening manner (no iterative deepening). So let's say we consider only 1 move at all nodes first and finish that, then we increase it to 3 moves and finish that, etc...
While the EBF more or less remains constant from iteration to iteration in a purely iterative deepening search (i.e. aside from the odd/even effect), it does increase in an iterative widening search, confirming the fact that we are progressively increasing the EBF.
This is iterative widening, which starts from an artificially very low branching factor and progressively increases it, approaching the true BF asymptotically. You can calculate the EBF the same way it is done for iterative deepening.
iteration 1 -> 1
iteration 2 -> 3
iteration 3 -> 12
iteration 4 -> 64
iteration 5 -> 128
...
iteration n -> asymptotically approaches 3000
Code:
EBF = nodes at iteration N / nodes at iteration (N - 1)
Definition of branching factor from wiki
In computing, tree data structures, and game theory, the branching factor is the number of children at each node. If this value is not uniform, an average branching factor can be calculated.
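The EBF formula above can be checked numerically. A minimal sketch, assuming a perfectly uniform tree of depth 7 where every node searches exactly the iteration's width cap (real trees are nowhere near this regular, so the numbers are only illustrative):

```python
def tree_nodes(width, depth):
    """Total nodes in a uniform tree: 1 + w + w^2 + ... + w^depth."""
    return sum(width ** d for d in range(depth + 1))

widths = [1, 3, 12, 64, 128]  # the widening schedule above
counts = [tree_nodes(w, 7) for w in widths]

# EBF between successive widening passes = nodes(N) / nodes(N - 1);
# it keeps growing as the width cap approaches the true BF.
for w, prev, cur in zip(widths[1:], counts, counts[1:]):
    print(f"width {w:3d}: EBF = {cur / prev:.1f}")
```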
-
Gian-Carlo Pascutto
- Posts: 1260
- Joined: Sat Dec 13, 2008 7:00 pm
Re: Questions on SMP search
With depth-first, you have to store the entire tree searched so far, except for those few positions you can classify as too good or too bad (and hence are reduced heavily = low depth).
bob wrote: With best first you have to store the entire tree searched so far, except for those few positions you can classify as won or lost.
Otherwise, your move ordering sucks and you will search a much bigger tree than the minimal one.
bob wrote: Otherwise, how will you ever have the "best node" to search.
At some depth, all the "best moves" can appear to suck, which leaves some very shallow and under-explored move as the next thing to search to full depth. Common terminology is to call this a "fail-low". Because the open-list, err, hashtable will not have a lot of information in this case, the branching factor tends to suck.
bob wrote: At some depth, all the "best moves" can appear to suck, which leaves some very old and unexplored move as the next thing to search. Common terminology is an "open list" containing nodes that have not been completed so far.
As for your latter comment, it is best to pick up AI research papers written after your AI book and note the proofs of equivalence.
bob wrote: As far as your latter comment, it is best to first pick up an AI book and look at the classic definition of "best-first search."
Then you see you don't actually "prune" anything, you just keep searching good, non-reduced moves deeper and let the LMR-ed nodes sit and collect dust in the hashtable, unless the score suddenly drops below that of the LMR-ed nodes.
bob wrote: Then you see why you don't actually "prune" anything, you just keep expanding the most promising nodes and let the others sit and collect dust, unless everything else slowly drops enough to look worse than some of those dust collectors...
If you throw some things out of your hashtable, alpha-beta cannot work well, because you can't know the best move to search first in each position, and your branching factor starts to explode toward pure minimax.
bob wrote: If you throw things away, it can't be true best-first, because you can't know what will be best at some point in the future, so you just keep expanding the best nodes until you have to make a move.
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Questions on SMP search
OK, I will buy that description. However, one minor terminology issue: "width" has not traditionally been used to mean "number of moves searched from any node"; it has more traditionally meant "number of possible moves out of a node". If you whittle that "number of possible moves" down, then that is traditionally called forward pruning.
Daniel Shawul wrote: Now that we got the "best-first" terminology out of the way, lets define what you mean by width W and the associated widening or narrowing. For me width at a node W is the moves explored at that particular node... Let us fix the depth so that we avoid possible confusion because of that. For a game of high BF = 3000, we want to search to a depth of 7 in iterative widening manner (no iterative deepening). So lets say we consider only 1 move at all nodes first, we finish that, then we increase it to 3 moves and finish that etc...
bob wrote: I agree with the comment. Although it is a best-first search of a different type than what is normally envisioned...
I do not understand your reference to "widening". That comment makes no sense in terms of normal tree search vocabulary. Clearly with a BF of 3000, you have to significantly _narrow_ the search to get around the 3000^d or 2 * 3000 ^ (d/2) with alpha/beta. but the way you are using "widening" doesn't say much to me...
iteration 1 -> 1
iteration 2 -> 3
iteration 3 -> 12
iteration 4 -> 64
iteration 5 -> 128
...
iteration n -> asymptotically approaches 3000
For the record, I am not sure I would do the "widening" as you are doing it. I would likely do a "tapered widening" myself, so that I don't look at the same number of moves at every depth; I would likely reduce the number as ply goes up.
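One way to sketch this tapered idea (the `width_at_ply` helper, the schedule, and the taper constant are invented here purely for illustration) is to cap the per-node width as a decaying function of ply:

```python
def width_at_ply(base_width, ply, taper=0.6):
    """Tapered widening: search fewer moves per node as ply increases,
    but never fewer than one. 'taper' is an assumed tuning constant."""
    return max(1, int(base_width * taper ** ply))

# Width cap at each ply of a depth-8 search starting from a cap of 12 moves.
print([width_at_ply(12, p) for p in range(8)])  # [12, 7, 4, 2, 1, 1, 1, 1]
```

Near the root the search stays wide, while deep in the tree it narrows toward a single move, which is roughly the shape LMR-style reductions produce as well.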
That is _really_ mangling the use of EBF and doesn't match chess trees at all.
Daniel Shawul wrote: This is iterative widening which starts from an artificial very low branching factor and progressively increases it approaching the true BF asymptotically. You can calculate the EBF in the same way it is done for iterative deepening. While it more or less remains constant in the purely iterative deepening search from iteration to iteration (i.e aside from odd/even effect), it does increase in an iterative widening search confirming the fact that we are progressively increasing the EBF.
Code:
EBF = nodes at iteration N / nodes at iteration (N - 1)
That definition I agree with. But note "children" is a specific concept, not "children we plan on searching."
Daniel Shawul wrote: Definition of branching factor from wiki
In computing, tree data structures, and game theory, the branching factor is the number of children at each node. If this value is not uniform, an average branching factor can be calculated.
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Questions on SMP search
And that is a problematic definition. "Pruning" means to cut away, _permanently_. Best-first does _not_ do that. You might not expand a specific node for a long time, because it looks relatively bad. But if all the other nodes get expanded and their scores drop enough, eventually this node will be searched further.
Daniel Shawul wrote: With best first search the tree gets expanded very unsymmetrically along lines of _best_ play unlike the case of breadth first search where all the siblings and their children have to be expanded to the same depth. Lets say f.i the first move is always best, then the expansion is along the left line. The other parts of the tree never get expanded and we never _store_ their siblings in any way or form. So by your definition, these moves must be stored but they are not.
bob wrote: With best first you have to store the entire tree searched so far, except for those few positions you can classify as won or lost. Otherwise, how will you ever have the "best node" to search. At some depth, all the "best moves" can appear to suck, which leaves some very old and unexplored move as the next thing to search. Common terminology is an "open list" containing nodes that have not been completed so far.
As far as your latter comment, it is best to first pick up an AI book and look at the classic definition of "best-first search." Then you see why you don't actually "prune" anything, you just keep expanding the most promising nodes and let the others sit and collect dust, unless everything else slowly drops enough to look worse than some of those dust collectors...
UCT never has an explicit prune in it, it just expands the tree very unsymmetrically implicitly pruning most of it. That is what is meant by pruning in this discussion.
That is why this is not called "pruning". We have to remember that position, until we terminate the search, because it can _always_ work its way up to "best move" and then be searched...
Never, unless that first move drops like a rock. This is _not_ pruning. It is simply "best first search."
Daniel Shawul wrote: An example search: This could be a tree from UCT with zero exploration. Now the children of the moves except the first one are never expanded (which makes them effectively pruned).
Code:
e2e4 1200 (searched to depth 15 for example with a high winning score)
d2d4 2
a1a2 1
b2b3 1
There is no pruning going on here, so again, this is the wrong term.
Daniel Shawul wrote: Only the best gets expanded, unlike breadth first search, which would have required us to expand all the other moves to depth=15 as well.
So "everything" in this context is the relevant (i.e. unpruned) part of the tree...
And that statement won't cut it at all. It is _impossible_ to "unprune" something. Once it is pruned, it is gone for all time. Prune a limb off of a tree and then glue it back on later and see what happens. That's why reasonable terminology is required to discuss these things. When you prune a branch in a tree, you can _never_ come back and decide to search it.
Daniel Shawul wrote: For example if more exploration reveals e2e4 is actually a bad move, then the other moves start to get expanded. So effectively the tree grows along lines of play where it thinks it has a higher winning chance at that moment. It is dynamic so it is not something rigid where you can't undo pruned moves.
bob wrote: If you throw things away, it can't be true best-first, because you can't know what will be best at some point in the future, so you just keep expanding the best nodes until you have to make a move.