frosch wrote:
bob wrote:
frosch wrote:
first of all: why are you interested in the node count of another program?
isn't this count just useful for comparing different hardware?
further: you claim a node is a node and that this was defined very clearly.
is this really true? just an example: what about transpositions in the search? the same position occurs again in the search and the hash entry has to be used. is it an additional node or not?
No. A node is produced when the search updates a position by making a move and recursively calling itself to deal with that new position. On a graph, nodes are the things connected by arcs. Arcs are represented by moves here. That is the definition given in any good AI textbook dealing with minimax and alpha/beta search. If you are at position A, and make a move that leads to position B, B is counted as a new node. Now that you are at B, you can make moves that lead to other nodes, or you might get a hash hit and not search any further here, producing no further sub-nodes from this node...
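In code terms, the textbook convention described here looks roughly like the following. This is only a minimal C sketch of the idea; Position, Move, and helpers such as probe_hash, make_move, and evaluate are hypothetical names, not Crafty's actual routines.

Code:
#include <stdbool.h>

typedef struct Position Position;
typedef int Move;
typedef struct { Move moves[256]; int count; } MoveList;

/* hypothetical engine internals */
bool probe_hash(Position *pos, int depth, int alpha, int beta, int *score);
int  evaluate(Position *pos);
void generate_moves(Position *pos, MoveList *list);
void make_move(Position *pos, Move mv);
void unmake_move(Position *pos, Move mv);

long nodes = 0;   /* the counter under discussion */

int search(Position *pos, int alpha, int beta, int depth) {
    nodes++;      /* this position is a node: it was reached by making a move */

    int score;
    if (probe_hash(pos, depth, alpha, beta, &score))
        return score;          /* hash hit: the node was counted, but it
                                  produces no further sub-nodes */
    if (depth == 0)
        return evaluate(pos);

    MoveList list;
    generate_moves(pos, &list);
    for (int i = 0; i < list.count; i++) {
        make_move(pos, list.moves[i]);                   /* arc from A to B */
        score = -search(pos, -beta, -alpha, depth - 1);  /* B is a new node */
        unmake_move(pos, list.moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}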
even if a node were clearly defined, a node can be treated differently. not every node has to be evaluated by the same algorithm. some nodes can be treated in a cheaper way than others. would it make sense to neglect this? no! it would be impossible to compare the node count on different hardware/positions and it would lose its only usefulness.
The reason is vocabulary and communication. If we use different definitions of "pi" then how can we ever communicate mathematically? If we use different definitions of nodes, then we can't discuss tree searching and pruning ideas since we won't have a standard set of terms to use.
There is some wiggle room in the definition, as you could count only legal nodes, or you could count all of them, which might differ between two programs. But taking a normal node count and dividing it by 10 is _not_ a reasonable definition of anything, regardless of the nonsensical explanations offered.
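The two defensible variants allowed for here differ only in where the counter is incremented relative to the legality check; dividing by 10 corresponds to neither. A standalone C sketch, with the same caveat that every name is hypothetical:

Code:
#include <stdbool.h>

typedef struct Position Position;
typedef int Move;

extern long nodes;

/* hypothetical: makes the move and returns false if it was illegal
   (king left in check) and had to be retracted */
extern bool make_move_legal(Position *pos, Move mv);
extern void unmake_move(Position *pos, Move mv);
extern int  search(Position *pos, int alpha, int beta, int depth);

int visit_child(Position *pos, Move mv, int alpha, int beta, int depth) {
    /* variant 1: "count all" -- every attempted move produces a node,
       even one that is immediately retracted as illegal */
    nodes++;
    if (!make_move_legal(pos, mv))
        return alpha;
    /* variant 2 would increment nodes here instead,
       counting only legal positions */
    int score = -search(pos, -beta, -alpha, depth - 1);
    unmake_move(pos, mv);
    return score;
}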
maybe it would help you to answer my first question: why are you interested in the n/s of any program but crafty?
the definition of pi is clear to everyone and there's no way to define it differently. it's not the same with n/s, as you yourself partly concede (legal/illegal moves).
I think it's less useful for the programmer to define it your way than to take into account how useful a node is or how long the program spends at a node.
if we agree that completely different node counts are possible, there's no reason to bother about the n/s of other programs at all.
rajlich might even divide by 10 purely for cosmetic reasons!?
Because the question is not important: I am _not_ interested in the NPS of Rybka or any other program. If you just look back at my first post, someone asked the question "why would someone obfuscate their node counts and search depths?" and I answered that. I don't need to be interested in Calculus to answer the question "what is the first derivative of X^2?" If I respond "2X", does that deserve the follow-up "why are you interested in the first derivative of X^2?"
As far as the rest of your comments, I will again simply reply with "common vocabulary". With the classic definition of "node", from which follows a consistent definition of NPS, one can compare two different programs and draw reasonable conclusions. We've been doing this for 20 years now. You can find numerous references to NPS values for Fritz vs (say) Hiarcs. And when one program has 10x the NPS of another, the second has to be doing far more work per node than the first. Almost certainly time spent in the evaluation in the case of Hiarcs, or time spent analyzing search extension possibilities in the case of "the King".
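As a back-of-the-envelope illustration of why the shared definition matters (the figures below are invented, not measurements of Fritz or Hiarcs):

Code:
#include <stdio.h>

int main(void) {
    double nps_fast = 1000000.0;   /* invented: a "fast" searcher, 1M nps   */
    double nps_slow =  100000.0;   /* invented: a "slow" searcher, 100K nps */

    /* with a common definition of "node", the ratio is meaningful:
       the slower program spends ~10x the CPU time on each node */
    printf("us per node: %.1f vs %.1f (%.0fx more work per node)\n",
           1e6 / nps_fast, 1e6 / nps_slow, nps_fast / nps_slow);
    return 0;
}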
And if (big IF) I were to buy that, then how would one justify reporting a mangled depth value as well? Depth has _always_ been defined as the number of full-width plies you search before you start becoming more selective. Not the average number of full-width plies. Not the maximum number of full-width plies. But the minimum number. Yes, it is very hard to compare this value between programs when there are so many variables such as extensions, reductions, and raw forward pruning.
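That convention for depth would show up in an iterative-deepening driver roughly like this (again a hypothetical C sketch, not any particular engine's code):

Code:
#define INF 32000

typedef struct Position Position;

extern long nodes;
extern int  search(Position *pos, int alpha, int beta, int depth);
extern void report(int depth, int score, long nodes);

void iterate(Position *pos, int max_depth) {
    for (int depth = 1; depth <= max_depth; depth++) {
        int score = search(pos, -INF, INF, depth);
        /* "depth" is the minimum number of full-width plies: extensions
           may push some lines deeper and reductions may cut others
           shorter, but neither changes the number reported here */
        report(depth, score, nodes);
    }
}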
And since _nobody_ else fudges both depth and node counts, even though there are admittedly some possible variances in how programs behave, one would have to wonder: "everybody else does it this way, why would one programmer be different?" And I specifically and precisely answered that question; whether you like the answer or not was not a consideration when framing it.