Martin T wrote:
Indeed so.

Dr. Wael Deeb wrote:
And that is an absolute fact that most users don't understand, unfortunately, or they don't want to; even worse, they consider themselves cheated by Vasik.
If there were a single, foolproof, universal method of calculating nodes per second, then every engine author would have to embrace this method and incorporate it into their code.
I don't think people understand that if Vas wanted Rybka 3 to display 10 times the number of nodes calculated per second, he could simply multiply the number by 10, and then (probably) we wouldn't be having this discussion.
There isn't an organization or third-party jury that decides what the nodes-per-second number will be.
It's the engine author alone, and he chooses (yes, chooses) the number he thinks fits the overall scheme of things.
So if Vas feels that the nodes per second for Rybka 3 should be 5 times lower than Rybka 2's, so be it. It makes no difference whatsoever.
So now... just forget about the nodes per second.
1. "node" is _precisely_ defined in tree searching. Making a move leads to a new position, which is called a node. If you have a position with 20 legal moves, and you search exactly one ply (no extensions) deep.you search 20 nodes. no more, no less. One can argue about the "illegal moves" that are made, but if you "make illegal moves" then counting them is reasonable, and they represent a tiny fraction of the total nodes anyway and don't change the significance of that number.
2. "second" is a precisely defined unit of time. Standard everywhere
3. "per" as in a per b, means that for time interval B, there are "a" things happening.
So how is that _not_ precise? "depth" is far more vague, because of extensions, reductions, and even pure forward pruning. But nodes per second has been measured for 40 years, and until this case came up everyone was computing it the same way. Even in Belle, Ken connected an integrating timer to a pin that toggled each time a complete hardware cycle (make/select move/etc.) was done, which was precisely NPS. The only exception I know of is Deep Blue and its predecessors, because they simply didn't take the time to count the nodes searched; they estimated by taking the theoretical maximum NPS per chip and multiplying by the effective duty cycle (how often the chips were busy) to come up with a number that matched what everyone else considered NPS.
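A sketch of that long-standing measurement itself, assuming the global node counter from the sketch above and a hypothetical search_root() entry point that runs the search: count nodes, read a clock before and after, divide.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    extern uint64_t nodes;         /* incremented throughout the search */
    extern void search_root(void); /* hypothetical: runs the search to completion */

    int main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        search_root();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double seconds = (double)(t1.tv_sec - t0.tv_sec)
                       + (t1.tv_nsec - t0.tv_nsec) / 1e9;

        printf("%llu nodes / %.2f s = %.0f NPS\n",
               (unsigned long long)nodes, seconds, (double)nodes / seconds);
        return 0;
    }

Deep Blue's exception amounted to replacing the measured count with theoretical-max-NPS-per-chip times the duty cycle: an estimate rather than a measurement.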
We have one program that is trying to hide things. That doesn't make it impossible for the rest of us to produce a real NPS value that is comparable. Comparing NPS values reveals details: how long it takes to expand a node, which could be attributed to evaluation cost or to complexities in the search brought on by pruning, etc.
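To illustrate that last point (with invented figures; only the 5x ratio echoes the claim quoted above), the reciprocal of NPS is the average time spent expanding one node:

    #include <stdio.h>

    int main(void)
    {
        /* invented figures; only the 5x ratio comes from the quoted claim */
        double nps_fast = 2500000.0;
        double nps_slow =  500000.0;

        printf("fast engine: %.2f microseconds per node\n", 1e6 / nps_fast); /* 0.40 */
        printf("slow engine: %.2f microseconds per node\n", 1e6 / nps_slow); /* 2.00 */
        return 0;
    }

A 5x drop in NPS thus means 5x more time per node, and the interesting question is where that time goes: evaluation, or extra work in the search.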