Rybka 4 just around the corner it seems

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Gian-Carlo Pascutto
Posts: 1260
Joined: Sat Dec 13, 2008 7:00 pm

Re: Rybka 4 just around the corner it seems

Post by Gian-Carlo Pascutto »

oreopoulos wrote: So from the answer, you choose to quote something irrelevant!
It's very relevant, and your misunderstanding of it is exactly why you don't understand what I'm saying.

IDeA is not just a tool which analyzes a queue of positions.

IDeA is a tool which builds a tree. It builds this tree by minimaxing previous analysis and expanding based on those results.

This is the same thing a chess engine does. I'm claiming the parallelism in the chess engines is likely to be at least as efficient as IDeA's.
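The expand-and-minimax loop described above can be sketched roughly as follows. This is a hypothetical illustration, not Aquarium's actual API; all class and function names are made up.

```python
# A hypothetical sketch of the loop described: pick a promising leaf,
# analyze it with the engine, expand it, and minimax the result back up --
# structurally the same loop a chess engine's own search runs.

class Node:
    def __init__(self, position, parent=None, score=0):
        self.position = position
        self.parent = parent
        self.score = score        # from the side-to-move's point of view
        self.children = []

def select_promising_leaf(node):
    # follow the current best line down to a leaf (negamax convention:
    # a child's score is negated from the parent's point of view)
    while node.children:
        node = max(node.children, key=lambda c: -c.score)
    return node

def backup_minimax(node):
    # propagate a new result toward the root
    while node is not None:
        if node.children:
            node.score = max(-c.score for c in node.children)
        node = node.parent

def build_tree(root, analyze, expand, iterations):
    # the next position to analyze only exists after earlier backups --
    # this is the dependency the whole thread is arguing about
    for _ in range(iterations):
        leaf = select_promising_leaf(root)
        leaf.score = analyze(leaf.position)
        leaf.children = [Node(p, parent=leaf, score=s)
                         for p, s in expand(leaf.position)]
        backup_minimax(leaf)
```

The point of the sketch is the last loop: each iteration's choice of leaf depends on the backups done by previous iterations, just as in an engine's search.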
bigo

Re: Rybka 4 just around the corner it seems

Post by bigo »

Dr.Wael Deeb wrote:
Henrik Dinesen wrote:The cost of Rybka+Aquarium was, if I remember correctly, about the same last time, if not identical. It's possible to check, but I'm too lazy.

However, since it's the price for the package, not the singular product, I can't imagine that Rybka 4 UCI will be priced much differently than Rybka 3, meaning around 100$. The standalone Aquarium 3 costs 19$ now, which I believe is reduced from 25$. This means there's a little discount buying the package Deep Rybka Aquarium 2010, which is set to 119$.

For the thickheaded among us: the point is that the preorder prices we see are the price for Rybka and Aquarium, not just Rybka 4.

And for the online Rybka: Just another item on the shelf. If you don't want it, don't buy it!

And yes, Vas has been dragged through a lot here recently, mostly bad, even though he surely wasn't the one who initiated all that.
And it's a pity that people can't have fun with an engine without being called fanboys or part of some religion! The "religion" here, as far as I'm concerned, is computer chess.
Hi,
After reading your post, I've decided to put an end to my comments regarding this issue.... It's only getting worse, and I am afraid that I am about to lose friends whose friendship was gained over many years, just because our opinions and visions related to this topic are different....
Chilling out regards,
Dr.D
I think chilling out is a great decision, Doc. I think it is pretty obvious to everyone that you are prejudiced against Rybka. When was the last time you said anything positive? Actually, I think the online Rybka is a good idea, and if I had the money I would jump on it in a minute. I see it as further proof that Vas is providing great service. OK, he let us down with the Rybka 3+ deal; personally, I think this is trivial. Vas is not perfect, the world is not perfect, things don't always go according to plan. I think he has more than made up for it. For instance, he is giving away the previous Rybka program for free to everyone. He has an excellent product and he is in business; I don't see anything wrong with him trying to get the maximum dollar for his product, especially since many out there have gotten Rybka 3 for free through pirating. I think the man has great character in how he ignores unmerited attacks. I guess this is why many of the commercial programmers stay out of this place. Smart people!
oreopoulos
Posts: 110
Joined: Fri Apr 25, 2008 10:56 pm

Re: Rybka 4 just around the corner it seems

Post by oreopoulos »

Gian-Carlo Pascutto wrote:
oreopoulos wrote: But this is not the claim. The claim is

a) Rybka analyzing 100 positions on 4 cores each position till depth 17 takes X seconds
b) Rybka analyzing 100 positions on 4x 1-cores (splitting work in 4) each position till depth 17 takes less than X seconds
You do not know the positions in advance. That's the point of IDeA: it builds the tree dynamically and picks the positions from it. It does much more than just analyze a list of positions; it produces new ones based on the analysis already done.

Because of this, for what you claim you don't need IDeA, and obviously the people who want to sell IDeA don't make that claim in their marketing, because it hasn't got anything to do with IDeA in the first place.
Come on. You don't need to know the positions, just that they are unrelated (not sequential). You know that fixed depth has to do with MP scaling. It is obvious that splitting the job is much faster, and the proof is in the number of positions examined that I see on my computer.

I wonder if you can create a set of positions for which the MP engine would come close to the split engines' performance.

Let me get your point. You are saying that the previous claim is false?
Please give me any set of unrelated positions for which the single MP engine will be faster than the batch of engines.

The only way the MP engine can match is if it's a sequential series of positions, so it can take advantage of the hash.

Do we agree on at least this?
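The two schemes being argued over can be sketched like this, with a toy scoring function standing in for a real engine search; all names are illustrative, and a thread pool stands in for separate engine processes.

```python
# Scheme (a): one MP engine takes the positions one at a time.
# Scheme (b): the batch is divided among independent single-core engines.
# Unrelated positions are embarrassingly parallel, so scheme (b) pays no
# SMP overhead per position.

from multiprocessing.dummy import Pool  # thread pool stands in for separate engine processes

def analyze_to_depth(position):
    # placeholder for "search this position to a fixed depth"
    return sum(ord(c) for c in position)

def split_across_engines(positions, workers=4):
    # scheme (b): divide the batch among `workers` independent 1-core engines
    with Pool(workers) as pool:
        return pool.map(analyze_to_depth, positions)

def one_mp_engine(positions):
    # scheme (a): a single engine takes the positions one at a time; its
    # 4 cores yield less than 4x per position, so the batch finishes later
    return [analyze_to_depth(p) for p in positions]
```

Both schemes produce identical results; the whole disagreement is about wall-clock time, which the toy function cannot measure.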
User avatar
Graham Banks
Posts: 44913
Joined: Sun Feb 26, 2006 10:52 am
Location: Auckland, NZ

Re: Rybka 4 just around the corner it seems

Post by Graham Banks »

Gian-Carlo Pascutto wrote: Renting seems like a good solution to having the program hacked and re-released for free as Ippolit. A very understandable decision by Vasik in order to safeguard his livelihood. And more evidence to users that pirating engines will just come back to hurt them in the long run.
Understandable, but still disappointing. The actions of a few have wrecked things for the rest of us.
gbanksnz at gmail.com
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Rybka 4 just around the corner it seems

Post by bob »

Spock wrote:
Gian-Carlo Pascutto wrote: If running Rybka on 4 cores gives you a speedup of 3, then they are claiming the new IDeA will give you a speedup of 4 instead. So that's one core more.

Of course the claim is totally bogus. If it were true, then Rybka should switch its SMP algorithm to IDeA by default, and the claim would be false again.
I don't see why it is bogus. Running on 4 cores gives you an effective speed-up of 3 due to SMP "losses", but 4 x 1 = 4; i.e., running 4 instances of a single core means there is no SMP loss. So I see the logic in that statement. Whether it gives a better result or not is another story.
There is _NO_ way (emphasis intended) to have zero overhead. With a _very_ good SMP algorithm, using much more regular trees than we see today, Cray Blitz got quite close to 4.0 on 4 CPUs, but at 8 we were down, and when we got to 16, we were at roughly the same point some of us see today, which is 11-12x.

You are right, there is no SMP loss. There is a _HUGE_ cluster loss however. I don't know of a single programmer that would choose a 4-way cluster over a 4-core box. The idea is so far beyond silly, it takes sunlight 6 months to get from silly to that idea.

I have never seen so much disinformation from a single source. But once you notice the trend (bogus NPS, bogus total nodes, bogus PVs, bogus depth, bogus scores) then "bogus parallel search results" can't be too unexpected.

The statement made is simply a twisted and contorted piece of logic. If you have lots of things to analyze, yes you are better off analyzing each different thing with just one core. You analyze 4 at a time with no overhead. What is new about that? But if you want to analyze _one_ thing, which is where parallel search is important in the first place, this idea is worthless. It isn't something new. It is old. _very_ old. And misleading, of course.

If you can create a circumstance where you have 4 absolutely unrelated positions, and they are not derived from your own calculations (which would introduce a dependency, since you first must derive the positions before you can analyze them independently), then you might see something useful here. Or if you want to analyze 4 independent games, you might have something useful. Most are not interested in that, however, and the minute positions are dependent on previous positions, the search overhead comes into play. You can either do extra work, or you can wait until dependencies are resolved and accumulate idle time. Either way, it won't be 4x.
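The objection above can be put in numbers with Amdahl's law: if some fraction of the total work (deriving the next positions from earlier results) is inherently serial, the achievable speedup is bounded no matter how many independent engines are running. The serial fractions used below are made-up illustrations, not measurements of IDeA.

```python
# Amdahl's law: best-case speedup when a fraction of the work is serial.

def amdahl_speedup(serial_fraction, workers):
    """Upper bound on speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)
```

For example, with 10% serial work, 4 workers give at best about 3.08x rather than 4x; with 25% serial work, only about 2.29x, and the gap widens as more workers are added.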
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Rybka 4 just around the corner it seems

Post by bob »

Leto wrote:
Gian-Carlo Pascutto wrote:
dadij wrote: Either I don't understand you or you don't understand how IDeA works. It's true that IDeA decides about the next positions to analyze based on the results of previous analysis. However, IDeA only does that at the end of each stage when it finds the next batch of positions to analyze.
In this case, the scalability would have a hard limit on the number of batched-up positions, and the efficiency of the method you describe is way lower than that of a search algorithm that can switch away from poor branches as soon as one of the analysis threads identifies them.

So you confirmed my point exactly: either IDeA is a very poor algorithm and it scales well (whereby you should take into account that scaling well starting from something crappy is useless), or it is a good algorithm and it scales imperfectly.

Nobody has managed to make a good tree search algorithm scale perfectly, and claiming IDeA does it is preposterous.

If IDeA could play games, you could play a match with Rybka parallelism (IDeA + Rybka 4 core) versus IDeA parallelism (IDeA + 4 x Rybka 1 core) and check the results.
I would bet that Vasik has got his parallelism working more efficiently than the Convekta guys.

So my claim is specifically: if Convekta claims IDeA scales perfectly, this means that either IDeA is crap or they're wrong. There's decades of research in parallelism to support that conclusion. TANSTAAFL
If Dadi is correct about IDeA analysing positions more quickly in an hour with 4 instances of Rybka then there's no need to do a match.
Sure there is. Anyone can analyze completely independent positions in parallel, with a perfect speedup. But analyzing a single game this way is another issue completely.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Rybka 4 just around the corner it seems

Post by bob »

oreopoulos wrote:
Gian-Carlo Pascutto wrote:
oreopoulos wrote: But this is not the claim. The claim is

a) Rybka analyzing 100 positions on 4 cores each position till depth 17 takes X seconds
b) Rybka analyzing 100 positions on 4x 1-cores (splitting work in 4) each position till depth 17 takes less than X seconds
You do not know the positions in advance. That's the point of IDeA: it builds the tree dynamically and picks the positions from it. It does much more than just analyze a list of positions; it produces new ones based on the analysis already done.

Because of this, for what you claim you don't need IDeA, and obviously the people who want to sell IDeA don't make that claim in their marketing, because it hasn't got anything to do with IDeA in the first place.
Come on. You don't need to know the positions, just that they are unrelated (not sequential). You know that fixed depth has to do with MP scaling. It is obvious that splitting the job is much faster, and the proof is in the number of positions examined that I see on my computer.
Hand-waving is not convincing. _WHERE_ do these positions come from? Created by the program as it analyzes? There is an order dependency there that kills this claimed performance. If I don't "know" the positions, how on earth can I search them independently and in parallel? If I have to create them, how do I create them in parallel, as otherwise there is even more serial processing with zero speedup?

I wonder if you can create a set of positions for which the MP engine would come close to the split engines' performance.

Let me get your point. You are saying that the previous claim is false?
Please give me any set of unrelated positions for which the single MP engine will be faster than the batch of engines.

The only way the MP engine can match is if it's a sequential series of positions, so it can take advantage of the hash.

Do we agree on at least this?
lmader
Posts: 154
Joined: Fri Mar 10, 2006 1:20 am
Location: Sonora, Mexico

Re: Rybka 4 just around the corner it seems

Post by lmader »

bob wrote:The idea is so far beyond silly, it takes sunlight 6 months to get from silly to that idea.
I loved that.
"The foundation of morality is to have done, once for all, with lying; to give up pretending to believe that for which there is no evidence, and repeating unintelligible propositions about things beyond the possibilities of knowledge." - T. H. Huxley
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Rybka 4 just around the corner it seems

Post by bob »

oreopoulos wrote:
Gian-Carlo Pascutto wrote:
oreopoulos wrote: But this is not the claim. The claim is

a) Rybka analyzing 100 positions on 4 cores each position till depth 17 takes X seconds
b) Rybka analyzing 100 positions on 4x 1-cores (splitting work in 4) each position till depth 17 takes less than X seconds
You do not know the positions in advance. That's the point of IDeA: it builds the tree dynamically and picks the positions from it. It does much more than just analyze a list of positions; it produces new ones based on the analysis already done.

Because of this, for what you claim you don't need IDeA, and obviously the people who want to sell IDeA don't make that claim in their marketing, because it hasn't got anything to do with IDeA in the first place.
Come on. You don't need to know the positions, just that they are unrelated (not sequential). You know that fixed depth has to do with MP scaling. It is obvious that splitting the job is much faster, and the proof is in the number of positions examined that I see on my computer.

I wonder if you can create a set of positions for which the MP engine would come close to the split engines' performance.

Let me get your point. You are saying that the previous claim is false?
Please give me any set of unrelated positions for which the single MP engine will be faster than the batch of engines.

The only way the MP engine can match is if it's a sequential series of positions, so it can take advantage of the hash.

Do we agree on at least this?
The problem is, you are leaving out a _lot_ of "middle-work". Where does that large set of independent positions come from? If you feed them to the program (say, take 300 WAC positions and say "gimme the results for these"), then I agree, it will be faster to search them all to the same depth, independently. But if you are analyzing a _game_, how do you create those positions without first doing analysis to discover the interesting positions, and then analyzing those to discover more interesting positions, etc.? That is not going to scale perfectly. It is going to scale poorly.

So if you can magically produce the positions in zero time, yes, this works. I simply don't see how a genie is going to pop out of a magic lamp and make that happen, realistically. I'm much less interested in what something "can do" and care much more about what it "will do". Those are completely different things in the world of parallel search. It is easy to define constraints so that a game tree can be searched with perfect speedup; see my dissertation. The minor detail that those trees are not useful for actually playing a game of chess is quite significant...
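The ordering constraint in the argument above can be sketched as follows: if analysis proceeds in stages, the positions of stage k+1 only exist after stage k has been minimaxed, so engines sit idle whenever a stage holds fewer positions than there are workers. The stage sizes and unit cost below are invented for illustration.

```python
# Wall-clock model of staged analysis: each stage is a barrier, so a
# partial final wave of work inside a stage leaves some engines idle.

import math

def staged_wall_time(stage_sizes, workers, cost_per_position=1.0):
    """Wall time when each stage must complete before the next can start."""
    total = 0.0
    for n in stage_sizes:
        # n independent positions spread over `workers` engines;
        # ceil() accounts for the partial last wave
        total += math.ceil(n / workers) * cost_per_position
    return total
```

With stages of 1, 4, and 16 positions on 4 engines, the parallel run takes 6 units against 21 serially, a speedup of 3.5 rather than 4, because the early stages cannot occupy all the engines.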
User avatar
Eelco de Groot
Posts: 4681
Joined: Sun Mar 12, 2006 2:40 am
Full name:   Eelco de Groot

Re: Rybka 4 just around the corner it seems

Post by Eelco de Groot »

I have never seen so much disinformation from a single source. But once you notice the trend (bogus NPS, bogus total nodes, bogus PVs, bogus depth, bogus scores) then "bogus parallel search results" can't be too unexpected.
I think this was a misunderstanding?

I think Ray and Gian-Carlo are discussing IDeA, which is just an analysis scheme in the Aquarium interface. It has nothing to do with the Rybka Cluster and is not programmed by Vasik Rajlich; as far as I know it is a totally independent project from Convekta. I don't know anything about IDeA, I must admit, but it possibly takes advantage of the fact that if there is a cluster of positions that is related but not directly in one move sequence, it pays off to analyze these crucial positions first before doing more minimaxing of the root position. You can skip an awful lot of nonessential variations if you already have a good idea of the crucial positions. This is just a very crude guess at what is going on; Gian-Carlo has looked at this, I suppose, and knows what he is talking about. I have not read the thread further. Sorry for interrupting the discussion :)

Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan