hgm wrote:The most common argument from proponents of limited hardware was that it would keep amateurs from being scared away from the tournament. It is not likely that more than two or three participants could afford clusters of 1000+ CPUs, and it is clear that a tournament with only two or three participants would not be viable.
What is the problem if not all participants can afford the same hardware? Not all participants can afford to take more than a week off or pay the fee to travel to remote locations, either. Not all participants can afford to work full time as professionals on their engine so they have a chance of catching up with Rybka. Not all amateurs can afford to have a semi-professional opening book team preparing book kills. Not all amateurs have the same programming skills.
Why arbitrarily focus on the hardware?
The delusion that limiting the hardware will improve participation has already been disproven this year, by the record-low number of participants ever, despite the tournament being held in a not-too-remote location with high interest in chess. Given that it has already been proven wrong, why keep repeating such a verifiably false assertion? Do you think that repeating it will suddenly make it true?
Why not focus on the real reason: the length and cost of participating in this tournament do not match the attention that can be expected from it.
For commercial programs, playing on large clusters is not very attractive, as their main customer base does not have such clusters.
So, the rules are there to protect commercial interests?
So it was argued that if a limit is to be imposed, at the current level of technology a quad would be a more logical choice than an octal. Quads are now more or less standard in the consumer market.
This is the one thing I agree with, and have pointed out several times before the tournament. Not that anybody listened, anyway...
On the other side of this issue it was pointed out that allowing unlimited cluster size would divert programmers' effort into cluster programming, when their time would have been better spent improving the single-CPU performance of their engine. Most ordinary users would not benefit at all from better scaling of a cluster engine.
You can't know this unless you can predict the future. You think the future is 4 or 8 cores? With 800-core chips already on the market? Do you think the future is shared-memory multiprocessing? With memory already being a bottleneck?
I am not sure why you think a name change would be required, or what the extra M stands for. A cluster is not a computer, and a championship for humans usually does not allow teams to participate, only individuals.
A cluster is just a kind of multi-core system with the components in separate physical cases, and a fairly slow interconnect. By the same reasoning, you should reject anything with more than one core, because those CPUs are teaming up to produce the best move. Or maybe all superscalar CPUs. Those darned execution units collaborating...