That paradox does not apply because of the sheer size of the landscape to be explored. In that "paradox", we evaluate the probability of a match, which is as high as 1/365. Here, the probabilities of a "match" are waaaay smaller. Imagine you have two reasonable moves per position, and you have 1000 positions. What is the probability of a 70% match by random chance? Negligible.

bob wrote: When you write something, you have to consider the audience. To someone who understands parallel programming, I can use the term "atomic lock" and it will be perfectly clear what I am talking about without any explanation. To a casual reader, it will mean something completely different. Similar to the name change from "nuclear magnetic resonance imaging" to "magnetic resonance imaging" to get rid of that "nuclear" part, where everyone assumed they were being exposed to atomic radiation when they were not.

Don wrote: Why are you picking numbers and talking about derivatives? The tool is not designed to determine what program is a derivative of some other program.

bob wrote: Simple. Someone is going to choose a number. Say 70%. If A matches B 70% of the time, it is likely a derivative.

Don wrote: In order to have a false positive you need context. All this utility does is count how many moves (out of approximately 8000) two programs play in common and return the percentage. How can that be a false positive? It will be whatever it will be for any two programs.

bob wrote: My only comment here is that this is likely going to run afoul of the "birthday paradox" frequently. Given enough programs, a new program will frequently choose the same moves as another program, "just because". The more samples, the greater the probability this will happen. Lots of false positives are not going to help a thing...

Because of the title of this thread: "similarity". To the typical reader, that will be interpreted as "if two programs are similar, one is likely a clone of the other", with the certainty of that statement going up as the percentage of matches goes up.
You have to think about the "general audience". I know that the numbers have little relevance to anything. But I've been doing this a long time. The typical reader here will "just assume..." and there we go...
This would only have meaning if the tool was designed to determine if programs are related, but it's not. The tester will be able to determine if 2 unrelated programs play a lot alike. That is not a false positive. It's only a false positive if the tester is rigged to say, "hey those programs are related!" or if someone like you comes along and tries to assign that meaning to it.
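To make that counting concrete, here is a rough Python sketch of the tally described above: feed the same test positions to both engines, record each engine's chosen move, and report the percentage of positions where the choices agree. The function name and data layout are invented for illustration; this is not the actual tool, and how the moves are collected (time per move, depth, etc.) is up to the tester.

# Sketch only: moves_a and moves_b map a test position (e.g. a FEN string)
# to the move the corresponding engine chose for it.
def similarity_percentage(moves_a, moves_b):
    common = set(moves_a) & set(moves_b)
    if not common:
        return 0.0
    agreements = sum(1 for pos in common if moves_a[pos] == moves_b[pos])
    return 100.0 * agreements / len(common)

# With ~8000 test positions, a result of 62.5 simply means the two engines
# chose the same move in 62.5% of the positions they were both given.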
(replace 70% by any reasonable number you want). If you take program A and compare it to B, you might get 40%. If you compare it to C, you might get 50%. If you compare it to enough programs, you will get at least one 70% or higher. From unrelated programs...
It's completely normal and expected that some pairs of unrelated programs will play more alike than others; that is what the tester is designed to test.
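Both halves of this exchange can be illustrated with a toy simulation. Under the deliberately crude model of "two reasonable moves per position, picked at random", the best match found across a pool of unrelated programs does drift upward as the pool grows, but with 1000 positions it stays in the fifties, nowhere near 70%. Everything below is an assumption made for illustration; it says nothing about how real engines actually choose moves.

import random

POSITIONS = 1000  # test positions, two equally plausible moves each (toy model)

def random_engine():
    # each fake "engine" just flips a coin per position
    return [random.randint(0, 1) for _ in range(POSITIONS)]

def match_pct(a, b):
    return 100.0 * sum(x == y for x, y in zip(a, b)) / POSITIONS

for pool_size in (5, 20, 100):
    engines = [random_engine() for _ in range(pool_size)]
    best = max(match_pct(engines[i], engines[j])
               for i in range(pool_size)
               for j in range(i + 1, pool_size))
    print(f"{pool_size} unrelated programs -> best pairwise match {best:.1f}%")

# The best match climbs as the pool grows (more pairs to compare), but it
# stays in the low-to-mid 50% range, far short of a 70% match.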
The only conclusion this tool provides is how often two different programs play the same move.

This is not a conclusion, it's a statistic. You are trying to make something out of it that it is not.
When you produce numbers, you have to expect _someone_ to use them to reach a conclusion. In this case, the conclusion might be right, wrong, or random.
Here, the audience is primarily non-programmers. And the majority are just casual computer chess users. They read this "similarity" stuff in a different light than others will. That was my point.
And unfortunately, that is bad. Because if two versions match 90%, and then two different programs match 90%, one might naturally conclude the obvious. Without thinking about the birthday paradox. General audiences interpret things differently than a technical group.

Taking it at face value is not the problem. I think what you really mean is that they will impute meaning and context that don't exist, just like you are doing.

All well and good. But the moment you produce numbers, you have to expect someone to take them at face value. I wouldn't consider such comparisons myself. But many will. And they will draw the wrong conclusion.
I continue to get comments over and over again from people who are assuming context which betrays a fundamental misunderstanding of what this tool does and how it works.
If you view this utility as a "clone tester", and you assign some arbitrary percentage value to signify that a program is a "clone", then you can have false positives. But that is not what this utility does and it's not what it's for.
For example: When I tested Robbolito and Houdini, I got a ridiculously high match rate, higher than most other pairs of programs and in many cases much higher than the match rate between two versions of the SAME chess program!
So is that a false positive? No, it's just a fact. The two programs play a lot of moves the same. It does not mean Robbolito is a clone of Houdini or a derivative or anything else, it just means they play the same moves more often than almost any other pair of programs.
I used this analogy earlier, but hammers are very useful objects. However, every once in a while someone uses one improperly, hits someone over the head with it, and kills them. This tool can be used improperly but it can be useful too.
The numbers show how often 2 programs play the same move.
Exactly what do you expect the numbers to show?
It means exactly what you said, they play the same move 70% of the time.
What does it mean when two programs match 70% of the time?
I don't think it's meaningless, I have learned a LOT just from playing with it. But several people are doing their own research to learn more about this. On the OpenChess site BB has built a similar tool and is studying several aspects of it. I have also learned a lot about it and here is what I have found:

That they have the same search but different evals? Same evals but different search? A combination of both? It is pretty much meaningless.
The program is uncanny in its ability to identify different versions of the same program, even when the program has evolved substantially. The closest matches to Stockfish 1.9 are SF 1.8, SF 1.7, SF 1.6 and SF 1.5. This represents significant changes and ELO gains. It's this way for EVERY program I have tested that has multiple versions. Those versions tend to be the closest matches.
That might be more troubling to me. Depending on what time control you are using. Very fast might not be so sensitive to reduced pruning. But if you choose the same moves with and without, that does say something....
The ELO rating of the programs in question has very little impact on similarity scores. For example, if you run the test 10x longer for program X, the tool is not fooled into thinking it's a different program or that it is much more like a stronger program.
I can change the search of Komodo and the test is not fooled. For example LMR can be turned off and the tester is almost oblivious, although this single change is a major search change.
Did not say that at all. I simply said that a single program will match different programs with wildly different percentages, given enough samples...
My tentative conclusion (and I'm still studying it) is that search does not have much to do with it. I think what makes each program play the way it does is more about the evaluation function than anything else by far. Every test I have done bears that out.
If you look at some of the results that the test is returning, you probably wouldn't see any humor in that as it implies that programs all play random moves and are not consistent about what they think is important.
Perhaps a good way to compute some random numbers for a Zobrist hashing scheme... but there are less expensive ways to do that.
The paradox still applies in two ways. First, for a big population it doesn't take too many comparisons before a chance match shows up. Simple enough. But if you play a single program against a group of others, then as the size of that group increases, the probability of a much better (or much worse) match also goes up, due to the larger population from which you are looking at a single sample for each comparison...

Why do you keep talking about clones and false positives? That has nothing to do with the tool.
For an isolated data point, that is a good place to start. But to compare a suspected clone against a huge suite of others? Again, false positives. Too many samples.
My intent for the tool was as a diagnostic aid and a tool to examine the playing styles of programs. It returns some result and it's up to you to figure out what it means or doesn't mean and to use good sense and judgement, an increasingly rare commodity these days.
I actually got the idea for this from YOU and John Stanback. I was at a tournament where a version of Crafty was claimed to be heavily modified in the evaluation and was allowed in the tournament. However, this program was doing unusually well, Vincent suspected something, and you were contacted and consulted. From what I was told, you checked the moves of the game against Crafty and felt too many were the same.
John Stanback in another tournament noticed the same thing simply by watching the tournament games online and comparing the moves to his own program.
But even if the tool WERE to be used improperly to test clones, I need to point something out to you. The birthday paradox is about determining the odds that ANY two people have the same birthday in a small population of people. The odds that YOU have the same birthday as someone else in that small room are FAR lower. In the context of using my tool (improperly) as a kind of "clone test", you are not checking every program ever written to see if any two match; you are interested in just one program, your own. For example, if you suspect that Crafty is being cloned, you would test Crafty against the suspicious program along with several other control programs. If the suspected clone was by far the most strongly correlated program, you would use this as circumstantial evidence to investigate further. You would NOT test every combination of 2 programs.
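The distinction here is easy to check with a small calculation (assuming 365 equally likely birthdays): the famous "about 50% with 23 people" figure is the chance that ANY pair in the room shares a birthday, while the chance that one specific person shares a birthday with somebody else in the same room is far smaller.

# Probability that at least one pair among n people shares a birthday.
def any_pair_shares(n, days=365):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

# Probability that one SPECIFIC person shares a birthday with at least
# one of the other n - 1 people.
def specific_person_shares(n, days=365):
    return 1.0 - ((days - 1) / days) ** (n - 1)

n = 23
print(f"any pair among {n} people:     {any_pair_shares(n):.1%}")         # ~50.7%
print(f"one specific person among {n}: {specific_person_shares(n):.1%}")  # ~5.9%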
That would be equivalent to having a gene of 500 bases (4 options per base). A gene of that size is enough to trace it and find (at least!) the taxonomic family of the organism, and there are many more species on planet Earth than chess engines.
This technique is not devoid of weaknesses, but the statistics are not one of them. If you are not happy with 1000 positions, you can increase it to 10,000, etc.
Miguel
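To attach a number to "negligible" under that toy model (two equally plausible moves per position, chosen independently at random, over 1000 positions): a 70% match is a 700-out-of-1000 binomial event, which a few lines of Python can evaluate exactly. The result is a property of this simplified model only, not a statement about real engines.

import math

n, k = 1000, 700  # 1000 positions, 70% agreement, 2 equally likely moves each

# Exact tail probability P(X >= 700) for X ~ Binomial(1000, 0.5),
# computed with integer arithmetic and converted to a float at the end.
tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(at least 70% agreement by pure chance) = {tail:.3e}")

# On the order of 10**-37: far too small to ever occur by chance,
# no matter how many pairs of engines are compared.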
I mentioned the paradox because the original implication was to play a match between a large number of programs and determine who matches whom best. That is a direct birthday paradox issue.
I think it is a quite natural assumption for someone that is not directly involved in computer chess, which covers the majority of the people here...
For some reason whenever the birthday paradox comes up even smart people get confused about it, I guess that's why it's called a "paradox."
That sounds pretty silly to me. There is this idea floating around that computers are someday going to become self-aware, take over the world and make us their slaves. I think the scenario you are talking about is more paranoid than realistic.
The danger is, as I said, that some will take these numbers to be something like a correlation coefficient, with some threshold beyond which a clone is proven...
I think every good chess player who gets really familiar with chess programs agrees that each program has its own individual personality. Of course that can only be revealed through the moves it makes.
I understand what you are saying about the birthday paradox and agree, I just think it's not relevant without assuming the context of "clone testing." However, if you tested 1000 unique programs by different authors who did not share ideas, etc., you would surely find 2 programs that played very similar chess. The fact that they might play very similarly is not a paradox or a lie, it's just how it is.