Similarity Detector Available

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Similarity Detector Available

Post by Don »

Allard Siemelink wrote:
Don wrote:You were not doing anything wrong - the whole thing was buggy.

It is fixed now - sorry about any inconvenience.

Get it at: http://komodochess.com

Don
Thanks for the update.
I suspect there is still some kind of synchronisation/buffering issue.
When I apply the tool to Spark, it still stalls and I get erratic results.

However, when I disable all UCI output (except bestmove), Spark gets 100% CPU and passes the self-similarity test with flying colors:
------ spark-dev (time: 100 ms scale: 1.0) ------
99.26 spark-dev (time: 99 ms scale: 1.0)
3.79 Komodo64 1.2 JA (time: 100 ms scale: 1.0)
3.74 Komodo64 1.2 JA (time: 99 ms scale: 1.0)
I think this indicates a problem with Spark. It is unthinkable that you could get numbers this high. Look at the similarity.data file to see if anything looks strange. Are all the moves there? There should be more than 8000 moves. I will try to make the tester detect and report any glitches if I can figure out what is happening.
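If it helps, something like the sketch below can do that check. It is only a rough illustration, and it assumes moves are stored in plain coordinate notation (e.g. e2e4), which may not match the actual similarity.data layout; adjust the pattern to the real format.

Code:

import re
import sys

# Rough sanity check for a similarity data file: count tokens that look like
# coordinate-notation moves (e2e4, g7g8q).  The real sim file format may
# differ, so treat the pattern as an assumption to adjust.
MOVE_RE = re.compile(r"\b[a-h][1-8][a-h][1-8][qrbn]?\b")

def count_moves(path):
    with open(path) as f:
        return sum(len(MOVE_RE.findall(line)) for line in f)

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "similarity.data"
    n = count_moves(path)
    print(f"{path}: {n} move-like entries")
    if n < 8000:
        print("warning: fewer than the ~8000 expected moves; the data may be truncated")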
When I applied the tool to Komodo, I noticed that CPU utilisation was fluttering around only 75%, and the results show a rather poor self-similarity:
------ Komodo64 1.2 JA (time: 100 ms scale: 1.0) -----
70.96 Komodo64 1.2 JA (time: 99 ms scale: 1.0)
3.79 spark-dev (time: 100 ms scale: 1.0)
3.78 spark-dev (time: 99 ms scale: 1.0)
I wonder what % of self-similarity you get for Komodo?
Are you able to get it close to 99% if you disable all uci output (except bestmove) in Komodo?
Allard Siemelink
Posts: 297
Joined: Fri Jun 30, 2006 9:30 pm
Location: Netherlands

Re: Similarity Detector Available

Post by Allard Siemelink »

Laskos wrote:
Allard Siemelink wrote::
------ spark-dev (time: 100 ms scale: 1.0) ------
99.26 spark-dev (time: 99 ms scale: 1.0)
3.79 Komodo64 1.2 JA (time: 100 ms scale: 1.0)
3.74 Komodo64 1.2 JA (time: 99 ms scale: 1.0)
:shock:

The Komodo numbers don't look right. Even Spark-copy seems too deterministic. Something must be wrong there.

Kai
99% self-similarity may be high, but I am more concerned about Komodo's self-similarity of only 70%. Except for timing randomness, both engines should be completely deterministic, as all tests were run with one CPU only. Perhaps Don can shed some light on this.

I agree that the similarity with Komodo looks suspiciously low. I too wonder what is going on here.
I had a quick look at the similarity data file; it appears to contain valid moves for Spark (it is not truncated, nor filled with bogus moves).
Perhaps the cause is an early missing move in Spark's data, after which the remainder of the moves are shifted. That would explain the low correlation with other engines.
I'll investigate further in the new year.
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Similarity Detector Available

Post by Laskos »

Allard Siemelink wrote:
Laskos wrote:
Allard Siemelink wrote::
------ spark-dev (time: 100 ms scale: 1.0) ------
99.26 spark-dev (time: 99 ms scale: 1.0)
3.79 Komodo64 1.2 JA (time: 100 ms scale: 1.0)
3.74 Komodo64 1.2 JA (time: 99 ms scale: 1.0)
:shock:

The Komodo numbers don't look right. Even Spark-copy seems too deterministic. Something must be wrong there.

Kai
99% self-similarity may be high, but I am more concerned about Komodo's self-similarity of only 70%. Except for timing randomness, both engines should be completely deterministic, as all tests were run with one CPU only. Perhaps Don can shed some light on this.

70% is not so outlandish; 99%+ is. Example here: 1-core Robbo with its copy, 2-core Houdini with its copy. Some Rybkas and totally unrelated engines are thrown in there too.

Code:

C:\similar>a -r 9
------ RobboLito 0.09 x64 (time: 100 ms) ------
 73.89  RobboLito 0.09ax64 (time: 100 ms)
 67.72  IvanHoe_B49jAx64 (time: 100 ms)
 61.28  Houdini 1.51x64 (time: 100 ms)
 61.22  Houdini 1.5 x64 (time: 100 ms)
 60.83  Rybka 3  (time: 100 ms)
 57.06  Deep Rybka _4_x64 (time: 100 ms)
 51.17  Strelka 1.8 UCI (time: 100 ms)
 51.08  Rybka 1.0 Beta 32-bit (time: 100 ms)
 50.29  Fruit 2.1 (time: 300 ms)
 48.58  Deep Shredder 12 x64 (time: 100 ms)
 46.52  Glaurung 2-epsilon/5 (time: 100 ms)
 44.73  Chess Tiger 2007  (time: 100 ms)
 43.71  Deep Shredder 9 UCI (time: 100 ms)
 43.01  Ruffian 1.0.1 (time: 100 ms)


C:\similar>a -r 2
------ Houdini 1.5 x64 (time: 100 ms) ------
 67.81  Houdini 1.51x64 (time: 100 ms)
 62.31  IvanHoe_B49jAx64 (time: 100 ms)
 61.22  RobboLito 0.09 x64 (time: 100 ms)
 60.67  RobboLito 0.09ax64 (time: 100 ms)
 55.79  Rybka 3  (time: 100 ms)
 54.61  Deep Rybka _4_x64 (time: 100 ms)
 47.29  Strelka 1.8 UCI (time: 100 ms)
 47.28  Rybka 1.0 Beta 32-bit (time: 100 ms)
 46.99  Deep Shredder 12 x64 (time: 100 ms)
 46.66  Fruit 2.1 (time: 300 ms)
 43.81  Chess Tiger 2007  (time: 100 ms)
 42.60  Glaurung 2-epsilon/5 (time: 100 ms)
 42.27  Deep Shredder 9 UCI (time: 100 ms)
 39.39  Ruffian 1.0.1 (time: 100 ms)
It's true that self-similarity is higher on 1-core, but nowhere near 99%+, at least for many engines. It seems that at this time control, with these positions, the similarity is somewhere between 35% (totally unrelated) and 80% (apparently almost identical) for some normal, advanced engines (I don't know how Micro-Max or TSCP would behave).

Kai
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Similarity Detector Available

Post by Don »

Allard Siemelink wrote:
Laskos wrote:
Allard Siemelink wrote::
------ spark-dev (time: 100 ms scale: 1.0) ------
99.26 spark-dev (time: 99 ms scale: 1.0)
3.79 Komodo64 1.2 JA (time: 100 ms scale: 1.0)
3.74 Komodo64 1.2 JA (time: 99 ms scale: 1.0)
:shock:

The Komodo numbers don't look right. Even Spark-copy seems too deterministic. Something must be wrong there.

Kai
99% self-similarity may be high, but I am more concerned about Komodo's self-similarity of only 70%. Except for timing randomness, both engines should be completely deterministic, as all tests were run with one CPU only. Perhaps Don can shed some light on this.

I agree that the similarity with Komodo looks suspiciously low. I too wonder what is going on here.
I had a quick look at the similarity data file; it appears to contain valid moves for Spark (it is not truncated, nor filled with bogus moves).
Perhaps the cause is an early missing move in Spark's data, after which the remainder of the moves are shifted. That would explain the low correlation with other engines.
I'll investigate further in the new year.
I'll see if I can figure out what is happening with Spark. I think the value 70% is approximately correct for most engines; 99% is not correct.

If I can figure out what the problem is, I will see what I can do to fix it and warn the user of any issues.

P.S.
Does your program have an unusually long startup time between moves?
Adam Hair
Posts: 3226
Joined: Wed May 06, 2009 10:31 pm
Location: Fuquay-Varina, North Carolina

Re: Similarity Detector Available

Post by Adam Hair »

It appears there is still a little trouble.

This is the contents of my Stockfish 1.91 config file:

Code:

exe = Stockfish_1.91.exe
name = Stockfish_1.91
scale = 2.0
Threads = 1
Hash = 128
The program runs with the scale set at 2.0, but Stockfish is still using its default number of threads (in this case 4 cores) and its default hash of 32 MB.

The same thing is occurring with Critter 0.90.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Similarity Detector Available

Post by Don »

Adam Hair wrote:It appears there is still a little trouble.

This is the contents of my Stockfish 1.91 config file:

Code:

exe = Stockfish_1.91.exe
name = Stockfish_1.91
scale = 2.0
Threads = 1
Hash = 128
The program runs with the scale set at 2.0, but Stockfish is still using its default number of threads (in this case 4 cores) and its default hash of 32 MB.

The same thing is occurring with Critter 0.90.
Please md5sum the sim02 executables and make sure you have the latest version. I actually made 3 or 4 versions as I found bugs.

I'm going to try to fix any more issues that I can and release sim03 later. How did you determine that Stockfish was using the default number of threads?

Don
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Similarity Detector Available

Post by Don »

Don wrote:
Adam Hair wrote:It appears there is still a little trouble.

This is the contents of my Stockfish 1.91 config file:

Code:

exe = Stockfish_1.91.exe
name = Stockfish_1.91
scale = 2.0
Threads = 1
Hash = 128
The program runs with the scale set at 2.0, but Stockfish is still using its default number of threads (in this case 4 cores) and its default hash of 32 MB.

The same thing is occurring with Critter 0.90.
Please md5sum the sim02 executables and make sure you have the latest version. I actually made 3 or 4 versions as I found bugs.

I'm going to try to fix any more issues that I can and release sim03 later. How did you determine that Stockfish was using the default number of threads?

Don
Please don't bother; there is indeed a bug: I'm not actually SENDING the options to the engine! I'm going to release a version 03 later, but first I want to look into the issue with Spark.
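For anyone curious what the fix amounts to on the wire, the sketch below shows one way a tester can forward config entries as UCI "setoption" commands after the uciok handshake. The key names mirror the config files quoted above, but the code itself is only an illustration, not the actual sim source.

Code:

import subprocess

# Keys the tester consumes itself rather than forwarding to the engine
# (an assumption based on the config files shown in this thread).
TESTER_KEYS = {"exe", "name", "scale"}

def read_config(path):
    opts = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, value = (part.strip() for part in line.split("=", 1))
                opts[key] = value
    return opts

def start_engine(config):
    eng = subprocess.Popen([config["exe"]], stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, text=True, bufsize=1)
    eng.stdin.write("uci\n")
    eng.stdin.flush()
    while eng.stdout.readline().strip() != "uciok":
        pass
    # Forward everything that is not a tester-internal key as a UCI option,
    # e.g. "setoption name Threads value 1".
    for key, value in config.items():
        if key not in TESTER_KEYS:
            eng.stdin.write(f"setoption name {key} value {value}\n")
    eng.stdin.write("isready\n")
    eng.stdin.flush()
    return eng

# Usage (hypothetical file name): eng = start_engine(read_config("Stockfish_1.91.cfg"))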

Don
Sarciness
Posts: 43
Joined: Tue Nov 23, 2010 4:22 pm

Re: Similarity Detector Available

Post by Sarciness »

Fantastic tool, which should provide some valuable evidence in evaluating what are and are not clones (although this evidence should not be used in isolation for such purposes).

Congratulations on the new website! Really like Komodo and want to support you and Larry Kaufman! Bring on the multi-CPU version!

Cheers,
Ish
Adam Hair
Posts: 3226
Joined: Wed May 06, 2009 10:31 pm
Location: Fuquay-Varina, North Carolina

Re: Similarity Detector Available

Post by Adam Hair »

Don wrote:
Don wrote:
Adam Hair wrote:It appears there is still a little trouble.

This is the contents of my Stockfish 1.91 config file:

Code:

exe = Stockfish_1.91.exe
name = Stockfish_1.91
scale = 2.0
Threads = 1
Hash = 128
The program runs with the scale set at 2.0, but Stockfish is still using its default number of threads (in this case 4 cores) and its default hash of 32 MB.

The same thing is occurring with Critter 0.90.
Please md5sum the sim02 executables and make sure you have the latest version. I actually made 3 or 4 versions as I found bugs.

I'm going to try to fix any more issues that I can and release sim03 later. How did you determine that Stockfish was using the default number of threads?

Don
Please don't bother; there is indeed a bug: I'm not actually SENDING the options to the engine! I'm going to release a version 03 later, but first I want to look into the issue with Spark.

Don
That is fine, Don. Thanks for the extra work you are doing so that
config files can be used.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Similarity Detector Available

Post by bob »

Don wrote:
bob wrote:
Don wrote:
bob wrote:My only comment here is that this is likely going to run afoul of the "birthday paradox" frequently. Given enough programs, a new program will frequently choose the same moves as another program, "just because". The more samples, the greater the probability this will happen. Lots of false positives are not going to help a thing...
In order to have a false positive you need context. All this utility does is count how many moves (out of approximately 8000) two programs play in common and return the percentage. How can that be a false positive? It will be whatever it will be for any two programs.
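In code terms, the whole measurement boils down to something like the sketch below. It is only an illustration of the statistic being described, not the tool's actual implementation.

Code:

def similarity(moves_a, moves_b):
    """Percentage of test positions on which two engines chose the same move.

    moves_a and moves_b are parallel lists: the move each engine played on
    position i of the test suite (roughly 8000 positions in this case).
    """
    if len(moves_a) != len(moves_b):
        raise ValueError("engines were not run on the same positions")
    same = sum(1 for a, b in zip(moves_a, moves_b) if a == b)
    return 100.0 * same / len(moves_a)

# Example: agreement on 3 of 4 positions gives 75.0
print(similarity(["e2e4", "g1f3", "d2d4", "c2c4"],
                 ["e2e4", "g1f3", "d2d4", "b1c3"]))

Incidentally, this also shows why a single missing move early in one engine's data, as Allard suspects for Spark, would shift every later comparison and drag the score toward the random-match floor.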
Simple. Someone is going to choose a number. Say 70%. If A matches B 70% of the time, it is likely a derivative.
Why are you picking numbers and talking about derivatives? The tool is not designed to determine what program is a derivative of some other program.
Because of the title of this thread: "similarity". To the typical reader, that will be interpreted as "if two programs are similar, one is likely a clone of the other", with the certainty of that statement going up as the percentage of matches goes up.

You have to think about the "general audience". I know that the numbers have little relevance to anything. But I've been doing this a long time. The typical reader here will "just assume..." and there we go...



(replace 70% by any reasonable number you want). If you take program A and compare it to B, you might get 40%. If you compare it to C, you might get 50%. If you compare it to enough programs, you will get at least one 70% or higher. From unrelated programs...
This would only have meaning if the tool were designed to determine whether programs are related, but it's not. The tester will be able to determine whether two unrelated programs play a lot alike. That is not a false positive. It's only a false positive if the tester is rigged to say, "hey, those programs are related!" or if someone like you comes along and tries to assign that meaning to it.

It's completely normal and expected that some pairs of unrelated programs will play more alike than others; that is what the tester is designed to test.

When you produce numbers, you have to expect _someone_ to use them to reach a conclusion. In this case, the conclusion might be right, wrong, or random.
The only thing this tool provides is how often two different programs play the same move. That is not a conclusion; it's a statistic. You are trying to make something out of it that it is not.
When you write something, you have to consider the audience. To someone who understands parallel programming, I can use the term "atomic lock" and it will be perfectly clear what I am talking about without any explanation. To a casual reader, it will mean something completely different. It is similar to the name change from "nuclear magnetic resonance imaging" to "magnetic resonance imaging": the "nuclear" part was dropped because everyone assumed they were being exposed to atomic radiation when they were not.

Here, the audience is primarily non-programmers. And the majority are just casual computer chess users. They read this "similarity" stuff in a different light than others will. That was my point.

I keep getting comments, over and over, from people who assume a context that betrays a fundamental misunderstanding of what this tool does and how it works.

If you view this utility as a "clone tester", and you assign some arbitrary percentage value to signify that a program is a "clone", then you can have false positives. But that is not what this utility does and it's not what it's for.

For example: When I tested Robbolito and Houdini, I got a ridiculously high match rate, higher than most other pairs of programs and in many cases much higher than the match rate between two versions of the SAME chess program!

So is that a false positive? No, it's just a fact: the two programs play a lot of the same moves. It does not mean Robbolito is a clone of Houdini, or a derivative, or anything else; it just means they play the same move far more often than almost any other pair of programs.
All well and good. But the moment you produce numbers, you have to expect someone to take them at face value. I wouldn't consider such comparisons myself. But many will. And they will draw the wrong conclusion.
Taking it at face value is not the problem. I think what you really mean is that they will impute meaning and context that don't exist, just like you are doing.

I used this analogy earlier, but hammers are very useful objects. However, every once in a while someone uses one improperly, hits someone over the head with it, and kills them. This tool can be used improperly, but it can be useful too.


Exactly what do you expect the numbers to show?
The numbers show how often 2 programs play the same move.

What does it mean when two programs match 70% of the time?
It means exactly what you said, they play the same move 70% of the time.
That they have the same search but different evals? Same evals but different search? A combination of both? It is pretty much meaningless.
I don't think it's meaningless; I have learned a LOT just from playing with it. Several people are doing their own research to learn more about this; on the OpenChess site BB has built a similar tool and is studying several aspects of it. Here is what I have found so far:

The program is uncanny in its ability to identify different versions of the same program, even when the program has evolved substantially. The closest matches to Stockfish 1.9 are SF 1.8, SF 1.7, SF 1.6, and SF 1.5, which represent significant changes and ELO gains. It's this way for EVERY program I have tested that has multiple versions: those versions tend to be the closest matches.
And unfortunately, that is bad, because if two versions match 90% and then two different programs also match 90%, one might naturally conclude the obvious without thinking about the birthday paradox. General audiences interpret things differently than a technical group does.


The ELO rating of the programs in question has very little impact on similarity scores. For example, if you run the test 10x longer for program X, the tool is not fooled into thinking it's a different program or that it is much more like a stronger program.

I can change the search of Komodo and the test is not fooled. For example, LMR can be turned off and the tester is almost oblivious, even though that is a major search change.
That might be more troubling to me, depending on what time control you are using. Very fast time controls might not be so sensitive to reduced pruning. But if the program chooses the same moves with and without it, that does say something....


My tentative conclusion (and I'm still studying it) is that search does not have much to do with it. I think what makes each program play the way it does is more about the evaluation function than anything else by far. Every test I have done bears that out.


Perhaps a good way to compute some random numbers for a Zobrist hashing scheme... but there are less expensive ways to do that.
If you look at some of the results that the test is returning, you probably wouldn't see any humor in that as it implies that programs all play random moves and are not consistent about what they think is important.
I did not say that at all. I simply said that a single program will match different programs with wildly different percentages, given enough samples...



My intent for the tool was as a diagnostic aid and a tool to examine the playing styles of programs. It returns some result and it's up to you to figure out what it means or doesn't mean and to use good sense and judgement, an increasingly rare commodity these days.

I actually got the idea for this from YOU and John Stanback. I was at a tournament where a version of Crafty that was claimed to be heavily modified in the evaluation was allowed to play. However, this program was doing unusually well; Vincent suspected something, and you were contacted and consulted. From what I was told, you checked the moves of the game against Crafty and felt too many were the same.

In another tournament, John Stanback noticed the same thing simply by watching the tournament games online and comparing the moves to his own program's.
For an isolated data point, that is a good place to start. But to compare a suspected clone against a huge suite of others? Again, false positives. Too many samples.
Why do you keep talking about clones and false positives? That has nothing to do with the tool.

But even if the tool WERE to be used improperly to test clones, I need to point something out to you. The birthday paradox is about the odds that ANY two people in a small group share a birthday. The odds that YOU share a birthday with someone else in that small room are FAR lower. In the context of using my tool (improperly) as a kind of "clone test", you are not checking every program ever written to see if any two match; you are interested in just one program, your own. For example, if you suspect that Crafty is being cloned, you would test Crafty against the suspicious program along with several other control programs. If the suspected clone was by far the most strongly correlated program, you would use this as circumstantial evidence to investigate further. You would NOT test every combination of two programs.
The paradox still applies in two ways. First, for a big population it doesn't take many programs before a chance match shows up. Simple enough. But if you play a single program against a group of others, then as the size of that group increases, the probability of a much better (or much worse) match also goes up, because for each comparison you are looking at a single sample drawn from a larger population...

I mentioned the paradox because the original implication was to run a large number of programs against each other and determine who matches whom best. That is a direct birthday-paradox issue.



For some reason, whenever the birthday paradox comes up even smart people get confused about it; I guess that's why it's called a "paradox."
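To make the distinction concrete, ordinary birthday-paradox arithmetic (a small sketch, nothing to do with the sim tool itself) shows how different the two questions are:

Code:

from math import prod

def p_any_pair_match(n, days=365):
    """Probability that at least one pair in a group of n shares a birthday."""
    return 1.0 - prod((days - i) / days for i in range(n))

def p_you_match_someone(n, days=365):
    """Probability that one specific person shares a birthday with any of the other n - 1."""
    return 1.0 - ((days - 1) / days) ** (n - 1)

# With 23 people the "any pair" probability is already about 0.51,
# while the "you specifically" probability is only about 0.06.
for n in (23, 50):
    print(n, round(p_any_pair_match(n), 3), round(p_you_match_someone(n), 3))

That is the gap between "any two programs somewhere happen to match" and "my program matches this one particular suspect".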




I think every good chess player who gets really familiar with chess programs agrees that each program has its own individual personality. Of course, that can only be revealed through the moves it makes.

I understand what you are saying about the birthday paradox and agree; I just think it's not relevant without assuming the context of "clone testing". However, if you tested 1000 unique programs by different authors who did not share ideas, you would surely find two programs that played very similar chess. The fact that they might play very similarly is not a paradox or a lie; it's just how it is.
The danger is, as I said, that some will take these numbers to be something like a correlation coefficient, with some threshold beyond which a clone is proven...
That sounds pretty silly to me. There is this idea floating around that computers are someday going to become self-aware, take over the world and make us their slaves. I think the scenario you are talking about is more paranoid than realistic.
I think it is quite a natural assumption for someone who is not directly involved in computer chess, which covers the majority of the people here...