Peer Gynt wrote:The fingerprint method should be verified by applying it to programs that are KNOWN to be related and on other ones that are KNOWN to be unrelated.
That has already been done a million times and graphs have been posted on this forum showing very high correlations with clones and little correlation with programs known to be different.
I can already suggest a means of cheating: Clone a program and run the clone on a very fast or very slow computer. That makes the 1 second constraint meaningless.
Actually, the fingerprint detection is surprisingly robust with respect to depth of search. Running it more slowly will only make a little difference and probably not enough to fool the fingerprint.
This point is not generally understood (or perhaps just not believed), but how a program chooses moves is pretty idiosyncratic to each program and surprisingly stable across depths. In most positions your program is probably going to play the same move on a 10-ply search as it will on a 20-ply search, and it won't necessarily be the same move that some other program will play.
You can also slow the program down artificially by software means: What arcade game programmer hasn't written delay loops? That can be the default, and only you know how to disable the secret delay loops...
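The move-matching fingerprint described above is easy to sketch. The snippet below is a toy illustration, not the actual tool: the engine names and move lists are made up, and it simply computes the fraction of test positions on which two engines choose the same best move.

```python
def agreement(moves_a, moves_b):
    """Fraction of positions where both engines chose the same best move."""
    assert len(moves_a) == len(moves_b)
    matches = sum(1 for a, b in zip(moves_a, moves_b) if a == b)
    return matches / len(moves_a)

# Toy data: best moves (long algebraic) from two engines on five positions.
engine_x = ["e2e4", "g1f3", "d2d4", "f1c4", "e1g1"]
engine_y = ["e2e4", "g1f3", "c2c4", "f1c4", "e1g1"]

print(agreement(engine_x, engine_y))  # 4 of 5 positions match -> 0.8
```

A clone would score close to the self-agreement baseline of the original, while unrelated engines land far lower.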
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
Peer Gynt wrote:The fingerprint method should be verified by applying it to programs that are KNOWN to be related and on other ones that are KNOWN to be unrelated.
As Miguel and Don already stated, lots of research has been done; two good places to start are:
tpetzke wrote:Open Source is great if a community works on a common goal, like creating the best operating system in the world.
In a competitive environment open source hurts more than it helps. This is not about giving mankind a strong chess engine, it is a competition and here demanding only open source to participate is not a very smart idea.
For someone like me who entered the scene only a few years ago, it looks like the whole trouble started when Fruit became open source. I don't think the trouble was intended, but the road to hell is paved with good intentions.
I think the idea of Richard and Marcel is good and worth a try.
Thomas...
Not everyone in the community has the same motives, so you can't say open source is categorically bad for computer chess. Some people like the glory, others just want to share their work product with the community and hopefully advance the state of the art (or at least, enrich the scene with the variety of their implementation techniques).
Yes the clones and the cloners are a pox on the community, but don't blame this on the selfless programmers who generously shared their source with the community. Rather, place the blame where it belongs: on those individuals with weak morals, who clone and exploit the hard work of others for their own selfish gain. Blame the cloners, not their victims.
[Edit: I was responding mostly to the bolded portion above. Maybe requiring open source for competitions would be divisive/problematic. I was just irked by the suggestion that the people giving away their source code for free were the cause of the problems.. my view is that it is the unprincipled use of that code by others that actually causes the problem.]
When I think about this test, one thing that comes to my mind is the randomness of a time-based search.
We all know that in every game the same engine can produce a different best move for the same position, simply because a time-limited search is non-deterministic.
I wonder what the percentage of agreement would be if we executed the fingerprint N times with the same engine.
Could this test give us some kind of error margin? Has anyone tried it?
Still learning how to play chess...
knights move in an "L" shape, right?
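The repeated-run question above can be prototyped without any engine at all. The sketch below is purely hypothetical: it simulates an engine that returns its usual best move 97% of the time and a different move otherwise (the 97% rate is an assumption for illustration, not measured data), then computes the average pairwise agreement across runs as a self-agreement baseline.

```python
import random

def self_agreement(runs):
    """Average pairwise move agreement across repeated runs of one engine."""
    n = len(runs)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            matches = sum(a == b for a, b in zip(runs[i], runs[j]))
            total += matches / len(runs[i])
            pairs += 1
    return total / pairs

# Simulate 5 runs over 1000 positions: the engine keeps its "usual"
# move ~97% of the time and picks something else otherwise.
random.seed(42)
usual = [f"m{k}" for k in range(1000)]
runs = [[m if random.random() < 0.97 else "other" for m in usual]
        for _ in range(5)]
print(round(self_agreement(runs), 3))
```

With a 3% flip rate the expected self-agreement is about 0.97² + 0.03² ≈ 0.94, which is the kind of baseline the fingerprint comparison would be measured against.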
When I think about this test, one thing that comes to my mind is the randomness of a time-based search.
We all know that in every game the same engine can produce a different best move for the same position, simply because a time-limited search is non-deterministic.
I wonder what the percentage of agreement would be if we executed the fingerprint N times with the same engine.
Could this test give us some kind of error margin? Has anyone tried it?
With ~8,000 positions, an error of ~0.5% at one standard deviation, if I remember correctly.
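That figure is consistent with treating each position as an independent coin flip: the standard error of an observed agreement fraction p over n positions is sqrt(p(1-p)/n). A quick back-of-the-envelope check (worst case p = 0.5):

```python
import math

def agreement_stderr(p, n):
    """Standard error of an agreement fraction p measured over n positions,
    treating each position as an independent Bernoulli trial."""
    return math.sqrt(p * (1 - p) / n)

# Even at the worst case p = 0.5, 8,000 positions give roughly:
print(round(100 * agreement_stderr(0.5, 8000), 2), "%")  # ~0.56 %
```

For agreement fractions far from 0.5 (as with a true clone near 1.0) the error is even smaller, so the ~0.5% recollection is plausible.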
CSVN Fingerprinting test tool v1.0
----------------------------------
What type of engine is used? (W/U) : u
What is the name of the engine executable? : CapivaraLK009a01a.exe
done.
Engine search: 1/10000 |
It runs 10K positions @ 1 second per position, so 10K seconds single-threaded. If you have a dual-core box, it will run two at a time and cut this to 5,000 seconds (about 83 minutes), which is what happened on my 2 GHz dual-core i7 MacBook.
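For anyone estimating their own run, the arithmetic is just positions × seconds-per-position ÷ cores; a trivial sketch:

```python
def wall_time_seconds(positions, seconds_per_pos, cores):
    """Estimated wall-clock time when positions are split evenly across cores."""
    return positions * seconds_per_pos / cores

# 10,000 positions at 1 second each on a dual-core machine:
print(wall_time_seconds(10_000, 1, 2))  # 5000.0 seconds (~83 minutes)
```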
BTW, when I downloaded the thing, all I got was the Perl script. I had to go back and download the EPD file separately. It then worked just fine on my Mac...