Re: Is any competent one here?? Correct the RYBKA libels!
Posted: Thu Mar 17, 2011 3:50 pm
my opinion
Don't waste your energy...
Computer Chess Club
https://talkchess.com/
Yes, _HUGELY_ incremented. From maybe .00000000000001% to .0000000001%. Still nearly zero. Your statements are completely uninformed. Don't pull this "I have talked to computer science department" nonsense either. I am in a computer science department. One that is actually reputable.
Romy wrote: Contaminated examination due to connections of participants?
bob wrote: divert attention from the actual examination that is in progress.
Respectfully, that is nonsense.
What is the probability of writing two different 100-line blocks of C code, independently by two different programmers, and having them produce the same optimized assembly language (and, by inference, identical semantics)?
Close enough to zero to call it so.
It depends wholly on the brief given to the programmers. If one is writing a program to evaluate mobility and the other to count beans, then the probability is effectively zero. But if both are writing a null search (extended type 2a), the probability is hugely incremented.
Stored for use.
"Theoretically possible". Practically impossible.
Is censorship and banning already necessary?
Once you dig deep enough we won't have to deal with you any longer.
Usually some preliminaries, like your losing the argument?
For source code, this is not hard at all. I don't care if they "look and smell different." I grade student assignments every week, and I specifically look for exactly this. In fact, there are programming tools that will compare source programs; several universities have developed them, and far more use them. Of course, you don't know about that, correct? Comparing semantic equivalence between source programs is an automated process nowadays. And you didn't know that either, correct? In fact, you actually know _very_ little, it seems.
Because you could hand-compile, given a month. But I agree. I give you a week.
Why "within one day"?
Again you are not concentrating!
If you give me the 5 compiled versions, and enough time, I'll take the challenge by myself, and if I am successful, will you go away for all time?
The 5 versions P,Q... are SOURCE not COMPILED!
The sources will look and smell different. Even very different. Maybe the ones which look most different will compile to identical objects!
Since you said "source", it is not any work at all: just run it through (say) the plagiarism-detection software from Stanford, or from other places. You are _really_ showing just how little you know about this subject.
The point was that three of them will not only produce identical output results when compiled with any compliant compiler, but, if juiced by a special compiler, will also compile to identical object code. Your job will be to find which 3 of the 5.
No asm is "unrecognizable". I have absolutely no idea where you are getting your information, but you might consider finding an alternate source. The current source is hopelessly out of touch.
And I need a panel of 3, because 1 in 10 makes a fluke possible.
Pardon, but you are greatly underestimating compiler sophistication. The asm may be auto-optimised to the degree of unrecognisability. If SMP is involved, more so.
The asm expresses the semantics of the C code. It makes copying obvious to the casual observer, once it is laid out.
Bullshit, again. The fortran compiler from Cray was just as good at optimizing as any compiler around today, and actually better because it had to take advantage of the vector hardware that we don't have today. Boy, are you out of touch with reality... badly out of touch...
In the day of Cray and HiTech it was different, a compiler was just a little more than an assembler. But RYBKA is of 2005-6, not 1616 or 1986.
Wylie, I got sucked into his nonsense until his last post. Notice he is talking about giving someone 5 different _source_ programs. And we have to figure out which 3 have semantic equivalence. No compilers needed. No optimization required. I think he is perhaps one of an infinite number of monkeys in a room where the nonsense he typed at least makes grammatical sense, even though it is computer science nonsense.
wgarvin wrote: All modern optimizing compilers are quite "sophisticated" (although the ones used to compile Rybka back in 2005-6 were not as good as they are today), but this "auto-optimised to the degree of unrecognisability" is nonsense.
Romy wrote: Pardon, but you are very underestimating of compiler sophistication. The asm may be auto-optimised to the degree of unrecognisability. If SMP involved, more so.
bob wrote: The asm expresses the semantics of the C code. It makes copying obvious to the casual observer, once it is laid out.
Anyone who understands how compiler optimizers work and knows how to read assembly, should be able to compare a short segment of source code with a short listing of assembly instructions and draw a conclusion about whether they do the same calculation or not.
The effects of most compiler optimizations are relatively simple to understand, even if the implementation of the compiler is quite complicated and difficult. Every optimizing compiler folds constants, does CSE, strength reduction, inlining, and loop optimizations (peeling, unrolling, hoisting invariants, etc.). Modern ones use SSA form and do more aggressive things like partial-redundancy elimination and pointer alias analysis. It all sounds complex until you realize that the compiled program still has to compute the same results that you asked it to compute in your source code. It can move the computations around a bit, and do them in a smarter way, but generally it still has to do them.
If you write some short programs and compile them and look at the instructions the compiler actually produces, you'll get a good feel for what the compiler actually can and can't do to your code. And anyone can learn what those optimizations do without having to learn how they actually do it.
"constant folding": it replaces things like (1 + 3 + 5) with the (9) at compile time. Most compilers do this even in debug builds because it makes the compilation process faster.
"CSE": Common subexpression elimination. If it can figure out that you asked it to do the same computation twice, it will just compute it once and use the result in both places.
"strength reduction": things like (a * 4) get replaced with (a << 2) if that generates faster code. div by constant gets converted into mul by constant, etc.
"loop-invariant code motion": it finds calculations inside the loop body that would just produce the same result every time, and hoists them above the loop so they only have to be done once.
"loop induction": it can replace the variable(s) or address expression(s) that change by a fixed amount on each iteration of a loop (such as a counter, or some array being accessed inside the loop) with some other expression which is cheaper for it to compute. If you write for (int i=0; i<size; i++) and then in the body you index an array of 8-byte structures, it might use (i*8) instead. It might even re-write it like for (t = -(i*8); t != 0; t += 8) so it can take advantage of super-cheap t != 0 test. etc.
"partial redundancy elimination": if a calculation is made in a basic block which is common to more than one possible code path, and then the result is used on one of these paths but not on all of them, it might decide to move the computation so that its only performed on the paths where its needed.
Anyway, the point is, compiler output is only surprising if you don't know anything about optimizing compilers. Or, I guess, if you expect it to optimize something that seems obvious to you but it fails to do so...
Romy wrote: Thank you for this admission.
bob wrote: a computer scientist will support is that yes, going from asm to C is a 1 to many mapping.
Romy wrote: It is a demonstrable fact that compilation with the best compilers is a many-to-one process. Not a one-to-one process.
It did not come out earlier among the learned commentators. Better they see it now, while they still have access to the brake and the gears, than after the line of irreversibility is crossed.
Are you unfamiliar with the concept that a compiler recognizes the syntax, determines the semantics, and produces an object file that expresses those semantics? The compiler might produce many different object files depending on the optimization settings you choose. But the semantics _must_ be identical each time, else the compiler is broken and the program won't do what you want.
Well, with a given compiler and given settings, it is not. Else, you are wrong.
But going from C to ASM is not.
It is a flaw, or a fly, in someone else's balm, but it is irrelevant to mine.
That's the flaw in your ointment.
Aha.
The asm expresses the semantics of the C code.
Are you unclear about the meanings of syntax and grammar, as applied to the C compilation process?
It appears you are addressing that to yourself. The answer, therefore, is "no".
Romy wrote: For some reason, I am reminded of the first sentence.
Romy wrote: Is any competent one here?
wgarvin wrote:Don't feed the trolls...
I had a calculus teacher who told me "if you integrate 2x dx, you get x^2, and if you differentiate x^2 you get 2x dx." Everyone believed him, and the book proved why this is so. Was "following the crowd" wrong in light of such supporting evidence?
geots wrote:
wgarvin wrote: Don't feed the trolls...
I don't know if (ahem) Carol is right or wrong. But I would much rather be him, who has the nerve to come on this forum and tell you what he believes, than to be all the people so far who know nothing about nothing, only saying it has to be true if Hyatt says it is. So if you don't agree with Bob and others, you are a troll. Much better than being a follow-the-crowd simpleton.
You are in parts correct.
geots wrote: I don't know if (ahem) Carol is right or wrong. But I would much rather be him, who has the nerve to come on this forum and tell you what he believes, than to be all the people so far who know nothing about nothing, only saying it has to be true if Hyatt says it is. So if you don't agree with Bob and others, you are a troll. Much better than being a follow-the-crowd simpleton.
and
bob wrote: All modern optimizing compilers are quite "sophisticated" (although the ones used to compile Rybka back in 2005-6 were not as good as they are today)
So first he says that "modern" (in context, 2010-11) compilers are notably superior to ones even from 2005-6.
bob wrote: Bullshit, again. The fortran compiler from Cray was just as good at optimizing as any compiler around today, and actually better
I am a computer science department. But not Alabaman.
I am in a computer science department
Please do not write such nonsense.
bob wrote: I had a calculus teacher that told me "if you integrate 2xdx, you get x^2, and if you differentiate x^2 you get 2xdx." Everyone believed him, and the book proved why this is so. Was "following the crowd" wrong in light of such supporting evidence???