Alex, is it too much to ask that you follow the entire discussion before making statements that simply do not belong? "go_parser()" is not the _only_ thing being looked at. It was the _first_ thing. When you disassemble something you have to start near the main() entry point and work your way out from there. So this was the _first_ place where the similarity was discovered. It was not the last. There are more to come.

RegicideX wrote:
I read things like these and I can only wonder if we are talking about the same things. We are talking about the UCI parser -- that has nowhere near 44,000 lines of code, more like 100-150, and by design such parsers are extremely similar, to the point of being nearly identical. Of course you should expect similarities in such a procedure.

bob wrote:
Yes, there is non-equivalent code. But in a large program, one expects to find very little semantically equivalent code. That is the issue. Do you actually believe that in writing a 44,000-line program, I will by pure chance duplicate what others have done _exactly_ here and there, when I don't even see this happen in 100-1000 line student programs?
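[Editor's note: for readers who have never looked at UCI code, the sketch below may help put the "100-150 lines" point in context. It is purely illustrative and is not taken from Crafty, Rybka, Fruit, or any disassembly discussed in this thread; the names parse_go and GoParams are invented for this example. The token set (wtime, btime, winc, binc, depth, movetime, infinite, ...) is fixed by the UCI protocol, which is why independently written "go" parsers tend to share the same compare-and-assign shape.]

/* Illustrative sketch only -- not code from any engine discussed here. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    int wtime, btime, winc, binc;   /* clock times and increments, ms */
    int movestogo, depth, movetime; /* optional search limits         */
    int infinite;                   /* 1 = search until "stop"        */
} GoParams;

/* Read the integer argument following a keyword token; 0 if missing. */
static int next_int(void)
{
    char *v = strtok(NULL, " \t\n");
    return v ? atoi(v) : 0;
}

/* Parse the arguments of a UCI "go" command, e.g.
   "wtime 300000 btime 300000 winc 2000 binc 2000".
   The input buffer is modified in place by strtok(). */
static void parse_go(char *args, GoParams *p)
{
    char *tok;

    memset(p, 0, sizeof(*p));
    for (tok = strtok(args, " \t\n"); tok != NULL; tok = strtok(NULL, " \t\n")) {
        if      (!strcmp(tok, "wtime"))     p->wtime     = next_int();
        else if (!strcmp(tok, "btime"))     p->btime     = next_int();
        else if (!strcmp(tok, "winc"))      p->winc      = next_int();
        else if (!strcmp(tok, "binc"))      p->binc      = next_int();
        else if (!strcmp(tok, "movestogo")) p->movestogo = next_int();
        else if (!strcmp(tok, "depth"))     p->depth     = next_int();
        else if (!strcmp(tok, "movetime"))  p->movetime  = next_int();
        else if (!strcmp(tok, "infinite"))  p->infinite  = 1;
    }
}

int main(void)
{
    char line[] = "wtime 300000 btime 300000 winc 2000 binc 2000";
    GoParams p;

    parse_go(line, &p);
    printf("wtime=%d btime=%d winc=%d binc=%d\n", p.wtime, p.btime, p.winc, p.binc);
    return 0;
}

[Editor's note, continued: even two parsers written completely from scratch will share these token names and this loop structure, since the protocol dictates both. The disagreement above is about whether the observed similarity extends to code whose shape the protocol does not dictate.]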
I also note that Robert sometimes says something with which I agree, but then adds that I should "go read" something to see that it is true.
There is a lot of miscommunication taking place. I'm not going to participate in all these posts, and my lack of participation should not be taken as either agreement or disagreement.
The problem right now is that the small group looking at this is just going in the order that the program dictates. I have already asked Chris to give some idea of what he would consider reasonable evidence. What function(s)? What kind of data? Etc. He is not happy with what has been presented so far. But I doubt he will offer any specific data he would accept, because I don't believe there is any. So the group has a "natural order" for analysis, and that seems to be unacceptable. There are only two choices: (1) wait until they find something everyone agrees is suspicious, or (2) direct their analysis by telling them what they should look at. Right now all that is being written is "that is not evidence", "that is not core AI", and so forth. No helpful suggestions, just cries of "this can't be...", "this is faked...", "this is dishonest...", "this is misleading...", and so forth, repeated ad nauseam.