Rebel wrote:
Michael Sherwin wrote:
I don't mean to be a pain, but to me it sounds like you are describing the C++ way. I know that C++ objects can be simulated in C using structures and being strict about using function calls accordingly, but then I might as well use C++. In my way of thinking, C++ with objects is superior for a team project, so team members do not trample all over another team member's variable names and function names. For a single person working on a one-file source, the C++ methodology does not seem as beneficial.

I can write 32-bit assembler with the best of 'em. However, the nuances of 64-bit assembly are giving me a hard time, or I'd be writing this primarily in assembler. My goal is writing code that is as fast as it can be. I have sort of a reputation for that, or at least I did when RomiChess first came out, and also with the perft examples I wrote. My perft example in 32-bit assembler runs at 65 million nodes per second using a single thread on my 3.4GHz i7. Not bragging, just saying I like to stick with my programming style, and learning a new paradigm at my age is not easy.

What you are suggesting I should do by passing a pointer around would not, I understand, make a very noticeable difference in speed, but learning a whole new paradigm at my age would certainly slow my progress. I thank you for the philosophical discussion and anything more that you would like to add. I wonder what others might think about what you are suggesting.
Bob?
I am not Bob, but what I did in the past (32-bit, of course) was to write the code in C, look at the ASM the compiler generated, and then optimize it by hand. Perhaps that makes sense for 64-bit as well.
I have done that myself, but it is not the optimal solution, because you start with the framework dictated by the compiler. You might find more efficient instructions or combinations, but you are still working within how the compiler did things. For Cray Blitz, Harry and I started from scratch. Completely from scratch. That way we were able to make register assignments as we chose, and since we primarily wrote "leaf routines," we did not have to worry about registers getting overwritten, because we never called anything that could overwrite them.
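For reference, the mechanics of that write-it-in-C-then-read-the-assembly workflow are simple enough. A minimal sketch, assuming gcc on x86-64 (the file name, routine, and flags are just my own illustration, not anything from Cray Blitz or Crafty):

Code:
/* popcount.c -- a tiny leaf routine of the sort one might hand-tune.
 * Compile with:   gcc -O2 -S -masm=intel popcount.c
 * then read popcount.s to see exactly which instruction sequence the
 * compiler chose, before deciding whether a hand-written version can
 * beat it. */
#include <stdint.h>

int popcount64(uint64_t bb) {
    int n = 0;
    while (bb) {
        bb &= bb - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}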
That being said, it is not a particularly efficient way to write software in terms of human development time, but the rewards can be significant. In more than one place, we wrote something from scratch that was 5x to 10x faster than what the compiler produced (we were using Cray's vectorizing Fortran compiler with CB). But we knew things the compiler could not know, e.g. that a value can never be negative, or that it can never be larger than 15, and so on. But when you want to make changes... ugh...
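A small, hypothetical illustration of that kind of knowledge (my own example, nothing from Cray Blitz): a square index is always 0..63, but if the compiler only sees a plain signed int, it has to allow for negative values that will never actually occur:

Code:
/* bounds.c -- illustration only.  A square index is always 0..63, but
 * declared as a plain signed int the compiler must emit extra fix-up
 * instructions so that sq / 8 rounds toward zero for negative inputs
 * it will never actually see.  Stating the range (unsigned here, or
 * simply writing the shift yourself in assembly) removes that overhead. */
#include <stdio.h>

int rank_signed(int sq)     { return sq / 8;  }  /* shift plus sign fix-up */
int rank_known(unsigned sq) { return sq >> 3; }  /* one logical shift      */
int file_known(unsigned sq) { return sq & 7;  }  /* one AND                */

int main(void) {
    unsigned sq = 52;   /* e7 in 0..63 numbering, a1 = 0 */
    printf("square %u -> rank %u, file %u\n", sq, rank_known(sq), file_known(sq));
    return 0;
}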
I would add that 64-bit code (in the Intel world) is beyond messy as an environment. It was a HUGE kludge (the 64-bit instructions that were added) on top of an already HUGE kludge: the x86 instruction set is not exactly the best laid out instruction set I have seen, and the processor architecture really sucks when compared to much better ones that didn't try to maintain backward compatibility with 8-bit architectures. I've taught assembly language for the IBM 1620, the IBM /360, the Xerox Sigma series, the Data General MV8000/10000, the DEC VAX, and Sun hardware, including the Motorola 680x0 chips and the SPARC processors. Intel is the worst of the group, particularly when factoring in the 64-bit kludges AMD added to make the 64-bit extensions compatible with the 8/16/32-bit instruction formats.
I've programmed a lot of other machines in assembly language, including my all-time most hated machine, the Intel Itanium...