Off topic: Floating point number crunching

Discussion of chess software programming and technical issues.

Moderator: Ras

User avatar
hgm
Posts: 28453
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Off topic: Floating point number crunching

Post by hgm »

Evert wrote:What is -mno-cygwin supposed to do? It's been years since I had Windows, but back then I used MinGW and never touched Cygwin. I had no problems making binaries, either GUI or console...
It uses a different set of .h files, and links against msvcrt.dll instead of cygwin1.dll. I never used MinGW.
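For illustration, a minimal sketch of the difference (assuming the old Cygwin gcc 3.x toolchain, the only one that supported -mno-cygwin; it was dropped in the gcc 4 packages, and the file names here are made up):

Code: Select all

/* hello.c -- build it twice and compare the runtime DLLs:
 *
 *   gcc hello.c -o hello_cyg.exe             (depends on cygwin1.dll)
 *   gcc -mno-cygwin hello.c -o hello_ms.exe  (MinGW headers, msvcrt.dll)
 *
 * The imported DLLs can be checked with:
 *   objdump -p hello_ms.exe | grep "DLL Name"
 */
#include <stdio.h>

int main(void)
{
    printf("plain Windows console binary\n");
    return 0;
}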
User avatar
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Off topic: Floating point number crunching

Post by Evert »

Might be worth looking at.

Maybe not if you're pressed for time and worried about breaking a setup that you know works...
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Off topic: Floating point number crunching

Post by Daniel Shawul »

GPUs, anyone? The trend is to use them for high-throughput computation. For older codes that can run on vector computers, the conversion should be straightforward. I don't think an i7 with AVX will outperform any of the latest GPUs on floating point arithmetic (though maybe not on double precision). Reasons why some prefer SIMD: automatic vectorization via the Intel Fortran/C compilers, or code with lots of branches. The average Matlab-coding scientist won't bother to optimize code for a GPU. For calculations similar to the subroutines in BLAS, two orders of magnitude speedup should not be a problem. I have written some linear equation system solvers (without preconditioners) and some optimization routines, and got a 10x speedup on an old GPU. The toughest challenge I encountered was... guess what... tree search.
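A minimal sketch of the kind of BLAS-like, data-parallel routine meant here (illustrative CUDA only, not the actual solver code; single-precision SAXPY, y = a*x + y, with made-up names and sizes):

Code: Select all

// Illustrative CUDA SAXPY: y = a*x + y over a million floats.
// One GPU thread handles one vector element.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard against the padded last block
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;                 // device copies of x and y
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);   // 256 threads/block
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 4.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}

Every element is independent here, which is why such code ports almost mechanically; tree search is the opposite case, branchy and sequential, which is why it was the toughest to port.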
User avatar
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Off topic: Floating point number crunching

Post by Evert »

Daniel Shawul wrote:GPUs, anyone? The trend is to use them for high-throughput computation. For older codes that can run on vector computers, the conversion should be straightforward. I don't think an i7 with AVX will outperform any of the latest GPUs on floating point arithmetic (though maybe not on double precision). Reasons why some prefer SIMD: automatic vectorization via the Intel Fortran/C compilers, or code with lots of branches.
That's actually not unimportant. Double precision is also an issue, but perhaps not so much these days as it used to be (and probably less of an issue on, say, a Tesla as opposed to a consumer card).
The problem sounds like it could benefit from being run on a GPU, but it'd be a lot more work to do, particularly to do right.
Daniel Shawul wrote:The average Matlab-coding scientist won't bother to optimize code for a GPU.
Interesting. I don't think I know anyone who actually uses Matlab (I hear it's used by engineers, though), but I know several people who get a kick out of writing efficient N-body integrators on a GPU.

I guess it all depends on the particular field you're in.
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Off topic: Floating point number crunching

Post by Daniel Shawul »

Evert wrote:That's actually not unimportant. Double precision is also an issue, but perhaps not so much these days as it used to be (and probably less of an issue on, say, a Tesla as opposed to a consumer card).
The problem sounds like it could benefit from being run on a GPU, but it'd be a lot more work to do, particularly to do right.
I don't know why, but Intel's technology always leaves a bad taste in my mouth. They sell it as something extraordinary to try to kill any competition, e.g. HT vs. multi-core technology, and now SSE/AVX vs. GPUs. When AMD started producing CPUs with many cores, Intel came up with logical cores that give only about a 20% gain, and even that only for specific programs. The uninformed masses (myself included) will feel cheated by that. Then there is GPGPU computing, which is a completely different way of doing things. Years of technology are embedded in CPUs to reduce latency, which is what they thrive on. GPGPU should be praised for its 'new' approach alone, let alone its better performance. Intel's 'Larrabee' project failed, but it has now been resurrected as an expensive 50-core processor. We will see how it copes with Fermi in flops/dollar.
Evert wrote:Interesting. I don't think I know anyone who actually uses Matlab (I hear it's used by engineers, though), but I know several people who get a kick out of writing efficient N-body integrators on a GPU.

I guess it all depends on the particular field you're in.
Matlab is heavily used where I am from. The largest piece of software on my notebook (probably the largest anywhere) is Matlab, with an 8 GB disk space requirement. I can submit Matlab jobs from it to CPU/GPU/FPGA clusters, so I don't even need to know what is going on behind the scenes. Well, physicists scare me, so I wouldn't know if Matlab is enough for them. I saw some cool N-body simulations in the Nvidia tutorials, but that is about it.
User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: Off topic: Floating point number crunching

Post by sje »

hgm wrote:Well, one of the problems is that this was all done so long ago that the original C source of the optimized routines seems to be lost. (I probably have it somewhere on a floppy, but even if I could find it among the ~300 other floppies I have stashed in a box somewhere, I no longer have a way to transfer files from floppies to modern machines.)
You can get a floppy drive with a USB interface that handles both power and data. Linux should have drivers for most floppy disk formats. However, there could be difficulties with formats like the 400 KB/800 KB early Macintosh and the 880 KB Amiga disks.