Indeed your output is knps, but that is not the UCI standard, and it is confusing the Shredder GUI, which recalculates the nps to knps and shows a hash usage of e.g. 202.6%. Not that I have a problem with it, but reporting it nevertheless isn't a bad idea.
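For reference, a minimal sketch (not Bright's actual code; the function and parameter names are made up) of an info line that follows the UCI conventions, with nps as plain nodes per second and hashfull in permille:

#include <stdio.h>

/* Report a UCI "info" line with raw nodes per second (not nodes/1024)
 * and the hash usage in permille, as the protocol expects. */
void report_info(unsigned long long nodes, unsigned long long ms,
                 unsigned long long used_entries,
                 unsigned long long total_entries)
{
    unsigned long long nps = ms ? nodes * 1000ULL / ms : 0;
    /* Count each occupied slot once, so hashfull never exceeds 1000. */
    unsigned long long hashfull =
        total_entries ? used_entries * 1000ULL / total_entries : 0;
    printf("info nodes %llu time %llu nps %llu hashfull %llu\n",
           nodes, ms, nps, hashfull);
}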
Regarding the hash size, taking powers of two doesn't help. Below is the info from the log file. I started with 1450MB and then set it to 2GB. FYI, the OS is WinXP x64 with 4GB RAM.
To be precise, the biggest hash size I can give is 1453MB.
0411047203:<--uciok
0411047234:setoption name Hash value 1450
resizing hash to 1450Mb
tt.kb =1484800
tt.entries=5aa0000
tt.addr =102e0080
resized
0411047984:isready
0411047984:<--readyok
0411169609:setoption name Hash value 2048
resizing hash to 2048Mb
tt: malloc(-2147483648) failed, retrying...
tt.kb =2097152
tt.entries=4000000
tt.addr =102e0080
resized
By the way, I also live in the Netherlands.
Allard Siemelink wrote:Hi Ernst,
The nps in the UCI output is actually knps (nps/1024); does it cause you any trouble?
You probably discovered a bug in the hash allocation, I will look into that.
Meanwhile, you may try specifying an exact power of two for the hash size, e.g. 1024 or 2048MB.
ernst wrote:Thanks Allard for this great engine.
However, I am unable to use more than 1450MB of hash, or the memory used drops again. I have a Q6600 and 4GB RAM.
Furthermore, when you look at the output from the engine, there are two anomalies.
Ok, I see now: it is a sign bug! It actually attempts to allocate -2048MB of memory, which of course fails.
Then it divides the requested memory by 2 and allocates 1024MB successfully.
The hashfull% also has a bug: it fails to take into account some of the overwritten entries.
Thanks for reporting both issues.
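For the curious, a minimal sketch (assumed, not Bright's actual code) of the behaviour described above: 2048MB in bytes is 2^31, one past INT_MAX, so a signed 32-bit size wraps to the -2147483648 seen in the log, and halving the failed request once yields a 1024MB table:

#include <stdio.h>
#include <stdlib.h>

void *alloc_hash(int mb)
{
    long long bytes = (long long)mb * 1024 * 1024;  /* 2048MB -> 2147483648 */
    void *p = NULL;
    while (bytes > 0 && (p = malloc((size_t)bytes)) == NULL) {
        /* Printing the size as a signed 32-bit int reproduces the
         * "malloc(-2147483648) failed" line from the log. */
        printf("tt: malloc(%d) failed, retrying...\n", (int)bytes);
        bytes /= 2;  /* the 2048MB request ends up as a 1024MB table */
    }
    return p;
}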
Regards,
-Allard
ernst wrote:Hello Allard,
Happy new year to you too. Are you planning to fix the hashtable problem any time soon? I would love to be able to use bigger hashtables for analysis.
It turns out that the 'sign bug' I mentioned only affects the log output.
The actual call to allocate the memory is correct, but fails.
Unfortunately, it seems to be a limitation of 32-bit Windows.
According to http://www.microsoft.com/whdc/system/pl ... AEmem.mspx,
the maximum address space for a process is 2GB.
Since part of that is already in use, this would explain the inability to set the Hash to 2GB or more.
(For Bright or any other 32-bit engine)
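A quick way to see that ceiling in practice is to probe for the largest single allocation the process will grant (a hypothetical standalone test, not part of Bright):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Walk down from 2048MB until malloc succeeds; in a 32-bit process
     * this lands well below 2048MB, matching the 1453MB limit above. */
    size_t mb;
    for (mb = 2048; mb > 0; mb--) {
        void *p = malloc(mb * 1024 * 1024);
        if (p != NULL) {
            printf("largest single allocation: %zuMB\n", mb);
            free(p);
            break;
        }
    }
    return 0;
}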
Jouni wrote:Thanks Allard for this great engine! Its tactical speed is stunning, and so is its node speed (2,000,000 nps on my old PC). In one of my test suites it's better than ANY pro engine... In endgame tests it's no patzer either, with 4-piece bitbases.