
SF memory hard faults issue

Posted: Sun May 27, 2018 8:10 pm
by yorkman
Is it just me, or do SF and its derivatives have memory hard fault issues? In Windows Server 2012, for example, on a dual E5-2699 v3, looking at Resource Monitor I see full bars all the time (around 400 hard faults), just like the CPU usage monitor. That's crazy! I have 64 GB of RAM and I'm using 32 GB of hash, which has always been fine before. Unfortunately I don't know when this problem started, as I haven't looked at this for months.

Re: SF memory hard faults issue

Posted: Mon May 28, 2018 8:08 am
by Eelco de Groot
Sorry, I don't know much about it. I had not even heard of the term 'memory hard fault' before; usually the term 'page fault' is used, I believe. Windows 10 does not even have this in its Task Manager, at least I could not find it. There is no report of 'memory hard faults' anywhere, so there is no way to measure it. The Task Manager might be wrong anyway; it is not very reliable in its output. The only sensible way to measure is whether you get a speed-up with a Stockfish programmed for Large Pages, or with an OS that does that automatically, like most flavors of Linux.

Re: SF memory hard faults issue

Posted: Mon May 28, 2018 9:16 am
by Modern Times
Eelco de Groot wrote: Mon May 28, 2018 8:08 am Windows 10 does not have this in its Task Manager
Yes it does: search for Resource Monitor from the taskbar.

Re: SF memory hard faults issue

Posted: Mon May 28, 2018 9:53 am
by Eelco de Groot
Thanks Ray! Windows Start -> Windows Administrative Tools -> Resource Monitor ("Windows Systeembeheer -> Broncontrole" in my Dutch version of Windows 10). (Edit: now I see I can access that from Task Manager as well, like you said.) At first glance I see very few page faults with the Stockfish that I compiled on this system (using small hash tables, 512 megabytes), or with Kaissa, which is practically the same. I see a blue, almost flat line at 60 page faults per second, but that does not seem to be the measurement itself? It is not completely flat, but I don't think it is the measured level of page faults. Not sure. There are green spikes maybe once a minute, staying below ten per second. But as I said, I am not sure what the blue line is.
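For what it's worth, the hard-fault rate that Resource Monitor graphs can also be read directly from the performance counter \Memory\Pages Input/sec, which takes the guesswork out of the blue line. Below is a minimal C++ sketch using the Win32 PDH API (link against pdh.lib); the counter path and the ten-second polling loop are my own assumptions for illustration, not anything taken from Stockfish.

// Sketch: poll the system-wide hard-fault rate via the PDH API.
// "\Memory\Pages Input/sec" counts pages read from disk to resolve
// hard page faults. Link with pdh.lib.
#include <windows.h>
#include <pdh.h>
#include <cstdio>

int main() {
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER counter = nullptr;

    PdhOpenQueryW(nullptr, 0, &query);
    // PdhAddEnglishCounterW takes the English counter name, so it also
    // works on a localized (e.g. Dutch) Windows install.
    PdhAddEnglishCounterW(query, L"\\Memory\\Pages Input/sec", 0, &counter);
    PdhCollectQueryData(query);              // first sample; a rate needs two

    for (int i = 0; i < 10; ++i) {
        Sleep(1000);
        PdhCollectQueryData(query);
        PDH_FMT_COUNTERVALUE value;
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                        nullptr, &value) == ERROR_SUCCESS)
            std::printf("hard faults resolved from disk: %.0f/sec\n",
                        value.doubleValue);
    }
    PdhCloseQuery(query);
    return 0;
}

If the hash really is being paged out, this number should roughly track the ~400 hard faults per second reported in the first post.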

Re: SF memory hard faults issue

Posted: Mon May 28, 2018 1:53 pm
by Milos
Eelco de Groot wrote: Mon May 28, 2018 8:08 am Sorry, I don't know much about it. I had not even heard of the term 'memory hard fault' before; usually the term 'page fault' is used, I believe. Windows 10 does not even have this in its Task Manager, at least I could not find it. There is no report of 'memory hard faults' anywhere, so there is no way to measure it. The Task Manager might be wrong anyway; it is not very reliable in its output. The only sensible way to measure is whether you get a speed-up with a Stockfish programmed for Large Pages, or with an OS that does that automatically, like most flavors of Linux.
Ah, just ignore it, it's another meaningless question. Of course it is a page 'fault', and of course it is not really a fault; it is just the stupid Windows name for when a page is not found in the cache and needs to be retrieved from disk/swap.
He is seeing a high number of those because he has enabled Large Pages, is using half of his memory for hash, and his memory is extremely fragmented. Easy solution: just restart the machine or use less hash. :)
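To make the Large Pages angle concrete, here is a hedged sketch (not Stockfish's actual code; allocate_hash is a made-up name) of why fragmentation matters. Large pages need physically contiguous 2 MB chunks and the caller needs the "Lock pages in memory" privilege; when the kernel cannot find enough contiguous RAM, the allocation fails and a typical engine falls back to ordinary pageable 4 KB pages, which is exactly the memory that can show up as hard faults.

// Sketch (not Stockfish's actual code): try to allocate the hash with
// Windows large pages; fall back to normal pages if the kernel cannot
// find enough contiguous physical memory (e.g. fragmented RAM).
// Needs SeLockMemoryPrivilege ("Lock pages in memory") for the caller.
#include <windows.h>
#include <cstdio>

void* allocate_hash(size_t bytes, bool* large_pages_used) {
    *large_pages_used = false;
    size_t lp = GetLargePageMinimum();       // typically 2 MB on x64
    if (lp != 0) {
        size_t rounded = (bytes + lp - 1) & ~(lp - 1);
        void* p = VirtualAlloc(nullptr, rounded,
                               MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                               PAGE_READWRITE);
        if (p) {                              // large pages are locked in RAM,
            *large_pages_used = true;         // so they never hard-fault
            return p;
        }
    }
    // Fallback: ordinary 4 KB pages, which the OS may page out under
    // memory pressure; that is what shows up as hard faults.
    return VirtualAlloc(nullptr, bytes, MEM_RESERVE | MEM_COMMIT,
                        PAGE_READWRITE);
}

int main() {
    bool lp = false;
    void* tt = allocate_hash(512ull << 20, &lp);   // e.g. 512 MB of hash
    std::printf("hash allocated, large pages used: %s\n", lp ? "yes" : "no");
    if (tt) VirtualFree(tt, 0, MEM_RELEASE);
    return 0;
}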

Re: SF memory hard faults issue

Posted: Mon Jun 04, 2018 11:33 pm
by yorkman
Milos wrote: Mon May 28, 2018 1:53 pm
Eelco de Groot wrote: Mon May 28, 2018 8:08 am Sorry, I don't know much about it. I had not even heard of the term 'memory hard fault' before; usually the term 'page fault' is used, I believe. Windows 10 does not even have this in its Task Manager, at least I could not find it. There is no report of 'memory hard faults' anywhere, so there is no way to measure it. The Task Manager might be wrong anyway; it is not very reliable in its output. The only sensible way to measure is whether you get a speed-up with a Stockfish programmed for Large Pages, or with an OS that does that automatically, like most flavors of Linux.
Ah, just ignore it, it's another meaningless question. Of course it is a page 'fault', and of course it is not really a fault; it is just the stupid Windows name for when a page is not found in the cache and needs to be retrieved from disk/swap.
He is seeing a high number of those because he has enabled Large Pages, is using half of his memory for hash, and his memory is extremely fragmented. Easy solution: just restart the machine or use less hash. :)
The system was of course rebooted. I know very well that memory can get fragmented with LP. My pages are not fragmented, however, and I see this issue even without using LP.