KOMODO 1415 64-BIT 8CPU v STOCKFISH 180415 64-BIT 8CPU Match

Discussion of computer chess matches and engine tournaments.

Moderators: hgm, Rebel, chrisw

Graham Banks
Posts: 41455
Joined: Sun Feb 26, 2006 10:52 am
Location: Auckland, NZ

PGN download link

Post by Graham Banks »

Modern Times wrote:Game 44 aborted with the same problem but in reverse. Stockfish spent 15 minutes on a move, tablebases were cached to a large extent, and then it was Komodo that was forced to suffer swapping on its move. Clearly you need 32GB of RAM to run 6-men reliably at very long time controls.

The match is now aborted, and a tie seems a fair result to both.
PGN download link - http://kirill-kryukov.com/chess/discuss ... p?id=34529
gbanksnz at gmail.com
syzygy
Posts: 5566
Joined: Tue Feb 28, 2012 11:56 pm

Re: Game 42 - The open b-file is contested, no advantage, dr

Post by syzygy »

jhellis3 wrote:
Explain?
Why do you think it is thrashing the pagefile in the first place? Because the system is out of memory.
It is not out of memory. It simply has used all its free memory for caching data which is backed up by disk files and can be released at any moment at no cost.

The problem is that the system sometimes chooses to swap hashtables to disk instead of releasing memory used for caching TB data. That might sometimes be a good decision, but in the specific use case we are discussing here (engines accessing 6-piece TBs on a system whose memory is small relative to the size of the 6-piece TBs), it is a bad decision.
If you are at 99.9% usage and you try to load in 5% more of stuff, what do you think is going to happen?
If the page file is disabled, the system has no choice but the right one: to release memory that was used for caching TB data. That released memory can then be immediately reused for caching more recent/relevant TB data.
sockmonkey
Posts: 588
Joined: Sun Nov 23, 2008 11:16 pm
Location: Berlin, Germany

Re: Game 42 - The open b-file is contested, no advantage, dr

Post by sockmonkey »

mbabigian wrote:Ray, I would think the solution is to limit the windows file system cache.

See https://support.microsoft.com/en-us/kb/976618

*** Snipped from the link above ***

To work around this issue, use the GetSystemFileCacheSize API function and the SetSystemFileCacheSize API function to set the maximum or minimum size value for the working sets of the system file cache. The use of these functions is the only supported method to restrict the consumption of physical memory by the system file cache.

The Microsoft Windows Dynamic Cache Service is a sample service that demonstrates one strategy to use these APIs to minimize the effects of this issue.

Installing and using the Microsoft Dynamic Cache Service does not void support for Microsoft Windows. This service and its source code are provided as an example of how to use the Microsoft-supported APIs to reduce the growth of the file system cache.

You can obtain the service and source code from the following Microsoft website:

http://www.microsoft.com/downloads/deta ... laylang=en

Hope this helps,
Mike
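For concreteness, the calls described in the quoted KB article look roughly like this. This is an untested, Windows-only sketch: the 4 GB cap is an illustrative number (not a recommendation from this thread), and the privilege-enabling helper reflects the documented requirement that SetSystemFileCacheSize needs SeIncreaseQuotaPrivilege (link with advapi32.lib for the token functions).

```c
/* Sketch (untested, Windows-only): capping the system file cache with
 * the APIs named in KB976618. SetSystemFileCacheSize requires the
 * SeIncreaseQuotaPrivilege, so we enable it first. */
#include <windows.h>
#include <stdio.h>

static BOOL enable_increase_quota_privilege(void)
{
    HANDLE token;
    TOKEN_PRIVILEGES tp;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return FALSE;
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValue(NULL, SE_INCREASE_QUOTA_NAME,
                              &tp.Privileges[0].Luid)) {
        CloseHandle(token);
        return FALSE;
    }
    BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
    CloseHandle(token);
    return ok && GetLastError() == ERROR_SUCCESS;
}

int main(void)
{
    SIZE_T cacheMin, cacheMax;
    DWORD flags;

    /* Read the current working-set limits of the system file cache. */
    if (!GetSystemFileCacheSize(&cacheMin, &cacheMax, &flags)) {
        fprintf(stderr, "GetSystemFileCacheSize: %lu\n", GetLastError());
        return 1;
    }
    printf("file cache min=%Iu max=%Iu flags=0x%lx\n",
           cacheMin, cacheMax, flags);

    if (!enable_increase_quota_privilege()) {
        fprintf(stderr, "could not enable SeIncreaseQuotaPrivilege\n");
        return 1;
    }
    /* Hard-cap the cache at 4 GB (64-bit build assumed) so tablebase
     * reads cannot push engine hash tables out to the pagefile.
     * FILE_CACHE_MAX_HARD_ENABLE makes the maximum a hard limit. */
    if (!SetSystemFileCacheSize((SIZE_T)1 << 30,   /* min: 1 GB */
                                (SIZE_T)4 << 30,   /* max: 4 GB */
                                FILE_CACHE_MAX_HARD_ENABLE)) {
        fprintf(stderr, "SetSystemFileCacheSize: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}
```

Note that, as discussed below in the thread, this limit is system-wide: it affects every process on the machine, not just the engine that sets it.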
Microsoft explicitly warns against using this service on newer OSs (at least in one KB article I read). Whatever their misgivings about it, the problem with these API calls in general is that they are global -- I can set them in Komodo, but you can override them in Stockfish. There's no good way, on Windows, to limit the file system cache for a particular process or process tree. Apparently the working set calls, another API often mentioned when this topic is raised, aren't particularly reliable, either.

The best advice I've found so far is to use VirtualUnlock() to release the cache memory explicitly. But coming up with a good system for doing so with the tablebases without wrecking probing performance seems like a challenge. Anyway, I'm glad that folks are thinking about this, maybe there are other options available.
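The VirtualUnlock() idea relies on a documented quirk: unlocking pages that were never locked fails with ERROR_NOT_LOCKED, but as a side effect the pages are removed from the process working set, making them the first candidates for reclamation. A minimal sketch (the function name and parameters here are hypothetical, not from any engine's code):

```c
/* Sketch (untested, Windows-only): evicting a mapped tablebase
 * range from the working set via the VirtualUnlock() side effect. */
#include <windows.h>

/* base/length would describe a view created with MapViewOfFile.
 * VirtualUnlock is expected to "fail" with ERROR_NOT_LOCKED here;
 * the documented side effect (dropping the pages from the working
 * set) is the point, so the return value is deliberately ignored. */
static void evict_from_working_set(void *base, SIZE_T length)
{
    VirtualUnlock(base, length);
}
```

The hard part, as noted above, is deciding when and on which ranges to call this without hurting probing performance.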

EDIT: I should probably just note that I'm not convinced that this is a terrible, awful problem which deserves any kind of emergency attention. It's an annoyance and it certainly merits some further investigation, principally to determine if the behavior is actually having any sort of measurably detrimental effect on the performance of the system or, specifically in this case, of the opponent engine. If there were a reasonably fool-proof way to manually limit the file system cache for those who care, it would be cool, of course, but I don't think there is a magic bullet on Windows. Except to buy more RAM if you're using 6-man bases, of course. :-)

Jeremy
http://www.open-chess.org : Independent Computer Chess Discussion Forum
syzygy
Posts: 5566
Joined: Tue Feb 28, 2012 11:56 pm

Re: Game 42 - The open b-file is contested, no advantage, dr

Post by syzygy »

sockmonkey wrote:Microsoft explicitly warns against using this service on newer OSs (at least in one KB article I read). Whatever their misgivings about it, the problem with these API calls in general is that they are global -- I can set them in Komodo, but you can override them in Stockfish. There's no good way, on Windows, to limit the file system cache for a particular process or process tree.
The file system cache is, and should be, system-wide, which means it can't be limited for a particular process.

I agree it's not a good idea to let an engine set such a limit. This is clearly something that should be left to the system administrator. But it does seem to be a very useful option for users who run engine-engine matches on their machines and are seeing what Graham saw. And it might also be useful for users who let their engine analyse a position for many hours or even days and suffer an unresponsive system when they fire up a web browser.
The best advice I've found so far is to use VirtualUnlock() to release the cache memory explicitly. But coming up with a good system for doing so with the tablebases without wrecking probing performance seems like a challenge. Anyway, I'm glad that folks are thinking about this, maybe there are other options available.
The difficulty with such an approach is that, when two engines are playing, it does not help if one tells the system that certain pages can be released while the other does not (or unlocks a different range of pages).
If there were a reasonably fool-proof way to manually limit the file system cache for those who care, it would be cool, of course, but I don't think there is a magic bullet on Windows.
You don't think the "Dynamic Cache Service" tool that was linked will do just that? I have not tried it (and have no Windows system with access to 6-piece tables), but maybe somebody will?

And if it does not help, disabling the pagefile should. May sound drastic, but on a 16 GB system the pagefile is overrated imho. (I guess the pagefile is still useful on a 16 GB system if you tend to start many big applications and not close them down when you're not using them.)
mbabigian
Posts: 204
Joined: Tue Oct 15, 2013 2:34 am
Location: US
Full name: Mike Babigian

Re: Game 42 - The open b-file is contested, no advantage, dr

Post by mbabigian »

Ronald is dead on. It is a system-wide global change. Microsoft's warning that it can reduce performance is obvious: reducing the system cache under normal circumstances is a silly thing to do. However, the specific environment of running a two-engine test on one machine with 6-piece TBs creates a "unique" problem. In fact, the shorter the time control, the less of an issue this is likely to be; at longer time controls the issue becomes more prominent.

Setting the pagefile to zero is also a good suggestion and worth trying. I do this myself, although my two machines have 24GB and 32GB respectively.

Worth reading is the following text from the link I posted. I think a "unique" situation is exactly what we are talking about here.


The memory management algorithms in Windows 7 and Windows Server 2008 R2 operating systems were updated to address many file caching problems that were found in earlier versions of Windows. There are only certain unique situations in which you have to implement this service on computers that are running Windows 7 or Windows Server 2008 R2.
Regards,
Mike
ouachita
Posts: 454
Joined: Tue Jan 15, 2013 4:33 pm
Location: Ritz-Carlton, NYC
Full name: Bobby Johnson

Re: PGN download link

Post by ouachita »

Graham,
How are you defining VLTC?
Graham Banks wrote:Clearly you need 32GB of RAM to run 6-men reliably at very long time controls.
SIM, PhD, MBA, PE
Graham Banks
Posts: 41455
Joined: Sun Feb 26, 2006 10:52 am
Location: Auckland, NZ

Re: PGN download link

Post by Graham Banks »

ouachita wrote:Graham,
How are you defining VLTC?
Graham Banks wrote:Clearly you need 32GB of RAM to run 6-men reliably at very long time controls.
Hi,

I use 5 piece tablebases.

Graham.
gbanksnz at gmail.com