Upper RAM limits for Windows 7

Dann Corbit
Posts: 12541
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Hash Filling

Post by Dann Corbit »

M ANSARI wrote:I don't understand why MS would have different amounts of addressable space for 64-bit Win 7. I mean, for a 32-bit OS I can understand that the maximum is 4GB and that's it, but for 64-bit why don't they just allow the max allowable addressable space and be done with it? Why have all the different versions?

Having said that, I really like having more than 4GB of RAM. It has helped tremendously in Photoshop CS4, especially when merging very large files for stitching and using 4 or 6 different layers trying to get a decent HDR image. Also, my brother says it is a HUGE boost in his video editing software.
Linux does *exactly* the same thing:

Code: Select all

RH Ent Linux ES ver 3          8 GB 
RH Ent Linux ES ver 4          16 GB 
RH Ent Linux base ver 5        No software imposed limit 
RH Desktop Linux v5            4 GB 
RH Desktop v5 w/ wkstn         No software imposed limit 
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Upper RAM limits for Windows 7

Post by bob »

Terry McCracken wrote:
bob wrote:
tjfroh wrote:Mr. Burcham,

Thanks for the info. Microsoft is migrating to a 128 bit OS in the next two years. That will allow considerably more RAM. I doubt that there will be backward compatibility for any 32 bit apps. Get ready.

Uncle TJ

PS: There were Luddites back in the 1980s who said that they would never need a processor more powerful than the Intel 80286 with 1 MB of RAM. I just smiled at them at the time. I thought that my 80386 with an 80387 math coprocessor was hot. I had a 65 MB MFM hard disc that cost me $300.00 USD. Today we are buying 1,000,000 MB drives for $100.00 USD.
I am not sure what this is about (128 bit O/S), but it will have _nothing_ to do with using additional RAM. Max RAM is defined by the hardware itself. Current 64 bit processors are limited to 40 bits of physical address space, or a total of one terabyte of RAM, and there's not a thing the O/S can do about it, since the page table format and the memory address bus width define this.

They must be on crack or something. We do not have the capability to create file sizes that tax 64 bits of addressing, so needing 128 bit file offsets is a long way off. 128 bits of RAM addressing is not on the horizon either; we don't even have 64 bits of RAM addressing yet. Current and planned PCs have 48 bits of virtual address space and 40 bits of physical address space. As a reference point, my Core 2 Duo laptop only supports 36 bits of physical address; I have an AMD that supports 40.
They really should just make the address 64 bits. Costs are down, and in 5 years people will demand terabytes and possibly petabytes of memory.

SSDs will be affordable by then.

Computers will get very powerful in the next decade.
There's nothing that says they won't. The issue, however, is that you need cache tags that get bigger as the physical address space grows. With the classic 4KB pages, the page offset takes 12 bits of the address, so a 40 bit physical address space needs 28 bits of tag. If you go to 64 bits of physical address space, you need 52 bits for a tag, almost 2x as large. That memory has to come from somewhere, and the only place it can come from is the overall cache memory space, which will likely shrink the usable cache for the same amount of silicon. Not to mention the problem of widening the addressing datapaths (internal to the cache) to handle the extra bits, which will further impact overall cache size. The general consensus is to grow the address space with actual requirements, rather than jumping to the physical limit...
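
To make the arithmetic concrete, here is a minimal C sketch of that tag-size calculation. The 12-bit figure is the offset within a standard 4KB page; treating everything above the page offset as tag is a simplification for illustration, not a description of any particular CPU's cache geometry.

Code: Select all

#include <stdio.h>

/* Bits left over for the cache tag, assuming the index+offset
 * portion of the address covers the low 12 bits (the offset
 * within a standard 4KB page). */
#define PAGE_OFFSET_BITS 12

static int tag_bits(int phys_addr_bits) {
    return phys_addr_bits - PAGE_OFFSET_BITS;
}

int main(void) {
    int widths[] = { 36, 40, 48, 64 };  /* physical address widths */
    for (int i = 0; i < (int)(sizeof widths / sizeof widths[0]); i++)
        printf("%2d-bit physical address -> %2d tag bits per line\n",
               widths[i], tag_bits(widths[i]));
    return 0;
}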
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hash Filling

Post by bob »

kgburcham wrote:[D] r1bqkb1r/pp1n1pp1/2p1pn1p/6N1/3P4/3B1N2/PPP2PPP/R1BQK2R w KQkq -

My system reports the 6 GB of RAM installed.
I was curious about the program/GUI limit on hash.
I do not know what to believe of the output programmers show me.
Not sure what is true and what is smoke.
Deep Fritz 11 shows 11,000 kns.
Rybka 3 modified shows 325 kns.

So, in other words, is there also lying about hash?

In the above position I used these hash settings, with these results.
Of course I do not have "time to solve" results for the position;
I only have time to 95% hash fill.

With Hash at 1024 MB, it took 44 seconds to fill to 95%.
With Hash at 2048 MB, it took 92 seconds to fill to 95%.
With Hash at 3072 MB, it took 142 seconds to fill to 95%.
With Hash at 4096 MB, it took 196 seconds to fill to 95%.

kgburcham
Why in God's name would you use Rybka to compare speeds, nps, etc.? It obviously fakes the numbers; this has been discussed for a couple of years.

Bigger hash does _not_ mean higher NPS. In general it will mean shorter time to a fixed depth, until the hash table reaches the size where it holds almost everything. Going beyond this size does nothing and can hurt, because huge hash tables _really_ beat the crap out of the TLB: there are that many more virtual pages involved.
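
One common mitigation for that TLB pressure, sketched below, is to back the hash table with huge pages so each TLB entry maps 2MB instead of 4KB. This is my illustration, not anything from a particular engine; on Linux it assumes huge pages have been reserved beforehand (e.g. via /proc/sys/vm/nr_hugepages).

Code: Select all

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t hash_bytes = 1024UL * 1024 * 1024;   /* 1 GB hash table */

    /* Try 2MB huge pages first: each TLB entry then covers 512x
     * more memory than with 4KB pages, so far fewer TLB misses. */
    void *hash = mmap(NULL, hash_bytes, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (hash == MAP_FAILED) {
        /* Fall back to normal pages if no huge pages are reserved. */
        hash = mmap(NULL, hash_bytes, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }
    if (hash == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("hash table at %p\n", hash);
    munmap(hash, hash_bytes);
    return 0;
}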

I pick 3-4 positions, one opening, a couple of middlegame, and one endgame, and run them with different hash sizes, starting at maybe 64MB and doubling until the time to complete the 4 positions to the same depth stops improving. You do have to do this for the specific time control you are using. This is the reason I did the "adaptive" command in Crafty: you tell it the min and max hash sizes you can stand, and it computes the optimal hash setting once it determines the time control you are going to use for a specific game.
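
A minimal sketch of that adaptive idea, for illustration only (this is not Crafty's actual code, and the entry size and NPS figures are assumptions): size the table to roughly one entry per node searched in a move, clamp to the user's min/max, and round to a power of two so indexing is a cheap mask.

Code: Select all

#include <stdint.h>
#include <stdio.h>

/* Pick a hash size for one move's search: roughly one table entry
 * per node searched, clamped to [min,max] and rounded down to a
 * power of two.  The 16-byte entry is illustrative only; assumes
 * min_bytes is itself a power of two. */
static uint64_t adaptive_hash_bytes(uint64_t nps, double sec_per_move,
                                    uint64_t min_bytes, uint64_t max_bytes) {
    const uint64_t entry_bytes = 16;
    uint64_t nodes = (uint64_t)(nps * sec_per_move);
    uint64_t want  = nodes * entry_bytes;

    if (want < min_bytes) want = min_bytes;
    if (want > max_bytes) want = max_bytes;

    /* round down to a power of two */
    uint64_t size = min_bytes;
    while (size * 2 <= want) size *= 2;
    return size;
}

int main(void) {
    /* e.g. 2M nodes/sec, 180 seconds/move, 64MB to 4GB allowed */
    uint64_t bytes = adaptive_hash_bytes(2000000ULL, 180.0,
                                         64ULL << 20, 4ULL << 30);
    printf("suggested hash: %llu MB\n",
           (unsigned long long)(bytes >> 20));
    return 0;
}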
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hash Filling

Post by bob »

Dann Corbit wrote:
M ANSARI wrote:I don't understand why MS would have different amounts of addressable space for 64-bit Win 7. I mean, for a 32-bit OS I can understand that the maximum is 4GB and that's it, but for 64-bit why don't they just allow the max allowable addressable space and be done with it? Why have all the different versions?

Having said that, I really like having more than 4GB of RAM. It has helped tremendously in Photoshop CS4, especially when merging very large files for stitching and using 4 or 6 different layers trying to get a decent HDR image. Also, my brother says it is a HUGE boost in his video editing software.
Linux does *exactly* the same thing:

Code: Select all

RH Ent Linux ES ver 3          8 GB 
RH Ent Linux ES ver 4          16 GB 
RH Ent Linux base ver 5        No software imposed limit 
RH Desktop Linux v5            4 GB 
RH Desktop v5 w/ wkstn         No software imposed limit 
Except that I don't believe any of those. We are running CentOS on clusters that have 16 and 32 GB of RAM, with no problems at all. The Linux kernel doesn't care, other than that for 32 bits you have a 36 bit physical address space (via PAE) because of Intel hardware limits, while for 64 bits you can have anywhere from a 36 bit to a 40 bit physical address space, again limited by hardware, not software. There are no 4/8/16 GB limits in the memory management software that I can find, nor would I expect there to be. The MM software does have to be aware of what the hardware can use, and what memory the hardware has, of course.

I'd suspect the above numbers are nonsensical marketing crap.
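
If you want to check what your own hardware supports, CPUID leaf 0x80000008 reports the physical and virtual address widths; on Linux the same numbers appear on the "address sizes" line of /proc/cpuinfo. A small sketch using GCC's <cpuid.h>:

Code: Select all

#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;

    /* CPUID leaf 0x80000008: EAX[7:0] = physical address bits,
     * EAX[15:8] = linear (virtual) address bits. */
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
        return 1;
    }
    unsigned phys = eax & 0xff;
    unsigned virt = (eax >> 8) & 0xff;

    printf("physical address bits: %u (max %llu GB of RAM)\n",
           phys, 1ULL << (phys - 30));
    printf("virtual address bits:  %u\n", virt);
    return 0;
}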
kgburcham
Posts: 2016
Joined: Sun Feb 17, 2008 4:19 pm

Re: Hash Filling

Post by kgburcham »

bob wrote:[D]r1bqkb1r/pp1n1pp1/2p1pn1p/6N1/3P4/3B1N2/PPP2PPP/R1BQK2R w KQkq -


Why in God's name would you use Rybka to compare speeds, nps, etc?


I was curious if Rybka had also had its hash fill time altered.


It obviously fakes the numbers; this has been discussed for a couple of years.
Yes, we all know, but if he was so scared of you revealing his kns, I was curious what else he modified.

Bigger hash does _not_ mean higher NPS. In general it will mean shorter time to a fixed depth, until the hash table reaches the size where it holds almost everything. Going beyond this size does nothing and can hurt, because huge hash tables _really_ beat the crap out of the TLB: there are that many more virtual pages involved.

I pick 3-4 positions, one opening, a couple of middlegame, and one endgame, and run them with different hash sizes, starting at maybe 64MB and doubling until the time to complete the 4 positions to the same depth stops improving. You do have to do this for the specific time control you are using. This is the reason I did the "adaptive" command in Crafty: you tell it the min and max hash sizes you can stand, and it computes the optimal hash setting once it determines the time control you are going to use for a specific game.