Too much Hash harms?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson

User avatar
Laskos
Posts: 9840
Joined: Wed Jul 26, 2006 8:21 pm
Full name: Kai Laskos

Too much Hash harms?

Post by Laskos » Thu Feb 02, 2017 8:47 am

I tested time-to-depth of Stockfish dev. and Komodo 10.3 for efficiency of hash-table usage on 30 positions repeated 4 times:

Code: Select all

Stockfish dev.| depth=23| 30 positions x 4| average filling per position 64M:

Hash   Time

1M:    381s
32M:   348s
1024M: 361s 


Komodo 10.3| depth=22| 30 positions x 4| average filling per position 48M:

Hash   Time

1M:    388s
32M:   333s
1024M: 356s 
First, too much Hash seems to harm. Second, Komodo improves from 1M (too little) to 32M (adequate) by 17%, while Stockfish dev. improves by 9%.
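For what it's worth, the quoted percentages can be reproduced with a small sketch, assuming "improves by X%" means t_small / t_adequate − 1 (time saved relative to the faster run — an assumption about the convention used):

```python
# Reproduce the quoted speedups, assuming "improves by X%"
# means t_small / t_adequate - 1 (relative to the faster run).

def improvement_pct(t_small, t_adequate):
    """Percent improvement going from a too-small to an adequate hash."""
    return round((t_small / t_adequate - 1) * 100)

stockfish = improvement_pct(381, 348)  # Stockfish dev, 1M -> 32M
komodo = improvement_pct(388, 333)     # Komodo 10.3, 1M -> 32M
print(stockfish, komodo)  # 9 17, matching the post
```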


Re: Too much Hash harms?

Post by Laskos » Thu Feb 02, 2017 10:01 am

Laskos wrote:I tested time-to-depth Stockfish dev. and Komodo 10.3 for efficiency of Hash Table usage on 30 positions repeated 4 times: [...]
In fact, the minimum for these engines is 4MB, so the table looks like this:

Code: Select all

Stockfish dev.| depth=23| 30 positions x 4| average filling per position 64M:

Hash   Time

4M:    381s
32M:   348s
1024M: 361s 


Komodo 10.3| depth=22| 30 positions x 4| average filling per position 48M:

Hash   Time

4M:    388s
32M:   333s
1024M: 356s

User avatar
hgm
Posts: 23981
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Too much Hash harms?

Post by hgm » Thu Feb 02, 2017 10:09 am

What does 'average filling per position' mean? The engines seem to benefit less than I would expect in going from 4MB to 32MB. So it could be that the table already ceases to be overloaded before you reach 32MB.

Having unnecessarily large hash tables can hurt raw speed by increasing the number of TLB misses. You should be able to see this in the nps. So it would also be interesting to quote the nodes needed to reach the desired depth.
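A rough back-of-the-envelope on the TLB point (the 4 KiB page size and ~1536-entry dTLB are assumptions about typical x86 hardware, not measurements from this test):

```python
# Rough TLB-coverage estimate for random hash probes.
# Assumptions (not from the post): 4 KiB pages and ~1536 dTLB
# entries, which cover only 6 MiB without huge pages.

PAGE = 4 * 1024     # bytes per page (assumed)
TLB_ENTRIES = 1536  # typical L2 dTLB size (assumed)

def tlb_covered_fraction(hash_mb):
    """Fraction of the table whose pages can be TLB-resident at once."""
    pages = hash_mb * 1024 * 1024 // PAGE
    return min(1.0, TLB_ENTRIES / pages)

for mb in (4, 32, 1024):
    print(mb, tlb_covered_fraction(mb))
# A 4 MB table fits entirely; at 1024 MB under 1% of pages are
# covered, so nearly every random probe is a TLB miss.
```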


Re: Too much Hash harms?

Post by Laskos » Thu Feb 02, 2017 10:17 am

hgm wrote:What does 'average filling per position' mean?

Having unnecessarily large hash tables can hurt raw speed by increasing the number of TLB misses. You should be able to see this in the nps. So it would also be interesting to quote the nodes needed to reach the desired depth.
I might do this. "Average filling per position" means the maximum Hash used by each engine during the fixed-depth search on those positions. The positions are all openings, and the maximum Hash used to reach the given depth is similar across them, on average a little above 32M.
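If the filling was read off the UCI "hashfull" figure (permille of slots used — an assumption about the method, not stated in the post), converting a reading to MB is straightforward:

```python
# Convert a UCI "hashfull" reading (permille of slots used) into MB,
# assuming that is how the per-position filling was measured.

def filled_mb(hashfull_permille, table_mb):
    return hashfull_permille / 1000 * table_mb

# e.g. with a 1024 MB table, a hashfull of 47 means ~48 MB touched
print(filled_mb(47, 1024))
```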


Re: Too much Hash harms?

Post by hgm » Thu Feb 02, 2017 11:02 am

So that would be the hash filling in the case of an asymptotically large table?

It can be expected that there is very little benefit from remembering the leaf nodes of the tree, and the usual replacement schemes would overwrite those first, trying to protect the results of deeper searches. So if the table gets smaller than the number of different positions in the tree, the leaf nodes will start to compete with each other for space, and will only live in the table for a fraction of the time. But the deeper nodes will be hardly affected. And still many transpositions (of moves close to the horizon) that reach the leaf nodes will occur within their survival time. And for those that occur afterwards it is not very expensive to redo the evaluation of a single node.
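The protection of deeper entries can be illustrated with a toy depth-preferred replacement scheme (a minimal sketch, not any engine's actual code — real tables use multi-entry buckets and aging):

```python
# Toy one-entry-per-bucket transposition table with depth-preferred
# replacement: a stored entry is only overwritten by an equal-or-deeper
# search, so shallow leaf results compete while deep results survive.

class ToyTT:
    def __init__(self, n_buckets):
        self.table = [None] * n_buckets

    def store(self, key, depth, score):
        i = key % len(self.table)
        entry = self.table[i]
        if entry is None or depth >= entry[1]:
            self.table[i] = (key, depth, score)

    def probe(self, key):
        i = key % len(self.table)
        entry = self.table[i]
        return entry if entry is not None and entry[0] == key else None

tt = ToyTT(8)
tt.store(3, depth=10, score=50)   # deep interior node
tt.store(11, depth=1, score=-20)  # leaf hashing to the same bucket
print(tt.probe(3))   # deep entry survived the collision
print(tt.probe(11))  # shallow leaf was rejected -> None
```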

IIRC, in my measurements the search time only started to go up once the overload factor exceeded 10.
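Reading the overload factor as nodes searched divided by table capacity, a quick estimate from the numbers in this thread (the 16-byte entry size is an assumption — engines vary — and the nodes per search are backed out from the Stockfish time and nps quoted later in the thread):

```python
# Overload factor = nodes in one search / table capacity.
# Assumptions: 16-byte entries, nodes per search estimated as
# total_time * nps / number_of_searches (30 positions x 4 runs).

ENTRY_BYTES = 16                         # assumed entry size
nodes_per_search = 381 * 1634271 // 120  # ~5.2M nodes per search

def overload(nodes, hash_mb):
    capacity = hash_mb * 1024 * 1024 // ENTRY_BYTES
    return nodes / capacity

print(round(overload(nodes_per_search, 4), 1))   # ~19.8, above the ~10 threshold
print(round(overload(nodes_per_search, 32), 1))  # ~2.5, comfortably below it
```

Under these assumptions, the 4MB runs sit above the overload-10 threshold and the 32MB runs below it, which is at least consistent with the small 4M-to-32M gains observed.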

Tdunbug
Posts: 47
Joined: Sun Mar 06, 2016 11:46 pm

Re: Too much Hash harms?

Post by Tdunbug » Thu Feb 02, 2017 1:20 pm

Can I ask what software you used for your testing? I'd like to run a similar test on my dual-core machine.

Jouni
Posts: 2083
Joined: Wed Mar 08, 2006 7:15 pm

Re: Too much Hash harms?

Post by Jouni » Thu Feb 02, 2017 1:21 pm

In the Stockfish forum this has been discussed/tested many times. They now seem to use 4MB for 10+0.1 and 64MB for 60+0.6 single-thread tests.
Jouni


Re: Too much Hash harms?

Post by Laskos » Thu Feb 02, 2017 1:32 pm

Here NPS are also computed:

Code: Select all

Stockfish dev.| depth=23| 30 positions x 4| average filling per position 64M: 

Hash   Time 

4M:    381s    nps: 1634271
32M:   348s    nps: 1635403
1024M: 361s    nps: 1536689


Komodo 10.3| depth=22| 30 positions x 4| average filling per position 48M: 

Hash   Time 

4M:    388s    nps: 1548932 
32M:   333s    nps: 1553464
1024M: 356s    nps: 1459819
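The node counts hgm asked for can be backed out from these figures as nodes ≈ time × nps (a derived estimate, not a direct measurement):

```python
# Back out total node counts from the reported time and nps:
# nodes ~= time_seconds * nps. Derived estimate, not measured directly.

runs = {
    "SF 4M":    (381, 1634271),
    "SF 32M":   (348, 1635403),
    "SF 1024M": (361, 1536689),
}

nodes = {name: t * nps for name, (t, nps) in runs.items()}
for name, n in nodes.items():
    print(name, n)
# Note: the 1024M run actually needs the FEWEST nodes (~555M vs
# ~623M at 4M); its time loss comes purely from the ~6% nps drop.
```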


Re: Too much Hash harms?

Post by Laskos » Thu Feb 02, 2017 1:35 pm

Tdunbug wrote:Can I ask what software you used for your testing? As I like to run a similar test on my dual core machine.
Shredder Classic GUI, 1 thread for stability of the results. I feed it EPD positions with no "bm" tag in them and have the engine analyze to a fixed depth.
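For reference, such an input file is plain EPD, one position per line; these example lines are illustrative, not the actual test set:

```text
rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - id "startpos";
rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - id "Sicilian";
```

With no "bm" (best move) opcode present, the GUI treats each line as a position to analyze rather than a test to solve.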


Re: Too much Hash harms?

Post by Laskos » Thu Feb 02, 2017 1:57 pm

hgm wrote:So that would be the hash filling in case of asymptotically large table?
Yes
It can be expected that there is very little benefit from remembering the leaf nodes of the tree, and the usual replacement schemes would overwrite those first, trying to protect the results of deeper searches. So if the table gets smaller than the number of different positions in the tree, the leaf nodes will start to compete with each other for space, and will only live in the table for a fraction of the time. But the deeper nodes will be hardly affected. And still many transpositions (of moves close to the horizon) that reach the leaf nodes will occur within their survival time. And for those that occur afterwards it is not very expensive to redo the evaluation of a single node.

IIRC, in my measurements the search time only started to go up once the overload factor exceeded 10.
I didn't know that; it might be worth testing at a factor of 4.
