This is the old function for storing an entry in a bucket (ENTRIES_PER_BUCKET = 4):
OLD:
Code:
pub fn store(&mut self, verification: u32, data: D, used_entries: &mut usize) {
    let mut idx_lowest_depth = 0;

    // Find the index of the entry with the lowest depth.
    for entry in 1..ENTRIES_PER_BUCKET {
        if self.bucket[entry].data.depth() < data.depth() {
            idx_lowest_depth = entry;
        }
    }

    // If the verification was 0, this entry in the bucket was never
    // used before. Count the use of this entry.
    if self.bucket[idx_lowest_depth].verification == 0 {
        *used_entries += 1;
    }

    // Store.
    self.bucket[idx_lowest_depth] = Entry { verification, data };
}
In the end this means that the incoming data is saved in the last entry that happens to have a lower depth than the incoming data. If no entry has a lower depth, the incoming data is saved in entry 0.
This obviously wasn't the intention. I wanted to save the incoming data in the entry with the lowest depth. (I noticed this while refactoring.) So I changed the function:
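To make the difference concrete, here's a small sketch with hypothetical depths showing which slot each loop ends up selecting (the depth values are made up for illustration):

```rust
fn main() {
    // Hypothetical depths of the 4 entries in a bucket; incoming data has depth 6.
    let depths = [7u32, 2, 9, 5];
    let incoming = 6u32;

    // OLD loop: compares each entry against the INCOMING depth, so the
    // index ends up at the LAST entry shallower than the incoming one.
    let mut idx_old = 0;
    for i in 1..depths.len() {
        if depths[i] < incoming {
            idx_old = i;
        }
    }

    // NEW loop: compares entries against EACH OTHER, so the index
    // ends up at the shallowest entry in the bucket.
    let mut idx_new = 0;
    for i in 1..depths.len() {
        if depths[i] < depths[idx_new] {
            idx_new = i;
        }
    }

    // Old code replaces entry 3 (depth 5, the last one below 6);
    // new code replaces entry 1 (depth 2, the true minimum).
    println!("old picks {idx_old}, new picks {idx_new}");
}
```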
NEW:
Code:
pub fn store(&mut self, verification: u32, data: D, used_entries: &mut usize) {
    let mut low = 0;

    // Find the index of the entry with the lowest depth.
    for i in 1..ENTRIES_PER_BUCKET {
        if self.bucket[i].data.depth() < self.bucket[low].data.depth() {
            low = i;
        }
    }

    // If the verification was 0, this entry in the bucket was never
    // used before. Count the use of this entry.
    if self.bucket[low].verification == 0 {
        *used_entries += 1;
    }

    // Store.
    self.bucket[low] = Entry { verification, data };
}
So in the end, the incoming data overwrites the entry with the lowest depth.
This seems to have dropped the playing strength of the engine by 30 points.
What am I missing here? It seems the second function is correct.
The only thing I can think of is that the hash table fills up with ever deeper entries that at some point are no longer useful, effectively rendering the buckets useless, because it's always the same entry being replaced. I would have to implement an aging system. The bug in the first function inadvertently seems to circumvent this (at least for some time), because it saves the incoming data into whatever entry happens to be the last one with a lower depth than the incoming one, even if that entry's depth is very high, just 1 lower than the incoming one.
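For what it's worth, a common aging scheme stores the search generation in each entry and prefers evicting entries written during an older search, falling back to lowest depth among equally old entries. Here's a minimal, self-contained sketch of that idea; the `age` field, the simplified `Entry`/`Bucket` types, and the tie-breaking rule are all my assumptions, not the original code:

```rust
const ENTRIES_PER_BUCKET: usize = 4;

#[derive(Clone, Copy, Default)]
struct Entry {
    verification: u32,
    depth: u32,
    age: u8, // hypothetical field: the search generation this entry was written in
}

struct Bucket {
    bucket: [Entry; ENTRIES_PER_BUCKET],
}

impl Bucket {
    // Sketch of age-aware replacement: prefer any entry from an older
    // search; among entries of the same age, take the lowest depth.
    fn store(&mut self, verification: u32, depth: u32, current_age: u8) {
        let mut low = 0;
        for i in 1..ENTRIES_PER_BUCKET {
            let (a, b) = (&self.bucket[i], &self.bucket[low]);
            let a_old = a.age != current_age;
            let b_old = b.age != current_age;
            // A stale entry always loses to a current-search entry;
            // otherwise the shallower entry loses.
            if (a_old && !b_old) || (a_old == b_old && a.depth < b.depth) {
                low = i;
            }
        }
        self.bucket[low] = Entry { verification, depth, age: current_age };
    }
}

fn main() {
    let mut b = Bucket { bucket: [Entry::default(); ENTRIES_PER_BUCKET] };
    // Fill with current-search entries (age 1) of decreasing depth.
    for (i, d) in [9, 8, 7, 6].iter().enumerate() {
        b.bucket[i] = Entry { verification: 1, depth: *d, age: 1 };
    }
    b.bucket[2].age = 0; // entry 2 is left over from a previous search
    b.store(42, 3, 1);   // shallow new data still evicts the stale entry
    println!("replaced index 2: {}", b.bucket[2].verification == 42);
}
```

With aging, deep-but-stale entries no longer block the bucket forever, which is exactly the failure mode described above.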
Am I correct here, or could it be something else?