Ras - <...> multi-threading with several threads writing to the hash table isn't deterministic; it depends on when each thread writes and reads relative to the other worker threads.
Why read something that is going to change before it's used? Because of multi-threading, how many reads will return the same value before it changes is effectively random.
Why write information if it will change?
Why have multiple threads accessing the same data at the same time, when there is other work to be done?
The goal is to have the same PV lines displayed for each ply, each time you do a search on a given position, with and without multiple threads. Right. Believe me, there really is NO solution to this, if the goal is to have the parallel program run faster than the serial program. Painful but true. You can try any parallel chess program you can get your hands on, and see the same non-deterministic behavior.
Think of it like a perft. You will have x number of positions. There's no reason the result should ever be different. Otherwise, it would be like saying there's no such thing as a multithreaded perft. If the information isn't the same, it's not being handled correctly...
I don't understand why you're suggesting synchronization doesn't work with multiple threads.
Synchronization Pthread Example - Mutexes
A mutex lock is a mechanism that can be acquired by only one thread at a time. For other threads to get the same mutex, they must wait until it is released by the current owner of the mutex.
The key advantage of multithreaded code is that all threads see the same memory, so data is already shared between threads. But failure to coordinate access to that data can lead to incorrect results due to problems such as data races. The mutex lock is one way of synchronizing access to shared data.
1. Synchronization with Mutex
The mutual exclusion lock is the simplest and most primitive synchronization variable. It provides a single, absolute owner for the section of code (aka a critical section) that it brackets between the calls to pthread_mutex_lock() and pthread_mutex_unlock(). The first thread that locks the mutex gets ownership, and any subsequent attempt to lock it blocks, causing the calling thread to go to sleep. When the owner unlocks it, one of the sleepers will be awakened, made runnable, and given the chance to obtain ownership.
2. Synchronization with Mutex and Condition Variable
A condition variable enables threads to communicate about state changes. In other words, a condition variable allows one thread to inform other threads about changes in the state of a shared resource, and allows the other threads to wait for such notification. It lets a thread sleep (wait) until another thread signals it that some condition has arisen that it must respond to. Without condition variables, waiting threads would have to poll to check whether the condition has become true.
Code: Select all
std::mutex mtx;             // mutex for critical section
std::condition_variable cv; // condition variable for critical section
bool ready = false; // Tell threads to run
int current = 0; // current count
std::unique_lock<std::mutex> lck(mtx);
while(num != current || !ready){ cv.wait(lck); }
current++;
Code: Select all
void run()
{
    std::unique_lock<std::mutex> lck(mtx);
    ready = true;
    cv.notify_all();
}
A spin lock is essentially a mutex lock that waits by polling: it repeatedly tests its lock condition until that condition becomes true, instead of putting the thread to sleep. Spin locks are most often used on multiprocessor systems where the expected wait time for a lock is small. In these situations, it is often more efficient to poll than to block the thread, which involves a context switch and the updating of thread data structures.
3. Synchronization with Semaphore
A semaphore is a counting and signaling mechanism. We use it to allow threads access to a specified number of items. If there is a single item, then a semaphore is virtually the same as a mutex.
However, it is more commonly used in a situation where there are multiple items to be managed. Semaphores can also be used to signal between threads or processes. For example, to tell another thread that there is data present in a queue. There are two types of semaphores: named and unnamed semaphores.
Code: Select all
#include <semaphore.h>

int main()
{
    sem_t mySemaphore;
    sem_init(&mySemaphore, 0, 5); // 0 = shared between threads, initial count 5
    //...
    sem_destroy(&mySemaphore);
    return 0;
}
Producer / Consumer (Bounded buffer) problem
We will address the producer/consumer (aka Bounded Buffer) problem.
Suppose there are one or more producer threads and one or more consumer threads. Producers produce data items and wish to place them in a buffer; consumers then grab data items out of the buffer and consume them in some way.
Because the bounded buffer is a shared resource, we must of course require synchronized access to it to avoid any race condition. To understand this problem better, let us examine some actual code:
Code: Select all
sem_init(&empty, 0, MAX); // MAX buffers are empty to begin with...
sem_init(&full, 0, 0);    // ... and 0 are full

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&empty);
        put(i);
        sem_post(&full);
    }
}

void *consumer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&full);
        int b = get();
        sem_post(&empty);
        printf("%d\n", b);
    }
}
Imagine two producers both calling into put() at the same time. Assume producer 1 gets to run first and just starts to fill the first buffer entry (buffer[fill] = value, with fill still 0). Before producer 1 gets a chance to increment the fill counter to 1, it is interrupted. Producer 2 starts to run, and at the same line of code it also puts its data into the 0th element of buffer, which means that the old data there is overwritten!
As we can see, what we’ve forgotten here is mutual exclusion. The filling of a buffer and the incrementing of the index into the buffer form a critical section, and thus must be guarded carefully. So let’s use a binary semaphore and add some locks.
Code: Select all
sem_init(&mutex, 0, 1); // mutex = 1 because it is a lock

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&mutex);
        sem_wait(&empty);
        put(i);
        sem_post(&full);
        sem_post(&mutex);
    }
}

void *consumer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&mutex);
        sem_wait(&full);
        int b = get();
        sem_post(&empty);
        sem_post(&mutex);
        printf("%d\n", b);
    }
}
But this version deadlocks. Imagine the consumer runs first: it acquires the mutex, then calls sem_wait(&full), and since nothing has been produced yet it goes to sleep while still holding the mutex. A producer then runs. It has data to produce, and if it were able to run, it would wake the consumer thread and all would be good. Unfortunately, the first thing it does is call sem_wait(&mutex) on the binary mutex semaphore. The lock is already held by the sleeping consumer. Hence, the producer is now stuck waiting too.
Solution: We simply must reduce the scope of the lock.
Code: Select all
void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&empty);
        // scope of lock reduced
        sem_wait(&mutex);
        put(i);
        sem_post(&mutex);
        sem_post(&full);
    }
}

void *consumer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&full);
        // scope of lock reduced
        sem_wait(&mutex);
        int b = get();
        sem_post(&mutex);
        sem_post(&empty);
        printf("%d\n", b);
    }
}