...
Although this appears to be harmless, it is possible (and likely) that the architecture this code runs on packs flag1 and flag2 into the same byte. If the two assignments are interleaved by the thread scheduler so that both threads read the shared byte before either writes it back, the store that completes last overwrites the other thread's update: one flag is set as intended, while the other keeps its previous value. This happens because both bit-fields fall within the same byte, which is the smallest unit the processor can write.
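For concreteness, the kind of declaration under discussion probably looks like the sketch below. The struct and field names mirror the compliant solution later in this section; the actual noncompliant listing is elided above, so treat this as an assumption rather than the original code.

```c
/* Assumed layout: with no synchronization, both 2-bit fields are
 * likely packed into the same byte, so a store to either one is a
 * read-modify-write of storage shared with the other field. */
struct multi_threaded_flags {
  unsigned int flag1 : 2;
  unsigned int flag2 : 2;
};

struct multi_threaded_flags flags;

void thread1(void) {
  flags.flag1 = 1;  /* unsynchronized read-modify-write */
}

void thread2(void) {
  flags.flag2 = 2;  /* unsynchronized read-modify-write of the same byte */
}
```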
For example, the following sequence of events could occur.
```
Thread 1: register 0 = flags
Thread 1: register 0 &= ~mask(flag1)
Thread 2: register 0 = flags
Thread 2: register 0 &= ~mask(flag2)
Thread 1: register 0 |= 1 << shift(flag1)
Thread 1: flags = register 0
Thread 2: register 0 |= 2 << shift(flag2)
Thread 2: flags = register 0
```
Even though each thread is modifying a separate bit-field, both threads are modifying the same location in memory. This is the same problem discussed in POS00-A. Avoid race conditions with multiple threads, but it is worse here because it is not obvious at first glance that the same memory location is being modified.
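The lost update can be made visible with a small stress test. The harness below is a sketch (the iteration count, the start-routine names, and the final consistency check are choices made here, not part of the original material); compiled with -pthread, it may occasionally report a flag that silently reverted to 0.

```c
#include <pthread.h>
#include <stdio.h>

struct multi_threaded_flags {
  unsigned int flag1 : 2;
  unsigned int flag2 : 2;
};

static struct multi_threaded_flags flags;

static void *set_flag1(void *arg) {
  (void)arg;
  flags.flag1 = 1;  /* unsynchronized read-modify-write */
  return NULL;
}

static void *set_flag2(void *arg) {
  (void)arg;
  flags.flag2 = 2;  /* unsynchronized read-modify-write of the same byte */
  return NULL;
}

int main(void) {
  for (int i = 0; i < 100000; i++) {
    pthread_t t1, t2;
    flags.flag1 = 0;
    flags.flag2 = 0;
    pthread_create(&t1, NULL, set_flag1, NULL);
    pthread_create(&t2, NULL, set_flag2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    if (flags.flag1 != 1 || flags.flag2 != 2) {
      printf("lost update on iteration %d: flag1=%u flag2=%u\n",
             i, (unsigned)flags.flag1, (unsigned)flags.flag2);
      return 1;
    }
  }
  puts("no lost update observed this run (the race is still present)");
  return 0;
}
```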
Compliant Solution
This compliant solution protects all accesses to the flags with a mutex, preventing an unfortunate thread scheduling interleaving from occurring. In addition, the flags are declared volatile to ensure that the compiler will not attempt to move operations on them outside the mutex.
```c
#include <pthread.h>

struct multi_threaded_flags {
  volatile unsigned int flag1 : 2;  /* unsigned so the 2-bit field can hold the value 2 */
  volatile unsigned int flag2 : 2;
  pthread_mutex_t mutex;
};

/* The mutex must be initialized before the threads run. */
struct multi_threaded_flags flags = { .mutex = PTHREAD_MUTEX_INITIALIZER };

void thread1(void) {
  pthread_mutex_lock(&flags.mutex);
  flags.flag1 = 1;
  pthread_mutex_unlock(&flags.mutex);
}

void thread2(void) {
  pthread_mutex_lock(&flags.mutex);
  flags.flag2 = 2;
  pthread_mutex_unlock(&flags.mutex);
}
```
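As a usage sketch appended to the compliant listing above (the start-routine wrappers and main() are assumptions added here, not part of the compliant solution), the two functions can be driven with pthread_create(); because the mutex serializes both read-modify-write sequences, both flags end up with their intended values.

```c
/* Hypothetical wrappers adapting thread1()/thread2() to the pthread
 * start-routine signature. */
static void *run_thread1(void *arg) { (void)arg; thread1(); return NULL; }
static void *run_thread2(void *arg) { (void)arg; thread2(); return NULL; }

int main(void) {
  pthread_t t1, t2;
  pthread_create(&t1, NULL, run_thread1, NULL);
  pthread_create(&t2, NULL, run_thread2, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  /* With the mutex held around each store, this prints flag1=1 flag2=2. */
  printf("flag1=%u flag2=%u\n", (unsigned)flags.flag1, (unsigned)flags.flag2);
  return 0;
}
```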
...