path: root/src/thread/pthread_mutex_unlock.c
2019-03-31  implement priority inheritance mutexes  (Rich Felker, -2/+12)
priority inheritance is a feature to mitigate priority inversion situations, where execution of a medium-priority thread can unboundedly block forward progress of a high-priority thread when a lock it needs is held by a low-priority thread.

the natural way to do priority inheritance would be with a simple futex flag to donate the calling thread's priority to a target thread while it waits on the futex. unfortunately, linux does not offer such an interface, but instead insists on implementing the whole locking protocol in kernelspace with special futex commands that exist solely for the purpose of doing PI mutexes. this would require the entire "trylock" logic to be duplicated in the timedlock code path for PI mutexes, since, once the previous lock holder releases the lock and the futex call returns, the lock is already held by the caller. obviously such code duplication is undesirable.

instead, I've made the PI timedlock success path set the mutex lock count to -1, which can be thought of as "not yet complete", since a lock count of 0 is "locked, with no recursive references". a simple branch in a non-hot path of pthread_mutex_trylock can then see and act on this state, skipping past the code that would check and take the lock to the same code path that runs after the lock is obtained for a non-PI mutex.

because we're forced to let the kernel perform the actual lock and unlock operations whenever the mutex is contended, we have to patch things up when it does the wrong thing:

1. the lock operation is not aware of whether the mutex is error-checking, so it will always fail with EDEADLK rather than deadlocking.

2. the lock operation is not aware of whether the mutex is robust, so it will successfully obtain mutexes in the owner-died state even if they're non-robust, whereas this operation should deadlock.

3. the unlock operation always sets the lock value to zero, whereas for robust mutexes, we want to set it to a special value indicating that the mutex obtained after its owner died was unlocked without marking it consistent, so that future operations all fail with ENOTRECOVERABLE.

the first of these is easy to solve, just by performing a futex wait on a dummy futex address to simulate deadlock or ETIMEDOUT as appropriate. but problems 2 and 3 interact in a nasty way. to solve problem 2, we need to back out the spurious success. but if waiters are present -- which we can't just ignore, because even if we don't want to wake them, the calling thread is incorrectly inheriting their priorities -- this requires using the kernel's unlock operation, which will zero the lock value, thereby losing the "owner died with lock held" state.

to solve these problems, we overload the mutex's waiters field, which is unused for PI mutexes since they don't call the normal futex wait functions, as an indicator that the PI mutex is permanently non-lockable. originally I wanted to use the count field, but there is one code path that needs to access this flag without synchronization: trylock's CAS failure path needs to be able to decide whether to fail with EBUSY or ENOTRECOVERABLE. the waiters field is already treated as a relaxed-order atomic in our memory model, so this works out nicely.
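as a minimal sketch of the trylock-side branch described above (the struct and field names here are illustrative, not musl's actual code):

    #include <errno.h>

    struct mutex_sketch { volatile int _m_lock, _m_count; };

    int trylock_sketch(struct mutex_sketch *m)
    {
        if (m->_m_count < 0) {
            /* PI timedlock path set count to -1 after the kernel
             * handed us the lock; acquisition is already done, so
             * skip straight to the shared post-acquire code. */
            m->_m_count = 0;
            goto success;
        }
        /* ... normal CAS-based acquisition would go here ... */
        return EBUSY;
    success:
        /* ... owner/robust-list bookkeeping shared with non-PI path ... */
        return 0;
    }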
2019-02-12  redesign robust mutex states to eliminate data races on type field  (Rich Felker, -2/+7)
in order to implement ENOTRECOVERABLE, the implementation has traditionally used a bit of the mutex type field to indicate that it's recovered after EOWNERDEAD and will go into ENOTRECOVERABLE state if pthread_mutex_consistent is not called before unlocking. while it's only the thread that holds the lock that needs access to this information (except possibly for the sake of pthread_mutex_consistent choosing between EINVAL and EPERM for erroneous calls), the change to the type field is formally a data race with all other threads that perform any operation on the mutex. no individual bits race, and no write races are possible, so things are "okay" in some sense, but it's still not good.

this patch moves the recovery/consistency state to the mutex owner/lock field which is rightfully mutable. bit 30, the same bit the kernel uses with a zero owner to indicate that the previous owner died holding the lock, is now used with a nonzero owner to indicate that the mutex is held but has not yet been marked consistent.

note that the kernel ABI also reserves bit 29 not to appear in any tid, so the sentinel value we use for ENOTRECOVERABLE, 0x7fffffff, does not clash with any tid plus bit 30.
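a sketch of the resulting lock-word encoding (the macro names are illustrative, not musl's):

    #include <stdint.h>

    #define OWNER_DEAD_BIT  0x40000000u /* bit 30, same bit the kernel uses */
    #define TID_MASK        0x3fffffffu /* bits 0-29: owner tid; bit 29 never set in a tid */
    #define NOT_RECOVERABLE 0x7fffffffu /* bit 30 plus all tid bits: clashes with no tid|bit30 */

    /* held after EOWNERDEAD, pthread_mutex_consistent not yet called */
    static int inconsistent(uint32_t lock)
    {
        return (lock & OWNER_DEAD_BIT) && (lock & TID_MASK) != 0
            && lock != NOT_RECOVERABLE;
    }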
2016-06-27  fix failure to obtain EOWNERDEAD status for process-shared robust mutexes  (Rich Felker, -1/+1)
Linux's documentation (robust-futex-ABI.txt) claims that, when a process dies with a futex on the robust list, bit 30 (0x40000000) is set to indicate the status. however, what actually happens is that bits 0-30 are replaced with the value 0x40000000, i.e. bits 0-29 (containing the old owner tid) are cleared at the same time bit 30 is set.

our userspace-side code for robust mutexes was written based on that documentation, assuming that the kernel would never produce a futex value of 0x40000000, since the low (owner) bits would always be non-zero. commit d338b506e39b1e2c68366b12be90704c635602ce introduced this assumption explicitly while fixing another bug in how non-recoverable status for robust mutexes was tracked. presumably the tests conducted at that time only checked non-process-shared robust mutexes, which are handled in pthread_exit (which implemented the documented kernel protocol, not the actual one) rather than by the kernel.

change pthread_exit robust list processing to match the kernel behavior, clearing bits 0-29 while setting bit 30, and use the value 0x7fffffff instead of 0x40000000 to encode non-recoverable status. the choice of value here is arbitrary; any value with at least one of bits 0-29 set should work just as well.
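a sketch of the corrected per-mutex step in the exit-time robust-list walk, written with C11 atomics rather than musl's internal atomics (names are illustrative):

    #include <stdatomic.h>
    #include <stdint.h>

    static void mark_owner_died(_Atomic uint32_t *lock, uint32_t self_tid)
    {
        uint32_t val = atomic_load_explicit(lock, memory_order_relaxed);
        /* match the kernel: if we still own the futex, replace bits
         * 0-30 with 0x40000000, keeping only the waiters flag (bit 31) */
        while ((val & 0x3fffffff) == self_tid) {
            uint32_t new = 0x40000000 | (val & 0x80000000);
            if (atomic_compare_exchange_weak(lock, &val, new))
                break;
        }
    }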
2015-04-10  redesign and simplify vmlock system  (Rich Felker, -5/+2)
this global lock allows certain unlock-type primitives to exclude mmap/munmap operations which could change the identity of virtual addresses while references to them still exist.

the original design mistakenly assumed mmap/munmap would conversely need to exclude the same operations which exclude mmap/munmap, so the vmlock was implemented as a sort of 'symmetric recursive rwlock'. this turned out to be unnecessary. commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the interval during which mmap/munmap held their side of the lock, but left the inappropriate lock design and some inefficiency.

the new design uses a separate function, __vm_wait, which does not hold any lock itself and only waits for lock users which were already present when it was called to release the lock. this is sufficient because of the way operations that need to be excluded are sequenced: the "unlock-type" operations using the vmlock need only block mmap/munmap operations that are precipitated by (and thus sequenced after) the atomic-unlock they perform while holding the vmlock. this allows for a spectacular lack of synchronization in the __vm_wait function itself.
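a minimal sketch of the shape of this design, using C11 atomics and stand-in futex helpers (wait_while_eq and wake_all are assumptions, not musl's actual names):

    #include <stdatomic.h>

    extern void wait_while_eq(_Atomic int *addr, int val); /* futex-wait stand-in */
    extern void wake_all(_Atomic int *addr);               /* futex-wake stand-in */

    static _Atomic int vm_users; /* count of threads inside an unlock-type op */

    void vm_lock(void) { atomic_fetch_add(&vm_users, 1); }

    void vm_unlock(void)
    {
        if (atomic_fetch_sub(&vm_users, 1) == 1)
            wake_all(&vm_users);
    }

    /* holds nothing itself: just waits for the users present at call
     * time to drain, which suffices because the operations it must
     * exclude are sequenced after the unlocks it waits for. */
    void vm_wait(void)
    {
        int tmp;
        while ((tmp = atomic_load(&vm_users)))
            wait_while_eq(&vm_users, tmp);
    }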
2014-09-06  use weak symbols for the POSIX functions that will be used by C threads  (Jens Gustedt, -1/+3)
The intent of this is to avoid name space pollution of the C threads implementation. This has two sides to it. First, we have to provide symbols that wouldn't pollute the name space for the C threads implementation. Second, we have to clean up some internal uses of POSIX functions such that they don't implicitly drag in such symbols.
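a schematic sketch of the technique: the weak_alias macro here mirrors the one in musl's internal headers, but this pattern only works compiled as part of the library itself, where the two names live in one translation unit:

    #include <pthread.h>

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* the implementation lives under a private, non-POSIX name... */
    int __pthread_mutex_unlock(pthread_mutex_t *m)
    {
        /* ... actual unlock logic ... */
        return 0;
    }

    /* ...and the POSIX name is only a weak alias, so internal callers
     * (including the C threads code) can use the private symbol
     * without dragging the POSIX name into the namespace. */
    weak_alias(__pthread_mutex_unlock, pthread_mutex_unlock);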
2014-08-17  make pointers used in robust list volatile  (Rich Felker, -2/+5)
when manipulating the robust list, the order of stores matters, because the code may be asynchronously interrupted by a fatal signal and the kernel will then access the robust list in what is essentially an async-signal context. previously, aliasing considerations made it seem unlikely that a compiler could reorder the stores, but proving that they could not be reordered incorrectly would have been extremely difficult. instead I've opted to make all the pointers used as part of the robust list, including those in the robust list head and in the individual mutexes, volatile.

in addition, the format of the robust list has been changed to point back to the head at the end, rather than ending with a null pointer. this is to match the documented kernel robust list ABI. the null pointer, which was previously used, only worked because faults during access terminate the robust list processing.
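a sketch of what "all the pointers ... volatile" means in declaration form (field names are illustrative; musl's actual layout lives in its internal headers):

    /* both the pointer object and the pointed-to data are treated as
     * asynchronously observable, hence volatile on both sides of the * */
    struct robust_list_head_sketch {
        volatile void *volatile head;    /* first node, or back to &head */
        long off;                        /* offset from node to lock word */
        volatile void *volatile pending; /* node being (un)linked right now */
    };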
2014-08-16  fix robust mutex unrecoverable status, and related clean-up  (Rich Felker, -3/+1)
a robust mutex should not enter the unrecoverable status until it's unlocked without marking it consistent. previously, flag 8 in the type was used as an indication of unrecoverable, but only honored after successful locking; this resulted in a race window where the unrecoverable mutex could appear to a second thread as locked/busy again while the first thread was in the process of observing it as unrecoverable.

now, flag 8 is used to mean that the mutex is in the process of being recovered, but not yet marked consistent. the flag only takes effect in pthread_mutex_unlock, where it causes the value 0x40000000 (owner dead flag, with old owner tid 0, an otherwise impossible state) to be stored in the lock. subsequent lock attempts will interpret this state as unrecoverable.
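a sketch of that unlock-side transition under this (pre-2019) scheme, with illustrative names and a stand-in for musl's atomic store:

    struct mutex_sketch { volatile int _m_type, _m_lock; };
    extern void a_store(volatile int *p, int v); /* musl-style atomic store */

    static void release_sketch(struct mutex_sketch *m)
    {
        /* flag 8: recovered after EOWNERDEAD, never marked consistent;
         * store the "owner died, tid 0" value instead of plain zero */
        int new = (m->_m_type & 8) ? 0x40000000 : 0;
        a_store(&m->_m_lock, new);
    }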
2014-08-16  fix false ownership of mutexes due to tid reuse, using robust list  (Rich Felker, -7/+5)
per the resolution of Austin Group issue 755, the POSIX requirement that ownership be enforced for recursive and error-checking mutexes does not allow a random new thread to acquire ownership of an orphaned mutex just because it happened to be assigned the same tid as the original owner that exited with the mutex locked.

one possible fix for this issue would be to disallow the kernel thread from terminating when it exits with mutexes held, permanently reserving the tid against reuse. however, this does not solve the problem for process-shared mutexes where lifetime cannot be controlled, so it was not used.

the alternate approach I've taken is to reuse the robust mutex system for non-robust recursive and error-checking mutexes. when a thread exits, the kernel (or the new userspace robust-list code added in commit b092f1c5fa9c048e12d002c7b972df5ecbe96d1d) will set the owner-died bit for these orphaned mutexes, but since the mutex-type is not robust, pthread_mutex_trylock will not allow a new owner to acquire them. instead, they remain in a state of being permanently locked, as desired.
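a sketch of the trylock-side check implied above (ROBUST_FLAG and the field layout are illustrative, not musl's actual encoding):

    #include <errno.h>

    #define OWNER_DEAD_BIT 0x40000000
    #define ROBUST_FLAG    0x10 /* illustrative type bit for "robust" */

    static int try_acquire_sketch(int lockval, int type)
    {
        if (lockval & OWNER_DEAD_BIT) {
            /* robust: hand the lock over with EOWNERDEAD so the new
             * owner can recover it; non-robust: never lockable again */
            return (type & ROBUST_FLAG) ? EOWNERDEAD : EBUSY;
        }
        /* ... normal CAS acquisition would go here ... */
        return 0;
    }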
2014-08-15  make futex operations use private-futex mode when possible  (Rich Felker, -4/+6)
private-futex uses the virtual address of the futex int directly as the hash key rather than requiring the kernel to resolve the address to an underlying backing for the mapping in which it lies. for certain usage patterns it improves performance significantly.

in many places, the code using futex __wake and __wait operations was already passing a correct fixed zero or nonzero flag for the priv argument, so no change was needed at the site of the call, only in the __wake and __wait functions themselves. in other places, especially where the process-shared attribute for a synchronization object was not previously tracked, additional new code is needed. for mutexes, the only place to store the flag is in the type field, so additional bit masking logic is needed for accessing the type.

for non-process-shared condition variable broadcasts, the futex requeue operation is unable to requeue from a private futex to a process-shared one in the mutex structure, so requeue is simply disabled in this case by waking all waiters.

for robust mutexes, the kernel always performs a non-private wake when the owner dies. in order not to introduce a behavioral regression in non-process-shared robust mutexes (when the owning thread dies), they are simply forced to be treated as process-shared for now, giving correct behavior at the expense of performance. this can be fixed by adding explicit code to pthread_exit to do the right thing for non-shared robust mutexes in userspace rather than relying on the kernel to do it, and will be fixed in this way later.

since not all supported kernels have private futex support, the new code detects EINVAL from the futex syscall and falls back to making the call without the private flag. no attempt to cache the result is made; caching it and using the cached value efficiently is somewhat difficult, and not worth the complexity when the benefits would be seen only on ancient kernels which have numerous other limitations and bugs anyway.
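a sketch of the fallback described in the last paragraph, using the raw syscall interface (the wrapper name is illustrative):

    #include <errno.h>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void futex_wait_sketch(volatile int *addr, int val, int priv)
    {
        int op = FUTEX_WAIT | (priv ? FUTEX_PRIVATE_FLAG : 0);
        if (syscall(SYS_futex, addr, op, val, (void *)0) < 0
            && priv && errno == EINVAL)
            /* pre-private-futex kernel: retry without the flag */
            syscall(SYS_futex, addr, FUTEX_WAIT, val, (void *)0);
    }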
2014-06-10  replace all remaining internal uses of pthread_self with __pthread_self  (Rich Felker, -1/+1)
prior to version 1.1.0, the difference between pthread_self (the public function) and __pthread_self (the internal macro or inline function) was that the former would lazily initialize the thread pointer if it was not already initialized, whereas the latter would crash in this case. since lazy initialization is no longer supported, use of pthread_self no longer makes sense; it simply generates larger, slower code.
2012-08-17  fix extremely rare but dangerous race condition in robust mutexes  (Rich Felker, -1/+7)
if new shared mappings of files/devices/shared memory can be made between the time a robust mutex is unlocked and its subsequent removal from the pending slot in the robust list header, the kernel can inadvertently corrupt data in the newly-mapped pages when the process terminates. I am fixing the bug by using the same global vm lock mechanism that was used to fix the race condition with unmapping barriers after pthread_barrier_wait returns.
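a sketch of the ordering this imposes on the unlock path (stand-in types and names; the details of musl's actual sequence may differ):

    #include <stdatomic.h>

    struct m_sketch { _Atomic int lock; void *next; };
    struct robust_sketch { volatile void *volatile pending; };
    extern void vm_lock(void);   /* excludes new mmaps while held */
    extern void vm_unlock(void);

    static void unlock_tail_sketch(struct robust_sketch *self, struct m_sketch *m)
    {
        self->pending = &m->next;  /* kernel cleans m up if we die here */
        vm_lock();                 /* no new mappings until pending clears */
        /* ... unlink m from the robust list ... */
        atomic_store(&m->lock, 0); /* the actual unlock */
        self->pending = 0;         /* m no longer needs kernel cleanup */
        vm_unlock();               /* new mappings may appear again */
    }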
2011-10-03  simplify robust mutex unlock code path  (Rich Felker, -4/+4)
right now it's questionable whether this change is an improvement or not, but if we later want to support priority inheritance mutexes, it will be important to have the code paths unified like this to avoid major code duplication.
2011-10-03  fix crash if pthread_mutex_unlock is called without ever locking  (Rich Felker, -1/+1)
this is valid for error-checking mutexes; otherwise it invokes UB and would be justified in crashing.
2011-10-03  use count=0 instead of 1 for recursive mutex with only one lock reference  (Rich Felker, -2/+2)
this simplifies the code paths slightly, but perhaps what's nicer is that it makes recursive mutexes fully reentrant, i.e. locking and unlocking from a signal handler works even if the interrupted code was in the middle of locking or unlocking.
2011-08-02  avoid accessing mutex memory after atomic unlock  (Rich Felker, -7/+8)
this change is needed to fix a race condition and ensure that it's possible to unlock and destroy or unmap the mutex as soon as pthread_mutex_lock succeeds. POSIX explicitly gives such an example in the rationale and requires an implementation to allow such usage.
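a sketch of the constraint, using C11 atomics and a stand-in wake function (musl uses its own atomics):

    #include <stdatomic.h>

    struct mutex_sketch { _Atomic int lock; int waiters; };
    extern void futex_wake(_Atomic int *addr, int cnt); /* stand-in */

    static void unlock_sketch(struct mutex_sketch *m)
    {
        int waiters = m->waiters;    /* every load happens before... */
        atomic_store(&m->lock, 0);   /* ...the releasing store... */
        if (waiters)
            futex_wake(&m->lock, 1); /* ...which passes only the address
                                      * onward, never dereferencing the
                                      * possibly-freed mutex again */
    }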
2011-03-30  avoid crash on stupid but allowable usage of pthread_mutex_unlock  (Rich Felker, -1/+3)
unlocking an unlocked mutex is not UB for robust or error-checking mutexes, so we must avoid calling __pthread_self (which might crash due to lack of thread-register initialization) until after checking that the mutex is locked.
2011-03-30  streamline mutex unlock to remove a useless branch, use a_store to unlock  (Rich Felker, -2/+6)
this roughly halves the cost of pthread_mutex_unlock, at least for non-robust, normal-type mutexes. the a_store change is in preparation for future support of archs which require a memory barrier or special atomic store operation, and also should prevent the possibility of the compiler misordering writes.
2011-03-17  implement robust mutexes  (Rich Felker, -2/+11)
some of this code should be cleaned up, e.g. using macros for some of the bit flags, masks, etc. nonetheless, the code is believed to be working and correct at this point.
2011-03-17  avoid function call to pthread_self in mutex unlock  (Rich Felker, -1/+1)
if the mutex was previously locked, we can assume pthread_self was already called at the time of locking, and thus that the thread pointer is initialized.
2011-03-17  unify lock and owner fields of mutex structure  (Rich Felker, -2/+1)
this change is necessary to free up one slot in the mutex structure so that we can use doubly-linked lists in the implementation of robust mutexes.
2011-03-08  fix and optimize non-default-type mutex behavior  (Rich Felker, -8/+5)
problem 1: mutex type from the attribute was being ignored by pthread_mutex_init, so recursive/errorchecking mutexes were never being used at all.

problem 2: ownership of recursive mutexes was not being enforced at unlock time.
2011-02-17  reorganize pthread data structures and move the definitions to alltypes.h  (Rich Felker, -8/+8)
this allows sys/types.h to provide the pthread types, as required by POSIX. this design also facilitates forcing ABI-compatible sizes in the arch-specific alltypes.h, while eliminating the need for developers changing the internals of the pthread types to poke around with arch-specific headers they may not be able to test.
2011-02-12  initial check-in, version 0.5.0  (tag: v0.5.0)  (Rich Felker, -0/+19)