path: root/src/thread/__lock.c
Age | Commit message | Author | Lines
2012-04-24 | ditch the priority inheritance locks; use malloc's version of lock | Rich Felker | -23/+3
i did some testing trying to switch malloc to use the new internal lock with priority inheritance, and my malloc contention test got 20-100 times slower. if priority inheritance futexes are this slow, it's simply too high a price to pay for avoiding priority inversion. maybe we can consider them somewhere down the road once the kernel folks get their act together on this (and preferably don't link it to glibc's inefficient lock API)...

as such, i've switched __lock to use malloc's implementation of lightweight locks, and updated all the users of the code to use an array with a waiter count for their locks. this should give optimal performance in the vast majority of cases, and it's simple.

malloc is still using its own internal copy of the lock code because it seems to yield measurably better performance with -O3 when it's inlined (20% or more difference in the contention stress test).
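for illustration, a minimal sketch of the lightweight lock described above: a two-int array where l[0] is the lock word and l[1] is the waiter count. the atomic helpers and the __wait/__wake wrappers are musl-internal names assumed here, and this is a reconstruction rather than the code from the commit (the real version also short-circuits entirely in single-threaded processes).

    /* assumed musl-internal primitives: a_swap/a_store are an atomic
       exchange and an atomic store; __wait increments *waiters,
       futex-waits while *addr == val, then decrements *waiters;
       __wake wakes up to cnt waiters sleeping on addr. */
    int a_swap(volatile int *, int);
    void a_store(volatile int *, int);
    void __wait(volatile int *addr, volatile int *waiters, int val, int priv);
    void __wake(volatile int *addr, int cnt, int priv);

    void __lock(volatile int *l)
    {
        /* spin on the exchange; on contention, sleep with the
           waiter count held in l[1] */
        while (a_swap(l, 1))
            __wait(l, l+1, 1, 1);
    }

    void __unlock(volatile int *l)
    {
        a_store(l, 0);
        /* the waiter count lets us skip the wake syscall entirely
           in the common uncontended case */
        if (l[1])
            __wake(l, 1, 1);
    }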
2012-04-24 | internal locks: new owner of contended lock must set waiters flag | Rich Felker | -1/+1
this bug probably would have gone unnoticed since it's only used in the fallback code for systems where priority-inheritance locking fails. unfortunately this approach results in one spurious wake syscall on the final unlock, when there are no waiters remaining. the alternative (possibly better) would be to use broadcast wakes instead of reflagging the waiter unconditionally, and let each waiter reflag itself; this saves one syscall at the expense of invoking the "thundering herd" effect (worse performance degradation) when there are many waiters.

ideally we would be able to update all of our locks to use an array of two ints rather than a single int, and use a separate counter system like proper mutexes use; then we could avoid all spurious wake calls without resorting to broadcasts. however, it's not clear to me that priority inheritance futexes support this usage. the kernel sets the waiters flag for them (just like we're doing now) and i can't tell if it's safe to bypass the kernel when unlocking just because we know (from private data, the waiter count) that there are no waiters. this is something that could be explored in the future.
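the reflagging logic in question follows the shape of the classic three-state futex lock. a hedged sketch using raw futex syscalls and gcc atomic builtins rather than musl's internal names:

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* lock word: 0 = unlocked, 1 = locked, 2 = locked with possible waiters */
    static int a_cas(volatile int *p, int t, int s)
    {
        __atomic_compare_exchange_n(p, &t, s, 0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return t;
    }
    static int a_swap(volatile int *p, int v)
    {
        return __atomic_exchange_n(p, v, __ATOMIC_SEQ_CST);
    }

    static void lock(volatile int *l)
    {
        if (!a_cas(l, 0, 1)) return;    /* uncontended fast path */
        /* contended path: swap in 2, not 1 -- a waiter that wakes up
           and wins the lock must re-set the waiters flag the unlocker
           cleared, which is the invariant this commit fixes */
        while (a_swap(l, 2))
            syscall(SYS_futex, l, FUTEX_WAIT, 2, 0, 0, 0);
    }

    static void unlock(volatile int *l)
    {
        /* clearing the word discards the waiters flag, so a wake is
           issued whenever the flag was set -- even if the last waiter
           already left; that is the one spurious wake described above */
        if (a_swap(l, 0) == 2)
            syscall(SYS_futex, l, FUTEX_WAKE, 1, 0, 0, 0);
    }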
2012-04-24 | new internal locking primitive; drop spinlocks | Rich Felker | -6/+27
we use priority inheritance futexes if possible so that the library cannot hit internal priority inversion deadlocks in the presence of realtime priority scheduling (full support to be added later).
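the PI flavor keys the futex word on the owner's thread id so the kernel knows whom to priority-boost while the lock is held. a rough sketch of the userspace side of the protocol (hypothetical code, not this commit's; the commit also needs a plain-futex fallback for kernels where PI locking fails):

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* PI lock word: 0 = unlocked, otherwise the owner's tid, possibly
       with FUTEX_WAITERS or'd in by the kernel */
    static void pi_lock(volatile int *l)
    {
        int expect = 0, tid = syscall(SYS_gettid);
        if (__atomic_compare_exchange_n(l, &expect, tid, 0,
                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
            return;                     /* uncontended: we own the lock */
        /* contended: the kernel queues us by priority, sets FUTEX_WAITERS,
           and lends our priority to the current owner */
        syscall(SYS_futex, l, FUTEX_LOCK_PI, 0, 0, 0, 0);
    }

    static void pi_unlock(volatile int *l)
    {
        int tid = syscall(SYS_gettid), expect = tid;
        /* fast path: no waiters flag set, just clear the word */
        if (__atomic_compare_exchange_n(l, &expect, 0, 0,
                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
            return;
        /* FUTEX_WAITERS is set: the kernel must hand the lock off to
           the highest-priority waiter */
        syscall(SYS_futex, l, FUTEX_UNLOCK_PI, 0, 0, 0, 0);
    }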
2011-09-16 | use a_swap rather than old name a_xchg | Rich Felker | -1/+1
2011-06-14 | minor locking optimizations | Rich Felker | -1/+1
2011-04-06 | consistency: change all remaining syscalls to use SYS_ rather than __NR_ prefix | Rich Felker | -1/+1
2011-03-19 | syscall overhaul part two - unify public and internal syscall interface | Rich Felker | -2/+1
with this patch, the syscallN() functions are no longer needed; a variadic syscall() macro allows syscalls with anywhere from 0 to 6 arguments to be made with a single macro name. also, manually casting each non-integer argument with (long) is no longer necessary; the casts are hidden in the macros.

some source files which depended on being able to define the old macro SYSCALL_RETURNS_ERRNO have been modified to directly use __syscall() instead of syscall(). references to SYSCALL_SIGSET_SIZE and SYSCALL_LL have also been changed. x86_64 has not been tested, and may need a follow-up commit to fix any minor bugs/oversights.
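the argument-counting trick behind such a variadic macro looks roughly like this (a simplified reconstruction, not the exact musl macros; __syscall6_impl is a stand-in name for the arch-specific asm backend, and a real implementation would also route the raw return value through errno handling):

    /* every argument is cast to long so callers don't have to */
    #define __scc(x) ((long)(x))

    /* count arguments by how far they push the literals to the right;
       the syscall number occupies one slot, so the result is the
       number of actual syscall arguments (0-6) */
    #define __NARGS_X(a,b,c,d,e,f,g,h,n,...) n
    #define __NARGS(...) __NARGS_X(__VA_ARGS__,7,6,5,4,3,2,1,0,)
    #define __CONCAT_X(a,b) a##b
    #define __CONCAT(a,b) __CONCAT_X(a,b)

    long __syscall6_impl(long n, long a, long b, long c, long d, long e, long f);

    #define __syscall0(n)       __syscall6_impl(n,0,0,0,0,0,0)
    #define __syscall1(n,a)     __syscall6_impl(n,__scc(a),0,0,0,0,0)
    #define __syscall2(n,a,b)   __syscall6_impl(n,__scc(a),__scc(b),0,0,0,0)
    #define __syscall3(n,a,b,c) __syscall6_impl(n,__scc(a),__scc(b),__scc(c),0,0,0)
    /* ...and so on up to __syscall6 */

    /* syscall(SYS_write, fd, buf, len) pastes to __syscall3(SYS_write, fd, buf, len) */
    #define syscall(...) __CONCAT(__syscall, __NARGS(__VA_ARGS__))(__VA_ARGS__)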
2011-02-12 | initial check-in, version 0.5.0 (tag: v0.5.0) | Rich Felker | -0/+12