|
this simplifies the code paths slightly, but perhaps what's nicer is
that it makes recursive mutexes fully reentrant, i.e. locking and
unlocking from a signal handler works even if the interrupted code was
in the middle of locking or unlocking.
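
a quick test of that property might look like this (posix itself does not
make the mutex functions async-signal-safe, so this relies on the stronger
guarantee described here):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m;
static volatile sig_atomic_t fired;

static void handler(int sig)
{
	pthread_mutex_lock(&m);    /* may interrupt main mid-lock/unlock */
	pthread_mutex_unlock(&m);
	fired = 1;
}

int main(void)
{
	pthread_mutexattr_t a;
	pthread_mutexattr_init(&a);
	pthread_mutexattr_settype(&a, PTHREAD_MUTEX_RECURSIVE);
	pthread_mutex_init(&m, &a);

	signal(SIGALRM, handler);
	alarm(1);
	while (!fired) {           /* hammer the lock until the signal lands */
		pthread_mutex_lock(&m);
		pthread_mutex_unlock(&m);
	}
	puts("handler locked and unlocked the mutex successfully");
	return 0;
}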
|
|
|
|
|
|
a reader unlocking the lock need only wake one waiter (necessarily a
writer), but a writer unlocking the lock must wake all waiters
(necessarily readers). if it only wakes one, the remainder can remain
blocked indefinitely, or at least until the first reader unlocks (in
which case the whole lock becomes serialized and behaves as a mutex
rather than a read lock).
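
a minimal sketch of that wake rule, with invented field names rather than
the actual rwlock layout:

#define _GNU_SOURCE
#include <limits.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

struct toy_rwlock { int lock; int waiters; };

/* a reader dropping the lock wakes at most one waiter (only a writer can
 * be waiting); a writer dropping it wakes everyone (they may all be readers) */
static void toy_rwlock_wake(struct toy_rwlock *rw, int was_writer)
{
	if (rw->waiters)
		syscall(SYS_futex, &rw->lock, FUTEX_WAKE,
			was_writer ? INT_MAX : 1);
}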
|
|
|
|
mildly tested, seems to work
|
|
|
|
passing a null pointer for %s is UB, but lots of broken programs do it anyway
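
the workaround amounts to substituting a placeholder instead of
dereferencing the null pointer; roughly (illustration only, not the actual
vfprintf code):

#include <stdio.h>

/* treat a null %s argument as a placeholder instead of crashing */
static void print_str_arg(FILE *f, const char *s)
{
	fputs(s ? s : "(null)", f);
}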
|
|
there is no need to send a wake when the lock count does not hit zero,
but when it does, all waiters must be woken (since all with the same
sign are eligible to obtain the lock).
|
|
seeking back can be performed by the caller, but if the caller doesn't
expect it, it will result in an infinite loop of failures.
|
|
eliminate the sequence number field and instead use the counter as the
futex. because of the way the lock is held, sequence numbers are
completely useless, and this frees up a field in the barrier structure
to be used as a waiter count for the count futex, which lets us avoid
some syscalls in the best case.
as of now, self-synchronized destruction and unmapping should be fully
safe. before any thread can return from the barrier, all threads in
the barrier have obtained the vm lock, and each holds a shared lock on
the barrier. the barrier memory is not inspected after the shared lock
count reaches 0, nor after the vm lock is released.
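
a very rough sketch of the exit rule, with invented field names rather than
the actual barrier layout:

#define _GNU_SOURCE
#include <limits.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

struct toy_barrier { atomic_int shared_count; };

/* the last thread to drop its shared lock is the last one allowed to touch
 * the memory; it wakes any destroyer waiting on the count before letting go */
static void toy_barrier_leave(struct toy_barrier *b)
{
	if (atomic_fetch_sub(&b->shared_count, 1) == 1)
		syscall(SYS_futex, &b->shared_count, FUTEX_WAKE, INT_MAX);
	/* no access to *b is valid past this point */
}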
|
|
i think this works, but it can be simplified. (next step)
|
|
the vm lock only waits for threads in the same process to exit.
actually this fix is not enough, but it's a start...
|
|
|
|
|
|
it was assuming the result of the condition it was supposed to be
checking for, i.e. that the thread ptr had already been initialized by
pthread_mutex_lock. use the slower call to be safe.
|
|
|
|
we're not required to check this except for error-checking mutexes,
but it doesn't hurt. the new test is actually simpler/lighter, and it
also eliminates the need to later check that pthread_mutex_unlock
succeeds.
|
|
when used with error-checking mutexes, pthread_cond_wait is required
to fail with EPERM if the mutex is not locked by the caller.
previously we relied on pthread_mutex_unlock to generate the error,
but this is not valid, since in the case of such invalid usage the
internal state of the cond variable has already been potentially
corrupted (due to access outside the control of the mutex). thus, we
have to check first.
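
the requirement can be exercised from an application like this
(pthread_cond_timedwait with an already-expired timeout is used so the
program terminates even on implementations that skip the check):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
	pthread_mutexattr_t a;
	pthread_mutexattr_init(&a);
	pthread_mutexattr_settype(&a, PTHREAD_MUTEX_ERRORCHECK);

	pthread_mutex_t m;
	pthread_mutex_init(&m, &a);
	pthread_cond_t c = PTHREAD_COND_INITIALIZER;

	/* deliberately wrong: waiting without holding the mutex; with an
	 * error-checking mutex this must fail with EPERM rather than touch
	 * the cond var's internal state */
	struct timespec ts = { .tv_sec = 0, .tv_nsec = 0 };
	int r = pthread_cond_timedwait(&c, &m, &ts);
	printf("pthread_cond_timedwait: %s\n", strerror(r));
	return 0;
}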
|
|
i set the return value but then never used it... oops!
|
|
|
|
not sure if this is correct/ideal. it needs further attention.
|
|
|
|
|
|
this implementation is rather heavy-weight, but it's the first
solution i've found that's actually correct. all waiters actually wait
twice at the barrier so that they can synchronize exit, and they hold
a "vm lock" that prevents changes to virtual memory mappings (and
blocks pthread_barrier_destroy) until all waiters are finished
inspecting the barrier.
thus, it is safe for any thread to destroy and/or unmap the barrier's
memory as soon as pthread_barrier_wait returns, without further
synchronization.
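
in other words, a pattern like the following is now safe (the immediate
destroy by the serial thread relies on the guarantee described above):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
static pthread_barrier_t b;

static void *worker(void *arg)
{
	if (pthread_barrier_wait(&b) == PTHREAD_BARRIER_SERIAL_THREAD) {
		/* exactly one waiter gets the serial return; it may destroy
		 * the barrier even while the others are still returning */
		pthread_barrier_destroy(&b);
		puts("barrier destroyed by serial thread");
	}
	return 0;
}

int main(void)
{
	pthread_t t[NTHREADS];
	pthread_barrier_init(&b, 0, NTHREADS);
	for (int i = 0; i < NTHREADS; i++) pthread_create(&t[i], 0, worker, 0);
	for (int i = 0; i < NTHREADS; i++) pthread_join(t[i], 0);
	return 0;
}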
|
|
mmap returns MAP_FAILED not 0 because some idiot thought the ability
to mmap the null pointer page would be a good idea...
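
the correct check, for reference:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* errors must be detected by comparing against MAP_FAILED, i.e.
	 * (void *)-1; a null return would be a theoretically valid mapping */
	void *p = mmap(0, 4096, PROT_READ|PROT_WRITE,
	               MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	munmap(p, 4096);
	return 0;
}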
|
|
lock out new waiters during the broadcast. otherwise the wait count
added to the mutex might be lower than the actual number of waiters
moved, and wakeups may be lost.
this issue could also be solved by temporarily setting the mutex
waiter count higher than any possible real count, then relying on the
kernel to tell us how many waiters were requeued, and updating the
counts afterwards. however the logic is more complex, and i don't
really trust the kernel. the solution here is also nice in that it
replaces some atomic cas loops with simple non-atomic ops under lock.
|
|
due to moving waiters from the cond var to the mutex in bcast, these
waiters upon wakeup would steal slots in the count from newer waiters
that had not yet been signaled, preventing the signal function from
taking any action.
to solve the problem, we simply use two separate waiter counts, so
that the original "total" waiters count is undisturbed by broadcast
and still available for signal.
|
|
the changes to syscall_ret are mostly no-ops in the generated code,
just cleanup of type issues and removal of some implementation-defined
behavior. the one exception is the change in the comparison value,
which is fixed so that 0xf...f000 (which in principle could be a valid
return value for mmap, although probably never in reality) is not
treated as an error return.
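
the resulting check has roughly this shape:

#include <errno.h>

/* only the last 4095 values (-4095..-1 as unsigned) are errno codes, so
 * 0xf...f000 (i.e. -4096) is passed through as a successful return */
long __syscall_ret(unsigned long r)
{
	if (r > -4096UL) {
		errno = -(long)r;
		return -1;
	}
	return r;
}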
|
|
testing revealed that the old implementation, while correct, was
giving way too many spurious wakeups due to races changing the value
of the condition futex. in a test program with 5 threads receiving
broadcast signals, the number of returns from pthread_cond_wait was
roughly 3 times what it should have been (2 spurious wakeups for every
legitimate wakeup). moreover, the magnitude of this effect seems to
grow with the number of threads.
the old implementation may also have had some nasty race conditions
with reuse of the cond var with a new mutex.
the new implementation is based on incrementing a sequence number with
each signal event. this sequence number has nothing to do with the
number of threads intended to be woken; it's only used to provide a
value for the futex wait to avoid deadlock. in theory there is a
danger of race conditions due to the value wrapping around after 2^32
signals. it would be nice to eliminate that, if there's a way.
testing showed no spurious wakeups (though they are of course
possible) with the new implementation, as well as slightly improved
performance.
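
a bare-bones sketch of the sequence-number idea, with invented names and
without the waiter accounting of the real code:

#define _GNU_SOURCE
#include <linux/futex.h>
#include <pthread.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

struct toy_cond { atomic_int seq; };

static void toy_signal(struct toy_cond *c)
{
	atomic_fetch_add(&c->seq, 1);             /* record that a signal happened */
	syscall(SYS_futex, &c->seq, FUTEX_WAKE, 1);
}

static void toy_wait(struct toy_cond *c, pthread_mutex_t *m)
{
	int seq = atomic_load(&c->seq);           /* snapshot before releasing the mutex */
	pthread_mutex_unlock(m);
	/* sleep only while the sequence number is unchanged; the value is a
	 * deadlock-avoidance token, not a count of threads to wake */
	syscall(SYS_futex, &c->seq, FUTEX_WAIT, seq, 0);
	pthread_mutex_lock(m);
}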
|
|
using swap has a race condition: the waiters must be added to the
mutex waiter count *before* they are taken off the cond var waiter
count, or wake events can be lost.
|
|
|
|
somehow i forgot that normal-type mutexes don't store the owner tid.
|
|
this avoids the "stampede effect" where pthread_cond_broadcast would
result in all waiters waking up simultaneously, only to immediately
contend for the mutex and go back to sleep.
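
the broadcast boils down to a requeue operation of roughly this shape
(illustrative, not the actual code):

#define _GNU_SOURCE
#include <limits.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* wake a single waiter and transfer the rest directly onto the mutex's
 * futex, so they only wake as the mutex becomes available */
static void broadcast_requeue(int *cond_futex, int *mutex_futex)
{
	syscall(SYS_futex, cond_futex, FUTEX_REQUEUE, 1, INT_MAX, mutex_futex);
}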
|
|
previously, a waiter could miss the 1->0 transition of block if
another thread set block to 1 again after the signal function set
block to 0. we now use the caller's thread id as a unique token to
store in block, which no other thread will ever write there. this
ensures that if block still contains the tid, no signal has occurred.
spurious wakeups will of course occur whenever there is a spurious
return from the futex wait and another thread has begun waiting on the
cond var. this should be a rare occurrence except perhaps in the
presence of interrupting signal handlers.
signal/bcast operations have been improved by noting that they need
not avoid inspecting the cond var's memory after changing the futex
value. because the standard allows spurious wakeups, there is no way
for an application to distinguish between a spurious wakeup just
before another thread called signal/bcast, and the deliberate wakeup
resulting from the signal/bcast call. thus the woken thread must
assume that the signalling thread may still be waiting to act on the
cond var, and therefore it cannot destroy/unmap the cond var.
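
a rough sketch of the tid-as-token wait, again with invented names and no
waiter accounting:

#define _GNU_SOURCE
#include <linux/futex.h>
#include <pthread.h>
#include <sys/syscall.h>
#include <unistd.h>

struct toy_cond2 { volatile int block; };

static void toy_wait2(struct toy_cond2 *c, pthread_mutex_t *m, int self_tid)
{
	c->block = self_tid;         /* a value no other thread will ever store here */
	pthread_mutex_unlock(m);
	/* the kernel only blocks us if block still equals our tid, i.e. no
	 * signal (and no newer waiter) has overwritten it in the meantime */
	syscall(SYS_futex, &c->block, FUTEX_WAIT, self_tid, 0);
	pthread_mutex_lock(m);
}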
|
|
it's amazing none of the conformance tests i've run even bothered to
check whether something so basic works...
|
|
|
|
|
|
if the file descriptor resource limit has been increased past
FD_SETSIZE, this is actually a security issue; we could write past the
end of the fd_set object. using poll makes it a non-issue, and
simplifies the code at the same time.
also, use clock_gettime instead of gettimeofday, for reduced bloat
and better entropy.
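
for reference, the poll-based wait has no fixed descriptor limit; a minimal
sketch of the pattern:

#include <poll.h>

/* a descriptor above 1024 cannot overflow a fixed-size fd_set here the way
 * select's FD_SET macros can */
static int wait_readable(int fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	return poll(&pfd, 1, timeout_ms);
}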
|
|
for now this is just a tiny optimization, but later if we support
cancellation from __stdio_read and __stdio_write, it will be necessary
for the recursive lock count to be zero in order for these functions
to know they are responsible for unlocking the FILE on cancellation.
|
|
the arm syscall abi requires 64-bit arguments to be aligned on an even
register boundary. these new macros facilitate meeting the abi
requirement without imposing significant ugliness on the code.
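
purely for illustration, macros of this general shape (hypothetical names,
not the ones added here) split a 64-bit value and insert the padding
register:

#define LO32(x) ((long)((unsigned long long)(x) & 0xffffffffu))
#define HI32(x) ((long)((unsigned long long)(x) >> 32))

/* e.g. a pread at the syscall level on little-endian arm becomes
 *   syscall(SYS_pread64, fd, buf, count, 0, LO32(off), HI32(off))
 * where the extra 0 keeps the 64-bit offset in an even register pair */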
|
|
|
|
at the same time, make struct statfs match the traditional definition
and make it more useful, especially the fsid_t stuff.
|
|
this was the cause of crashes in printf when attempting to print
floating point values.
|
|
this port assumes eabi calling conventions, eabi linux syscall
convention, and presence of the kernel helpers at 0xffff0f?0 needed
for threads support. otherwise it makes very few assumptions, and the
code should work even on armv4 without thumb support, as well as on
systems with thumb interworking. the bits headers declare this a
little endian system, but as far as i can tell the code should work
equally well on big endian.
some small details are probably broken; so far, testing has been
limited to qemu/aboriginal linux.
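
for reference, those kernel helpers sit at fixed addresses and are reached
through plain function pointers (arm-only, of course), e.g.:

typedef void *(*kuser_get_tls_t)(void);
typedef int (*kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);

#define KUSER_GET_TLS ((kuser_get_tls_t)0xffff0fe0)  /* returns thread pointer */
#define KUSER_CMPXCHG ((kuser_cmpxchg_t)0xffff0fc0)  /* returns 0 on success */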
|
|
it does not work, but some configure scripts will falsely detect
support then generate programs that crash when they call dlopen.
|
|
several things are changed. first, i have removed the old __uniclone
function signature and replaced it with the "standard" linux
__clone/clone signature. this was necessary to expose clone to
applications anyway, and it makes it easier to port __clone to new
archs, since it's now testable independently of pthread_create.
secondly, i have removed all references to the ugly ldt descriptor
structure (i386 only) from the c code and pthread structure. in places
where it is needed, it is now created on the stack just when it's
needed, in assembly code. thus, the i386 __clone function takes the
desired thread pointer as its argument, rather than an ldt descriptor
pointer, just like on all other sane archs. this should not affect
applications since there is really no way an application can use clone
with threads/tls in a way that doesn't horribly conflict with and
clobber the underlying implementation's use. applications are expected
to use clone only for creating actual processes, possibly with new
namespace features and whatnot.
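
the clone signature in question is the familiar
int clone(int (*fn)(void *), void *stack, int flags, void *arg, ...);
a typical application-level use, creating an actual process rather than a
thread:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child(void *arg)
{
	printf("child running with arg \"%s\"\n", (char *)arg);
	return 0;
}

int main(void)
{
	size_t sz = 64*1024;
	char *stack = malloc(sz);
	if (!stack) return 1;
	/* SIGCHLD and no CLONE_VM: fork-like child, waitable with waitpid;
	 * the stack grows down, so pass the top of the allocation */
	pid_t pid = clone(child, stack + sz, SIGCHLD, "hello");
	if (pid < 0) { perror("clone"); return 1; }
	waitpid(pid, 0, 0);
	free(stack);
	return 0;
}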
|
|
eventually we may have a working "generic" implementation for archs
that don't need anything special. in any case, the goal of having
stubs like this is to allow early testing of new ports before all the
details needed for threads have been filled in. more functions like
this will follow.
|
|
|
|
|