despite documentation that makes them sound very different, the only
ABI-constraint difference between TLS variants II and I seems to be
that variant II stores the initial TLS segment immediately below the
thread pointer (i.e. the thread pointer points to the end of it),
while variant I stores the initial TLS segment above the thread
pointer, requiring the thread descriptor to be stored below. the
actual value stored in the thread pointer register also tends to have
arbitrary per-arch offsets applied to it for silly micro-optimization
purposes.
with these changes applied, TLS should be basically working on all
supported archs except microblaze. I'm still working on getting the
necessary information and a working toolchain that can build TLS
binaries for microblaze, but in theory, static-linked programs with
TLS and dynamic-linked programs where only the main executable uses
TLS should already work on microblaze.
alignment constraints have not yet been heavily tested, so it's
possible that this code does not always align TLS segments correctly
on archs that need TLS variant I.
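a rough illustration of the layout difference (sketch only, not
musl's code; names and the bias parameter are hypothetical):

    #include <stddef.h>

    /* variant II: the initial TLS segment ends at the thread pointer */
    void *tls_addr_v2(char *tp, size_t off)
    {
        return tp - off;
    }

    /* variant I: the segment sits above the thread pointer, past the
       descriptor area; arch_bias stands in for the per-arch offset
       mentioned above */
    void *tls_addr_v1(char *tp, size_t off, size_t arch_bias)
    {
        return tp + arch_bias + off;
    }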
|
|
the code in __libc_start_main is now responsible for parsing auxv,
rather than duplicating the parsing all over the place. this should
shave off a few cycles and some code size. __init_libc is left as an
external-linkage function despite the fact that it could be static, to
prevent it from being inlined and permanently wasting stack space when
main is called.
a few other minor changes are included, like eliminating per-thread
ssp canaries (they were likely broken when combined with certain
dlopen usages, and completely unnecessary) and some other unnecessary
checks. since this code gets linked into every program, it should be
as small and simple as possible.
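a sketch of the kind of single-pass auxv scan this describes (the
signature, names, and array bound are illustrative, not musl's actual
internals):

    #include <stddef.h>
    #include <elf.h>

    static size_t aux[64];

    void __init_libc(char **envp)
    {
        char **p = envp;
        while (*p) p++;                  /* auxv follows envp's NULL */
        size_t *auxv = (size_t *)(p + 1);
        for (size_t i = 0; auxv[i]; i += 2)
            if (auxv[i] < 64) aux[auxv[i]] = auxv[i + 1];
        /* consumers now read aux[AT_PAGESZ], aux[AT_RANDOM], etc.
           without re-parsing */
    }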
|
|
unlike other implementations, this one reserves memory for new TLS in
all pre-existing threads at dlopen-time, and dlopen will fail with no
resources consumed and no new libraries loaded if memory is not
available. memory is not immediately distributed to running threads;
that would be too complex and too costly. instead, assurances are made
that threads needing the new TLS can obtain it in an async-signal-safe
way from a buffer belonging to the dynamic linker/new module (via an
atomic fetch-and-add based allocator).
I've re-appropriated the lock that was previously used for __synccall
(synchronizing set*id() syscalls between threads) as a general
pthread_create lock. it's a "backwards" rwlock where the "read"
operation is safe atomic modification of the live thread count, which
multiple threads can perform at the same time, and the "write"
operation is making sure the count does not increase during an
operation that depends on it remaining bounded (__synccall or dlopen).
in static-linked programs that don't use __synccall, this lock is a
no-op and has no cost.
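the allocator part can be as simple as this sketch (hypothetical
names; since the reserve is sized and committed at dlopen time, the
fetch-and-add is async-signal-safe and cannot fail afterwards):

    #include <stddef.h>
    #include <stdatomic.h>

    #define RESERVE_SIZE 4096            /* hypothetical */
    static unsigned char reserve[RESERVE_SIZE];
    static atomic_size_t reserve_used;

    static void *reserve_alloc(size_t n)
    {
        size_t off = atomic_fetch_add(&reserve_used, n);
        return off + n <= RESERVE_SIZE ? reserve + off : NULL;
    }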
|
|
the design for TLS in dynamic-linked programs is mostly complete too,
but I have not yet implemented it. cost is nonzero but still low for
programs which do not use TLS and/or do not use threads (a few hundred
bytes of new code, plus dependency on memcpy). i believe it can be
made smaller at some point by merging __init_tls and __init_security
into __libc_start_main and avoiding duplicate auxv-parsing code.
at the same time, I've also slightly changed the logic pthread_create
uses to allocate guard pages to ensure that guard pages are not
counted towards commit charge.
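roughly, the trick is to map the whole region with no access and then
enable only the part above the guard, so the guard pages never carry
commit charge; a sketch under that assumption:

    #include <sys/mman.h>

    static void *map_stack(size_t size, size_t guard)
    {
        unsigned char *map = mmap(0, size + guard, PROT_NONE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (map == MAP_FAILED) return 0;
        if (mprotect(map + guard, size, PROT_READ | PROT_WRITE)) {
            munmap(map, size + guard);
            return 0;
        }
        return map + guard + size;       /* stacks grow down */
    }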
|
|
note that POSIX does not specify these functions as _Noreturn, because
POSIX is aligned with C99, not the new C11 standard. when POSIX is
eventually updated to C11, it will almost surely give these functions
the _Noreturn attribute. for now, the actual _Noreturn keyword is not
used anyway when compiling with a c99 compiler, which is what POSIX
requires; when it's available, the GCC __attribute__ is used instead.
in a few places, I've added infinite for loops at the end of _Noreturn
functions to silence compiler warnings. presumably
__builtin_unreachable could achieve the same thing, but it would only
work on newer GCCs and would not be portable. the loops should have
near-zero code size cost anyway.
like the previous _Noreturn commit, this one is based on patches
contributed by philomath.
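the shim plus the warning-silencing loop look roughly like this
(sketch only; the exact header logic in musl may differ):

    #if __STDC_VERSION__ >= 201112L
    /* C11: _Noreturn is a keyword, nothing to define */
    #elif defined(__GNUC__)
    #define _Noreturn __attribute__((__noreturn__))
    #else
    #define _Noreturn
    #endif

    #include <stdlib.h>

    _Noreturn void die(void)
    {
        exit(1);
        for (;;);  /* unreachable; silences "function does return"
                      warnings on compilers that can't see that */
    }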
|
|
to deal with the fact that the public headers may be used with pre-c99
compilers, __restrict is used in place of restrict, and defined
appropriately for any supported compiler. we also avoid the form
[restrict] since older versions of gcc rejected it due to a bug in the
original c99 standard, and instead use the form *restrict.
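the resulting shim amounts to something like this (per the
description above; treat it as a sketch):

    /* in an internal features header: */
    #if __STDC_VERSION__ >= 199901L
    #define __restrict restrict
    #elif !defined(__GNUC__)
    #define __restrict              /* pre-c99, non-gcc: drop it */
    #endif
    /* gcc understands __restrict natively, so it needs no define */

    /* public declarations then use the *restrict form: */
    char *strcpy(char *__restrict, const char *__restrict);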
|
|
some minor changes to how hard-coded signal sets for thread-related
purposes are handled were also needed, since the old object sizes were
not necessarily sufficient. things have gotten a bit ugly in this area,
and i think a cleanup is in order at some point, but for now the goal
is just to get the code working on all supported archs including mips,
which was badly broken by linux rejecting syscalls with the wrong
sigset_t size.
|
|
these could have caused memory corruption due to invalid accesses to
the next field. all should be fixed now; I found the errors with
fgrep -r '__lock(&': any such call is bogus, since the argument
should be an array (which already decays to a pointer).
|
|
after the thread unmaps its own stack/thread structure, the kernel,
performing child tid clear and futex wake, could clobber a new mapping
made at the same location as the just-removed thread's tid field.
disable kernel clearing of child tid to prevent this.
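the disabling itself is one syscall; a sketch of the idea
(set_tid_address points clear_child_tid at NULL, so the kernel no
longer writes to or futex-wakes the stale slot):

    #include <unistd.h>
    #include <sys/syscall.h>

    static void disable_tid_clear(void)
    {
        syscall(SYS_set_tid_address, 0);
        /* now safe for the thread to unmap its own stack and exit */
    }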
|
|
i originally omitted these (optional, per POSIX) interfaces because i
considered them backwards implementation details. however, someone
later brought to my attention a fairly legitimate use case: allocating
thread stacks in memory that's setup for sharing and/or fast transfer
between CPU and GPU so that the thread can move data to a GPU directly
from automatic-storage buffers without having to go through additional
buffer copies.
perhaps there are other situations in which these interfaces are
useful too.
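a usage sketch of the interfaces for that case (gpu_shared_alloc and
worker are hypothetical):

    #include <pthread.h>
    #include <stddef.h>

    extern void *gpu_shared_alloc(size_t);
    extern void *worker(void *);

    int spawn_on_shared_stack(pthread_t *t, void *arg)
    {
        size_t size = 1 << 20;     /* must be >= PTHREAD_STACK_MIN */
        void *stk = gpu_shared_alloc(size);
        if (!stk) return -1;
        pthread_attr_t a;
        pthread_attr_init(&a);
        pthread_attr_setstack(&a, stk, size);
        int r = pthread_create(t, &a, worker, arg);
        pthread_attr_destroy(&a);
        return r;
    }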
|
|
this action is now performed in pthread_self initialization; it must
be performed there in case the first call to pthread_create is from a
signal handler, where the old signal mask could otherwise be restored
when the handler returns.
|
|
no need to pass unnecessary extra arguments on to the core code in
pthread_create.c. this just wastes cycles and adds code bloat.
|
|
this change is necessary or pthread_create will always fail on
security-hardened kernels. i considered first trying to make the stack
executable and simply retrying without execute permissions when the
first try fails, but (1) this would incur a serious performance
penalty on hardened systems, and (2) having the stack be executable is
just a bad idea from a security standpoint.
if there is real-world "GNU C" code that uses nested functions with
threads, and it can't be fixed, we'll have to consider other ways of
solving the problem, but for now this seems like the best fix.
|
|
pthread structure has been adjusted to match the glibc/GCC abi for
where the canary is stored on i386 and x86_64. it will need variants
for other archs to provide the added security of the canary's entropy,
but even without that it still works as well as the old "minimal" ssp
support. eventually such changes will be made anyway, since they are
also needed for GCC/C11 thread-local storage support (not yet
implemented).
care is taken not to attempt initializing the thread pointer unless
the program actually uses SSP (detected by whether it references
__stack_chk_fail).
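the layout constraint, roughly (field names are illustrative):
compilers read the canary from %gs:0x14 on i386 and %fs:0x28 on
x86_64, so with the thread pointer at the struct base the canary must
be the sixth pointer-sized word (5*4=20 and 5*8=40 respectively):

    #include <stdint.h>

    struct pthread {
        struct pthread *self;   /* word 0: thread pointer lands here */
        void *pad[4];           /* words 1-4 */
        uintptr_t canary;       /* word 5: where ssp code looks */
        /* ... rest of the thread descriptor ... */
    };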
|
|
even if pthread_create/exit code is not linked, the run flag needs to
be checked and the cleanup function potentially run on pop. thus, move
the
code to the module that's always linked when pthread_cleanup_push/pop
is used.
|
|
the old abi was intended to duplicate glibc's abi at the expense of
being ugly and slow, but it turns out glibc was not even using that
abi except on non-gcc-compatible compilers (which it doesn't even
support) and was instead using an exceptions-in-c/unwind-based
approach whose abi we could not duplicate anyway without nasty
dwarf2/unwind integration.
the new abi is copied from a very old glibc abi, which seems to still
be supported/present in current glibc. it avoids all unwinding,
whether by sjlj or exceptions, and merely maintains a linked list of
cleanup functions to be called from the context of pthread_exit. i've
taken some care to ensure that longjmp out of a cleanup function
should work, even though it is not required to.
this change breaks abi compatibility with programs which were using
pthread cancellation, which is unfortunate, but that's why i'm making
the change now rather than later. considering that most pthread
features have not been usable until recently anyway, i don't see it as
a major issue at this point.
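the abi boils down to a caller-allocated frame and a pair of calls
bracketing it; roughly (modeled on the public pthread.h, details may
differ slightly):

    struct __ptcb {
        void (*__f)(void *);    /* cleanup function */
        void *__x;              /* its argument */
        struct __ptcb *__next;  /* link to the enclosing frame */
    };

    void _pthread_cleanup_push(struct __ptcb *, void (*)(void *), void *);
    void _pthread_cleanup_pop(struct __ptcb *, int);

    /* the macros open/close a block so the frame lives on the stack: */
    #define pthread_cleanup_push(f, x) \
        do { struct __ptcb __cb; _pthread_cleanup_push(&__cb, (f), (x));
    #define pthread_cleanup_pop(r) \
        _pthread_cleanup_pop(&__cb, (r)); } while (0)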
|
|
mmap returns MAP_FAILED not 0 because some idiot thought the ability
to mmap the null pointer page would be a good idea...
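i.e. the failure check has to be:

    #include <sys/mman.h>

    static void *alloc_stack(size_t n)
    {
        void *p = mmap(0, n, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        /* failure is MAP_FAILED, i.e. (void *)-1, not 0; the null
           page is, regrettably, a mappable address */
        return p == MAP_FAILED ? 0 : p;
    }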
|
|
several things are changed. first, i have removed the old __uniclone
function signature and replaced it with the "standard" linux
__clone/clone signature. this was necessary to expose clone to
applications anyway, and it makes it easier to port __clone to new
archs, since it's now testable independently of pthread_create.
secondly, i have removed all references to the ugly ldt descriptor
structure (i386 only) from the c code and pthread structure. in places
where it is needed, it is now created on the stack just when it's
needed, in assembly code. thus, the i386 __clone function takes the
desired thread pointer as its argument, rather than an ldt descriptor
pointer, just like on all other sane archs. this should not affect
applications since there is really no way an application can use clone
with threads/tls in a way that doesn't horribly conflict with and
clobber the underlying implementation's use. applications are expected
to use clone only for creating actual processes, possibly with new
namespace features and whatnot.
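for reference, the signature now exposed (the variadic tail carries
ptid/tls/ctid and is consumed only when the flags request it):

    int __clone(int (*func)(void *), void *stack, int flags,
                void *arg, ...);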
|
|
fix up clone signature to match the actual behavior. the new
__syncall_wait function allows a __synccall callback to wait for other
threads to continue without returning, so that it can resume action
after the caller finishes. this interface could be made significantly
more general/powerful with minimal effort, but i'll wait to do that
until it's actually useful for something.
|
|
cleanup push and pop are also no-ops if pthread_exit is not reachable.
this can make a big difference for library code which needs to protect
itself against cancellation, but which is unlikely to actually be used
in programs with threads/cancellation.
|
|
previously, pthread_cleanup_push/pop were pulling in all of
pthread_create due to dependency on the __pthread_unwind_next
function. this was not needed, as cancellation cleanup handlers can
never be called unless pthread_exit or pthread_cancel is reachable.
|
|
previously, stdio used spinlocks, which would be unacceptable if we
ever add support for thread priorities, and which yielded
pathologically bad performance if an application attempted to use
flockfile on a key file as a major/primary locking mechanism.
i had held off on making this change for fear that it would hurt
performance in the non-threaded case, but actually support for
recursive locking had already inflicted that cost. by having the
internal locking functions store a flag indicating whether they need
to perform unlocking, rather than using the actual recursive lock
counter, i was able to combine the conditionals at unlock time,
eliminating any additional cost, and also avoid a nasty corner case
where a huge number of calls to ftrylockfile could cause deadlock
later at the point of internal locking.
this commit also fixes some issues with usage of pthread_self
conflicting with __attribute__((const)) which resulted in crashes with
some compiler versions/optimizations, mainly in flockfile prior to
pthread_create.
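the flag-based scheme amounts to something like this sketch
(illustrative names; __lockfile returns nonzero only when it actually
acquired the lock, i.e. the thread did not already own it):

    #include <stdio.h>

    int __lockfile(FILE *);
    void __unlockfile(FILE *);

    /* a negative lock field means locking is not needed at all */
    #define FLOCK(f) \
        int __need_unlock = ((f)->lock >= 0 ? __lockfile(f) : 0)
    #define FUNLOCK(f) \
        do { if (__need_unlock) __unlockfile(f); } while (0)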
|
|
changing credentials in a multi-threaded program is extremely
difficult on linux because it requires synchronizing the change
between all threads, which have their own thread-local credentials on
the kernel side. this is further complicated by the fact that changing
the real uid can fail due to exceeding RLIMIT_NPROC, making it
possible that the syscall will succeed in some threads but fail in
others.
the old __rsyscall approach being replaced was robust in that it would
report failure if any one thread failed, but in this case, the program
would be left in an inconsistent state where individual threads might
have different uid. (this was not as bad as glibc, which would
sometimes even fail to report the failure entirely!)
the new approach being committed refuses to change real user id when
it cannot temporarily set the rlimit to infinity. this is completely
POSIX conformant since POSIX does not require an implementation to
allow real-user-id changes for non-privileged processes whatsoever.
still, setting the real uid can fail due to memory allocation in the
kernel, but this can only happen if there is not already a cached
object for the target user. thus, we forcibly serialize the syscall
attempts, and fail the entire operation on the first failure. this
*should* lead to an all-or-nothing success/failure result, but it's
still fragile and highly dependent on kernel developers not breaking
things worse than they're already broken.
ideally linux will eventually add a CLONE_USERCRED flag that would
give POSIX conformant credential changes without any hacks from
userspace, and all of this code would become redundant and could be
removed ~10 years down the line when everyone has abandoned the old
broken kernels. i'm not holding my breath...
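in the simplest (single-threaded) rendering, the policy described
above looks like this sketch; the real code additionally serializes
the per-thread syscalls:

    #include <sys/resource.h>
    #include <unistd.h>

    int change_real_uid(uid_t uid)
    {
        struct rlimit old, inf = { RLIM_INFINITY, RLIM_INFINITY };
        if (getrlimit(RLIMIT_NPROC, &old)) return -1;
        if (setrlimit(RLIMIT_NPROC, &inf))
            return -1;          /* refuse rather than half-succeed */
        int r = setuid(uid);
        setrlimit(RLIMIT_NPROC, &old);
        return r;
    }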
|
|
if thread id was reused by the kernel between the time pthread_kill
read it from the userspace pthread_t object and the time of the tgkill
syscall, a signal could be sent to the wrong thread. the tgkill
syscall was supposed to prevent this race (versus the old tkill
syscall) but it can't; it can only help in the case where the tid is
reused in a different process, but not when the tid is reused in the
same process.
the only solution i can see is an extra lock to prevent threads from
exiting while another thread is trying to pthread_kill them. it should
be very very cheap in the non-contended case.
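conceptually (hypothetical stand-ins for the internal thread
descriptor and lock primitives):

    #include <errno.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    struct thread { int tid, dead; volatile int killlock[2]; };
    void __lock(volatile int *);
    void __unlock(volatile int *);

    int thread_kill(struct thread *t, int sig)
    {
        int r;
        __lock(t->killlock);   /* target cannot exit while held */
        r = t->dead ? ESRCH
            : (syscall(SYS_tgkill, getpid(), t->tid, sig) ? errno : 0);
        __unlock(t->killlock);
        return r;
    }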
|
|
previously a long-running dtor could cause pthread_detach to block.
|
|
the new approach relies on the fact that the only ways to create
sigset_t objects without invoking UB are to use the sig*set()
functions, or to copy masks returned by sigprocmask, sigaction, etc.
or provided in the ucontext_t argument to a signal handler. thus, as
long as
sigfillset and sigaddset avoid adding the "protected" signals, there
is no way the application will ever obtain a sigset_t including these
bits, and thus no need to add the overhead of checking/clearing them
when sigprocmask or sigaction is called.
note that the old code actually *failed* to remove the bits from
sa_mask when sigaction was called.
the new implementations are also significantly smaller, simpler, and
faster due to ignoring the useless "GNU HURD signals" 65-1024, which
are not used and, if there's any sanity in the world, never will be
used.
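a sketch of the approach (signal numbers are illustrative; only the
64 real linux signal bits are ever filled):

    #include <signal.h>
    #include <string.h>

    #define SIGCANCEL   32      /* hypothetical internal signals */
    #define SIGSYNCCALL 33

    int sigfillset(sigset_t *set)
    {
        memset(set, 0, sizeof *set);
        memset(set, -1, 8);     /* signals 1-64 only */
        sigdelset(set, SIGCANCEL);
        sigdelset(set, SIGSYNCCALL);
        return 0;
    }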
|
|
this also de-uglifies the dummy function aliasing a bit.
|
|
if the exit was caused by cancellation, __cancel has already set these
flags anyway.
|
|
cancellation frames were not correctly popped, so this usage would not
only loop, but also reuse discarded and invalid parts of the stack.
|
|
this patch improves the correctness, simplicity, and size of
cancellation-related code. modulo any small errors, it should now be
completely conformant, safe, and resource-leak free.
the notion of entering and exiting cancellation-point context has been
completely eliminated and replaced with alternative syscall assembly
code for cancellable syscalls. the assembly is responsible for setting
up execution context information (stack pointer and address of the
syscall instruction) which the cancellation signal handler can use to
determine whether the interrupted code was in a cancellable state.
these changes eliminate race conditions in the previous generation of
cancellation handling code (whereby a cancellation request received
just prior to the syscall would not be processed, leaving the syscall
to block, potentially indefinitely), and remedy an issue where
non-cancellable syscalls made from signal handlers became cancellable
if the signal handler interrupted a cancellation point.
x86_64 asm is untested and may need a second try to get it right.
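a conceptual, x86_64-only rendering of the handler's test (in the
real code __cp_begin/__cp_end are labels inside the cancellable
syscall asm):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <ucontext.h>

    extern const char __cp_begin[], __cp_end[];

    static void cancel_handler(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;
        uintptr_t ip = uc->uc_mcontext.gregs[REG_RIP];
        if (ip >= (uintptr_t)__cp_begin && ip < (uintptr_t)__cp_end) {
            /* interrupted at a cancellation point: redirect the
               saved ip so __cancel runs when the handler returns */
        }
        /* otherwise the thread is not in a cancellable state and
           the context is left untouched */
    }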
|
|
otherwise we cannot support an application's desire to use
asynchronous cancellation within the callback function. this change
also slightly debloats pthread_create.c.
|
|
we take advantage of the fact that unless self->cancelpt is 1,
cancellation cannot happen. so just increment it by 2 to temporarily
block cancellation. this drops pthread_create.o well under 1k.
|
|
this is something of a tradeoff, as now set*id() functions, rather
than pthread_create, are what pull in the code overhead for dealing
with linux's refusal to implement proper POSIX thread-vs-process
semantics. my motivations are:
1. it's cleaner this way, especially cleaner to optimize out the
rsyscall locking overhead from pthread_create when it's not needed.
2. it's expected that only a tiny number of core system programs will
ever use set*id() functions, whereas many programs may want to use
threads, and making thread overhead tiny is an incentive for "light"
programs to try threads.
|
|
with these small changes, libc functions which need to call functions
which are cancellation points, but which themselves must not be
cancellation points, can use the CANCELPT_INHIBIT and CANCELPT_RESUME
macros to temporarily inhibit all cancellation.
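with the cancelpt convention noted above (cancellation is acted on
only when the counter is exactly 1), the macros can be as simple as
this sketch (hypothetical internal names):

    struct pthread { int cancelpt; /* ... */ };
    struct pthread *__pthread_self(void);

    #define CANCELPT_INHIBIT (__pthread_self()->cancelpt += 2)
    #define CANCELPT_RESUME  (__pthread_self()->cancelpt -= 2)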
|
|
otherwise a signal handler could see an inconsistent and nonconformant
program state where different threads have different uids/gids.
|
|
the problem: there is a (single-instruction) race condition window
between a thread flagging itself dead and decrementing itself from the
thread count. if it receives the rsyscall signal at this exact moment,
the rsyscall caller will never collect acknowledgements from enough
threads, and will deadlock forever. in previous versions of musl, the
about-to-terminate thread masked all signals prior to decrementing
the thread count, but this cost a whole syscall just to account for
extremely rare races.
the solution is a huge hack: rather than blocking in the signal
handler if the thread is dead, modify the signal mask of the saved
context and return in order to prevent further signal handling by the
dead thread. this allows the dead thread to continue decrementing the
thread count (if it had not yet done so) and exiting, even while the
live part of the program blocks for rsyscall.
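the hack in sketch form: block the signal in the *saved* context, so
the mask the kernel restores on handler return keeps it blocked
(thread_is_dead is a hypothetical stand-in for the dead flag):

    #include <signal.h>
    #include <ucontext.h>

    extern int thread_is_dead(void);

    static void rsyscall_handler(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;
        if (thread_is_dead()) {
            sigaddset(&uc->uc_sigmask, sig);
            return;     /* no further instances are delivered here */
        }
        /* ... normal rsyscall participation ... */
    }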
|
|
for some inexplicable reason, linux allows the sender of realtime
signals to spoof its identity. permission checks for sending signals
should limit the impact to same-user processes, but just to be safe,
we avoid trusting the siginfo structure and instead simply examine the
program state to see if we're in the middle of a legitimate rsyscall.