the volatile hack in STRICT_ASSIGN is only needed if
assignment is not respected and excess precision is kept.
gcc -fexcess-precision=standard and -ffloat-store both
respect assignment and musl uses these flags by default.
i kept the macro for now so the workaround may be used
for bad compilers in the future.
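for reference, the workaround amounts to bouncing the value through a
volatile temporary so the compiler is forced to narrow it to the
declared type at the assignment; a minimal sketch (not necessarily the
exact macro):

    #define STRICT_ASSIGN(type, lval, rval) do { \
        volatile type __tmp = (rval);            \
        (lval) = __tmp;                          \
    } while (0)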
|
|
linux's sched_* syscalls actually implement the TPS (thread
scheduling) functionality, not the PS (process scheduling)
functionality which the sched_* functions are supposed to have.
omitting support for the PS option (and having the sched_* interfaces
fail with ENOSYS rather than omitting them, since some broken software
assumes they exist) seems to be the only conforming way to do this on
linux.
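a sketch of the conforming-stub approach described above (illustrative
code, not the actual source file):

    #include <errno.h>
    #include <sched.h>

    int sched_setscheduler(pid_t pid, int policy,
                           const struct sched_param *param)
    {
        errno = ENOSYS;
        return -1;
    }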
|
|
this function does not obey the normal calling convention; like a
syscall instruction, it's expected not to clobber any registers except
the return value. clobbering edx could break callers that were reusing
the value cached in edx after the syscall returns.
|
|
this mirrors the stdio_impl.h cleanup. one header which is not
strictly needed, errno.h, is left in pthread_impl.h: since pthread
functions return their error codes rather than using errno,
nearly every single pthread function needs the errno constants.
in a few places, rather than bringing in string.h to use memset, the
memset was replaced by direct assignment. this seems to generate much
better code anyway, and makes many functions which were previously
non-leaf functions into leaf functions (possibly eliminating a great
deal of bloat on some platforms where non-leaf functions require ugly
prologue and/or epilogue).
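a made-up example of the memset-to-assignment change (struct and
function names are hypothetical):

    #include <string.h>

    struct foo { int a, b; void *p; };

    /* before: the memset call makes the enclosing function non-leaf */
    void reset_old(struct foo *f) { memset(f, 0, sizeof *f); }

    /* after: direct assignment of a zero value; no library call */
    void reset_new(struct foo *f) { *f = (struct foo){0}; }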
|
|
this header evolved to facilitate the extremely lazy practice of
omitting explicit includes of the necessary headers in individual
stdio source files; not only was this sloppy, but it also increased
build time.
now, stdio_impl.h is only including the headers it needs for its own
use; any further headers needed by source files are included directly
where needed.
|
|
some of these were coming from stdio functions locking files without
unlocking them. I believe it's useful for this pattern to produce a
warning, so I added a new macro which self-documents that the file
will never be unlocked, suppressing the warning in the few places
where it is a false positive.
|
|
on x86 and some other archs, functions which make function calls which
might go through a PLT incur a significant overhead cost loading the
GOT register prior to making the call. this load is utterly useless in
musl, since all calls are bound at library-creation time using
-Bsymbolic-functions, but the compiler has no way of knowing this, and
attempts to set the default visibility to protected have failed due to
bugs in GCC and binutils.
this commit simply manually assigns hidden/protected visibility, as
appropriate, to a few internal-use-only functions which have many
callers, or which have callers that are hot paths like getc/putc. it
shaves about 5k off the i386 libc.so with -Os. many of the
improvements are in syscall wrappers, where the benefit is just size
and performance improvement is unmeasurable noise amid the syscall
overhead. however, stdio may be measurably faster.
if in the future there are toolchains that can do the same thing
globally without introducing linking bugs, it might be worth
considering removing these workarounds.
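mechanically, the change is just visibility attributes on declarations
in internal headers, along these lines (sketch, using __syscall_ret as
an example of such an internal function):

    /* callers get a direct call: no PLT, no GOT-register setup */
    __attribute__((__visibility__("hidden")))
    long __syscall_ret(unsigned long);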
|
|
1. don't open /dev/null just as a basis to copy flags; use shared
__fmodeflags function to get the right file flags for the mode.
2. handle the (probably invalid, but whatever) case where the
original stream's file descriptor was closed; previously, the logic
re-closed it.
3. accept the "e" mode flag for close-on-exec; update dup3 to fall back
to using dup2 so we can simply call __dup3 instead of putting fallback
logic in freopen itself.
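the shared helper from point 1 boils down to a mode-string to
open-flags mapping roughly like the following (a sketch, not the
verbatim source):

    #include <fcntl.h>
    #include <string.h>

    int fmodeflags(const char *mode)
    {
        int flags;
        if (strchr(mode, '+')) flags = O_RDWR;
        else if (*mode == 'r') flags = O_RDONLY;
        else flags = O_WRONLY;
        if (strchr(mode, 'x')) flags |= O_EXCL;
        if (strchr(mode, 'e')) flags |= O_CLOEXEC;
        if (*mode != 'r') flags |= O_CREAT;
        if (*mode == 'w') flags |= O_TRUNC;
        if (*mode == 'a') flags |= O_APPEND;
        return flags;
    }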
|
|
this will prevent gnulib from wrapping our strtod to handle this
useless feature.
|
|
with this change, pcc-built musl libc.so seems to work correctly. the
problem is that pcc generates GOT lookups for external-linkage symbols
even if they are hidden, rather than using GOT-relative addressing.
the entire reason we're using hidden visibility on the __libc object
is to make it accessible prior to relocations -- not to mention
inexpensive to access. unfortunately, the workaround makes it even
more expensive on pcc.
when the pcc issue is fixed, an appropriate version test should be
added so new pcc can use the much more efficient variant.
|
|
this doubles the performance of the fastest syscalls on the atom I
tested it on; improvement is reportedly much more dramatic on
worst-case cpus. cannot be used for cancellable syscalls.
|
|
unlike other implementations, this one reserves memory for new TLS in
all pre-existing threads at dlopen-time, and dlopen will fail with no
resources consumed and no new libraries loaded if memory is not
available. memory is not immediately distributed to running threads;
that would be too complex and too costly. instead, assurances are made
that threads needing the new TLS can obtain it in an async-signal-safe
way from a buffer belonging to the dynamic linker/new module (via an
atomic fetch-and-add based allocator).
I've re-appropriated the lock that was previously used for __synccall
(synchronizing set*id() syscalls between threads) as a general
pthread_create lock. it's a "backwards" rwlock where the "read"
operation is safe atomic modification of the live thread count, which
multiple threads can perform at the same time, and the "write"
operation is making sure the count does not increase during an
operation that depends on it remaining bounded (__synccall or dlopen).
in static-linked programs that don't use __synccall, this lock is a
no-op and has no cost.
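the async-signal-safe allocation amounts to a fetch-and-add bump
allocator over a buffer that dlopen sized in advance; a sketch of the
idea (C11 atomics, made-up names):

    #include <stdatomic.h>
    #include <stddef.h>

    struct tls_pool {
        unsigned char *base;
        size_t size;
        _Atomic size_t used;
    };

    static void *tls_pool_alloc(struct tls_pool *p, size_t n)
    {
        size_t old = atomic_fetch_add(&p->used, n);
        /* dlopen reserved enough for all threads, so this "can't" fail */
        if (old + n > p->size) return 0;
        return p->base + old;
    }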
|
|
this code will not work yet because the necessary relocations are not
supported, and cannot be supported without some internal changes to
how relocation processing works (coming soon).
|
|
the design for TLS in dynamic-linked programs is mostly complete too,
but I have not yet implemented it. cost is nonzero but still low for
programs which do not use TLS and/or do not use threads (a few hundred
bytes of new code, plus dependency on memcpy). i believe it can be
made smaller at some point by merging __init_tls and __init_security
into __libc_start_main and avoiding duplicate auxv-parsing code.
at the same time, I've also slightly changed the logic pthread_create
uses to allocate guard pages to ensure that guard pages are not
counted towards commit charge.
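the usual way to keep guard pages out of commit charge is to map the
whole region PROT_NONE (reserving address space but committing
nothing) and then enable access only past the guard; a sketch:

    #include <sys/mman.h>
    #include <stddef.h>

    void *alloc_stack(size_t guard, size_t stacksize)
    {
        size_t size = guard + stacksize;
        unsigned char *map = mmap(0, size, PROT_NONE,
                                  MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        if (map == MAP_FAILED) return 0;
        if (mprotect(map + guard, stacksize, PROT_READ|PROT_WRITE)) {
            munmap(map, size);
            return 0;
        }
        return map + size; /* stacks grow down; return the top */
    }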
|
|
based on initial work by rdp, with heavy modifications. some features
including threads are untested because qemu app-level emulation seems
to be broken and I do not have a proper system image for testing.
|
|
no syscalls actually use that many arguments; the issue is that some
syscalls with 64-bit arguments have them ordered badly so that
breaking them into aligned 32-bit half-arguments wastes slots with
padding, and a 7th slot is needed for the last argument.
|
|
this code was using $10 to save the syscall number, but $10 is not
necessarily preserved by the kernel across syscalls. only mattered for
syscalls that got interrupted by a signal and restarted. as far as i
can tell, $25 is preserved by the kernel across syscalls.
|
|
now public syscall.h only exposes __NR_* and SYS_* constants and the
variadic syscall function. no macros or inline functions, no
__syscall_ret or other internal details, no 16-/32-bit legacy syscall
renaming, etc. this logic has all been moved to src/internal/syscall.h
with the arch-specific parts in arch/$(ARCH)/syscall_arch.h, and the
amount of arch-specific stuff has been reduced to a minimum.
changes still need to be reviewed/double-checked. minimal testing on
i386 and mips has already been performed.
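from an application's point of view, the public interface is now just
the constants plus the variadic function, e.g.:

    #include <sys/syscall.h> /* __NR_* and SYS_* constants */
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        long tid = syscall(SYS_gettid); /* variadic syscall function */
        printf("tid=%ld\n", tid);
        return 0;
    }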
|
|
this affects at least the case of very long inputs, but may also
affect shorter inputs that become long due to growth while upscaling.
basically, the logic for the circular buffer indices of the initial
base-10^9 digit and of the slot one past the final digit assumes, for
simplicity of the loop logic, the invariant that the two are never
equal. the upscale loop, which can increase the length of the
base-10^9 representation, attempted to preserve this invariant, but
was actually only ensuring that the end index did not loop around past
the start index, not that the two never become equal.
the main (only?) effect of this bug was that subsequent logic treats
the excessively long number as having no digits, leading to junk
results.
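to illustrate the invariant in the abstract (this is not the actual
floatscan code): when start==end is reserved to mean "no digits", code
that grows the number must keep the indices from ever meeting, not
merely keep the end index from lapping the start index:

    #define N 128
    static unsigned d[N];       /* base-10^9 digits, circular */
    static unsigned start, end; /* indices mod N; start==end means empty */

    static void append(unsigned v)
    {
        unsigned next = (end + 1) % N;
        /* appending would make the indices meet: drop precision at the
           tail rather than producing the "empty" encoding */
        if (next == start) return;
        d[end] = v;
        end = next;
    }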
|
|
optimized to avoid allocation and return lines directly out of the
stream buffer whenever possible.
|
|
some minor changes to how hard-coded sets for thread-related purposes
are handled were also needed, since the old object sizes were not
necessarily sufficient. things have gotten a bit ugly in this area,
and i think a cleanup is in order at some point, but for now the goal
is just to get the code working on all supported archs including mips,
which was badly broken by linux rejecting syscalls with the wrong
sigset_t size.
|
|
it's expected that this will be needed/useful only in asm, so I've
given it its own symbol that can be addressed in pc-relative ways from
asm rather than adding a field in the __libc structure which would
require hard-coding the offset wherever it's used.
|
|
these could have caused memory corruption due to invalid accesses to
the next field. all should be fixed now; I found the errors with fgrep
-r '__lock(&', a usage which is bogus since the argument should
already be an array.
|
|
basically, this version of the code was obtained by starting with
rdp's work from his ellcc source tree, adapting it to musl's build
system and coding style, auditing the bits headers for discrepancies
with kernel definitions or glibc/LSB ABI or large file issues, fixing
up incompatibility with the old binutils from aboriginal linux, and
adding some new special cases to deal with the oddities of sigaction
and pipe syscall interfaces on mips.
at present, minimal test programs work, but some interfaces are broken
or missing. threaded programs probably will not link.
|
|
the type doesn't actually matter, just the size, but it's nice to be
consistent...
|
|
this file can be overridden by a same-named file in an arch dir.
|
|
there is no need/use for a flush hook. the write function serves this
purpose already. i originally created the hook for implementing mem
streams based on a mistaken reading of posix, and later realized it
wasn't useful but never removed it until now.
|
|
i originally omitted these (optional, per POSIX) interfaces because i
considered them backwards implementation details. however, someone
later brought to my attention a fairly legitimate use case: allocating
thread stacks in memory that's set up for sharing and/or fast transfer
between CPU and GPU so that the thread can move data to a GPU directly
from automatic-storage buffers without having to go through additional
buffer copies.
perhaps there are other situations in which these interfaces are
useful too.
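assuming the interfaces in question are pthread_attr_setstack and
friends, the use case looks roughly like this (sketch; in practice the
stack buffer would come from the specially-mapped memory):

    #include <pthread.h>

    static char shared_stack[0x20000] __attribute__((aligned(16)));

    static void *worker(void *arg) { return arg; }

    int spawn_on_shared_stack(pthread_t *td)
    {
        pthread_attr_t a;
        pthread_attr_init(&a);
        pthread_attr_setstack(&a, shared_stack, sizeof shared_stack);
        int r = pthread_create(td, &a, worker, 0);
        pthread_attr_destroy(&a);
        return r;
    }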
|
|
I've been looking for data that would suggest a good default, and
since little has shown up, i'm doing this based on the limited data I
have. the value 80k is chosen to accommodate 64k of application data
(which happens to be the size of the buffer in git that made it crash
without a patch to call pthread_attr_setstacksize) plus the max stack
usage of most libc functions (with a few exceptions like crypt, which
will be fixed soon to avoid excessive stack usage, and [n]ftw, which
inherently uses a fair bit in recursive directory searching).
if further evidence emerges suggesting that the default should be
larger, I'll consider changing it again, but I'd like to keep it from
growing too large, to avoid the problems of high commit charge and
rapid address space exhaustion on 32-bit machines.
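the escape hatch for applications that do need more, like the git
case above, is simply to request it:

    #include <pthread.h>

    int spawn_big(pthread_t *td, void *(*start)(void *), void *arg)
    {
        pthread_attr_t a;
        pthread_attr_init(&a);
        pthread_attr_setstacksize(&a, 1 << 20); /* e.g. 1 MiB */
        int r = pthread_create(td, &a, start, arg);
        pthread_attr_destroy(&a);
        return r;
    }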
|
|
these will NOT be used when compiling with -D_LARGEFILE64_SOURCE on
musl; instead, they exist in the hopes of eventually being able to run
some glibc-linked apps with musl sitting in place of glibc.
also remove the (apparently incorrect) fcntl alias.
|
|
i made a best attempt, but the intended semantics of this function are
fundamentally contradictory. there is no consistent way to handle
ownership of locks when forking a multi-threaded process. the code
could have worked by accident for programs that only used normal
mutexes and nothing else (since they don't actually store or care
about their owner), but that's about it. broken-by-design interfaces
that aren't even in glibc (only solaris) don't belong in musl.
|
|
it's ok to overlap with integer slot 3 on 32-bit because only slots
0-2 are used on process-local barriers.
|
|
updated nextafter* to use FORCE_EVAL; it can be used in many other
places in the math code to improve readability.
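the macro is essentially a volatile store of matching width, so the
evaluation (and any floating point exception it raises) cannot be
optimized away; something along these lines:

    #define FORCE_EVAL(x) do {                    \
        if (sizeof(x) == sizeof(float)) {         \
            volatile float __x = (x); (void)__x;  \
        } else if (sizeof(x) == sizeof(double)) { \
            volatile double __x = (x); (void)__x; \
        } else {                                  \
            volatile long double __x = (x);       \
            (void)__x;                            \
        }                                         \
    } while (0)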
|
|
pthread structure has been adjusted to match the glibc/GCC abi for
where the canary is stored on i386 and x86_64. it will need variants
for other archs to provide the added security of the canary's entropy,
but even without that it still works as well as the old "minimal" ssp
support. eventually such changes will be made anyway, since they are
also needed for GCC/C11 thread-local storage support (not yet
implemented).
care is taken not to attempt initializing the thread pointer unless
the program actually uses SSP (detected by whether it references
__stack_chk_fail).
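the constraint being matched: gcc reads the canary at a fixed offset
from the thread pointer (%gs:0x14 on i386, %fs:0x28 on x86_64), so the
canary field must land at exactly that offset in whatever structure
the thread pointer points at. one layout that satisfies it
(illustrative, not necessarily the exact struct):

    #include <stdint.h>

    struct tcb {
        void *self;       /* the thread pointer points here */
        void *pad[4];     /* five pointer-sized slots before the canary */
        uintptr_t canary; /* offset 5*4 = 0x14 (i386), 5*8 = 0x28 (x86_64) */
    };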
|
|
this caused misreading of certain floating point values that are exact
multiples of large powers of ten, unpredictable depending on prior
stack contents.
|
|
i did some testing trying to switch malloc to use the new internal
lock with priority inheritance, and my malloc contention test got
20-100 times slower. if priority inheritance futexes are this slow,
it's simply too high a price to pay for avoiding priority inversion.
maybe we can consider them somewhere down the road once the kernel
folks get their act together on this (and preferably don't link it to
glibc's inefficient lock API)...
as such, i've switched __lock to use malloc's implementation of
lightweight locks, and updated all the users of the code to use an
array with a waiter count for their locks. this should give optimal
performance in the vast majority of cases, and it's simple.
malloc is still using its own internal copy of the lock code because
it seems to yield measurably better performance with -O3 when it's
inlined (20% or more difference in the contention stress test).
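the lock-plus-waiter-count idea, sketched with C11 atomics and a raw
futex call (illustrative only, not the internal implementation):

    #include <stdatomic.h>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* lk[0] is the lock word, lk[1] counts waiters so unlock can skip
       the futex wake when nobody is waiting */
    static void lk_lock(atomic_int lk[2])
    {
        int expect = 0;
        if (atomic_compare_exchange_strong(&lk[0], &expect, 1)) return;
        atomic_fetch_add(&lk[1], 1);
        for (;;) {
            expect = 0;
            if (atomic_compare_exchange_strong(&lk[0], &expect, 1)) break;
            syscall(SYS_futex, &lk[0], FUTEX_WAIT, 1, 0, 0, 0);
        }
        atomic_fetch_sub(&lk[1], 1);
    }

    static void lk_unlock(atomic_int lk[2])
    {
        atomic_store(&lk[0], 0);
        if (atomic_load(&lk[1]))
            syscall(SYS_futex, &lk[0], FUTEX_WAKE, 1, 0, 0, 0);
    }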
|
|
we use priority inheritance futexes if possible so that the library
cannot hit internal priority inversion deadlocks in the presence of
realtime priority scheduling (full support to be added later).
|
|
also be extra careful to avoid wrapping the circular buffer early
|
|
care is taken that the setting of errno correctly reflects the underflow
condition. scanning exact denormal values does not result in ERANGE,
nor does scanning values (such as the usual string definition of
FLT_MIN) which are actually less than the smallest normal number but
which round to a normal result.
only the decimal case is handled so far; hex floats require a separate
fix to come later.
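concretely, the intended behavior is along these lines (my own
examples, not taken from the commit):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        errno = 0;
        /* far below the smallest denormal: underflows to 0, sets ERANGE */
        float a = strtof("1e-50", 0);
        printf("%g ERANGE=%d\n", a, errno == ERANGE);

        errno = 0;
        /* the usual FLT_MIN string: just under 2^-126 but rounds to it,
           so no ERANGE */
        float b = strtof("1.17549435e-38", 0);
        printf("%g ERANGE=%d\n", b, errno == ERANGE);
        return 0;
    }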
|