|
in a few places, non-hidden symbols were referenced from asm in ways
that assumed ld-time binding. while there is no semantic reason these
symbols need to be hidden, fixing the references without making them
hidden was going to be ugly, and hidden visibility reduces some bloat
anyway.
in the asm files, .global/.hidden directives have been moved to the
top to unclutter the actual code.
|
|
at the point of call it was declared hidden, but the definition was
not hidden. for some toolchains this inconsistency produced textrels
without ld-time binding.
|
|
otherwise the call instruction in the inline syscall asm results in
textrels without ld-time binding.
|
|
otherwise the call/jump from the crt_arch.h asm may not resolve
correctly without -Bsymbolic-functions.
|
|
the zero initialization is redundant since decode_vec does its own
clearing, and it increases the risk that buggy compilers will generate
calls to memset. as long as symbols are bound at ld time, such a call
will not break anything, but it may be desirable to turn off ld-time
binding in the future.
|
|
this was already essentially possible as a result of the previous
commits changing the dynamic linker/thread pointer bootstrap process.
this commit mainly adds build system infrastructure:
configure no longer attempts to disable stack protector. instead it
simply determines how, so that the makefile can disable stack protector for
a few translation units used during early startup.
stack protector is also disabled for memcpy and memset since compilers
(incorrectly) generate calls to them on some archs to implement
struct initialization and assignment, and such calls may creep into
early initialization.
no explicit attempt to enable stack protector is made by configure at
this time; any stack protector option supported by the compiler can be
passed to configure in CFLAGS, and if the compiler uses stack
protector by default, this default is respected.
|
|
since 1.1.0, musl has nominally required a thread pointer to be set up.
most of the remaining code that was checking for its availability was
doing so for the sake of being usable by the dynamic linker. as of
commit 71f099cb7db821c51d8f39dfac622c61e54d794c, this is no longer
necessary; the thread pointer is now valid before any libc code
(outside of dynamic linker bootstrap functions) runs.
this commit essentially concludes "phase 3" of the "transition path
for removing lazy init of thread pointer" project that began during
the 1.1.0 release cycle.
|
|
this allows the dynamic linker itself to run with a valid thread
pointer, which is a prerequisite for stack protector on archs where
the ssp canary is stored in TLS. it will also allow us to remove some
remaining runtime checks for whether the thread pointer is valid.
as long as the application and its libraries do not require additional
size or alignment, this early thread pointer will be kept and reused
at runtime. otherwise, a new static TLS block is allocated after
library loading has finished and the thread pointer is switched over.
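a minimal sketch of the resulting keep-or-switch decision (builtin_tls,
__copy_tls and __init_tp are real musl internals, but the surrounding
logic here is illustrative):

    /* after the initial libraries have been loaded: */
    if (libc.tls_size <= sizeof builtin_tls) {
        /* the tentative thread pointer set up at entry still fits: keep it */
    } else {
        void *mem = __mmap(0, libc.tls_size, PROT_READ|PROT_WRITE,
                           MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        __init_tp(__copy_tls(mem));   /* switch to the new static TLS block */
    }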
|
|
previously, the layout of the static TLS block was perturbed by the
size of the dtv; dtv size increasing from 0 to 1 perturbed both TLS
arch types, and the TLS-above-TP type's layout was perturbed by the
specific number of dtv slots (libraries with TLS). this behavior made
it virtually impossible to set up a tentative thread pointer address
before loading libraries and keep it unchanged as long as the
libraries' TLS size/alignment requirements fit.
the new code fixes the location of the dtv and pthread structure at
opposite ends of the static TLS block so that they will not move
unless size or alignment changes.
|
|
previously a new GDT slot was requested, even if one had already been
obtained by a previous call. instead extract the old slot number from
GS and reuse it if it was already set. the formula (GS-3)/8 for the
slot number automatically yields -1 (request for new slot) if GS is
zero (unset).
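rendered in C, the recovery of the slot number looks like this (the
real code is asm; this rendering relies on arithmetic right shift):

    #include <stdint.h>

    static int current_gdt_slot(void)
    {
        uint16_t gs;
        __asm__ ("mov %%gs, %0" : "=r"(gs));
        /* a GDT selector for slot n has value n*8+3 (RPL 3, TI 0); the
           arithmetic shift maps gs==0 (unset) to -1, which is exactly
           set_thread_area's "allocate a new slot" request */
        return ((int)gs - 3) >> 3;
    }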
|
|
this overhaul further reduces the amount of arch-specific code needed
by the dynamic linker and removes a number of assumptions, including:
- that symbolic function references inside libc are bound at link time
via the linker option -Bsymbolic-functions.
- that libc functions used by the dynamic linker do not require
access to data symbols.
- that static/internal function calls and data accesses can be made
without performing any relocations, or that arch-specific startup
code handled any such relocations needed.
removing these assumptions paves the way for allowing libc.so itself
to be built with stack protector (among other things), and is achieved
by a three-stage bootstrap process:
1. relative relocations are processed with a flat function (see the
sketch below).
2. symbolic relocations are processed with no external calls/data.
3. main program and dependency libs are processed with a
fully-functional libc/ldso.
reduction in arch-specific code is achieved through the following:
- crt_arch.h, used for generating crt1.o, now provides the entry point
for the dynamic linker too.
- asm is no longer responsible for skipping the beginning of argv[]
when ldso is invoked as a command.
- the functionality previously provided by __reloc_self for heavily
GOT-dependent RISC archs is now the arch-agnostic stage-1.
- arch-specific relocation type codes are mapped directly as macros
rather than via an inline translation function/switch statement.
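stage 1 amounts to a self-contained loop like the following sketch
(shown with x86_64 RELA names; the real code hides the relocation type
behind a per-arch macro):

    #include <elf.h>
    #include <stddef.h>
    #include <stdint.h>

    static void stage1_relative_relocs(uintptr_t base, Elf64_Rela *rel, size_t cnt)
    {
        /* everything needed is computed from the load address and the
           relocation table itself: no external calls, no global data */
        for (size_t i = 0; i < cnt; i++)
            if (ELF64_R_TYPE(rel[i].r_info) == R_X86_64_RELATIVE)
                *(uintptr_t *)(base + rel[i].r_offset) = base + rel[i].r_addend;
    }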
|
|
commit f08ab9e61a147630497198fe3239149275c0a3f4 introduced these
accidentally as remnants of some work I tried that did not work out.
|
|
|
|
this global lock allows certain unlock-type primitives to exclude
mmap/munmap operations which could change the identity of virtual
addresses while references to them still exist.
the original design mistakenly assumed mmap/munmap would conversely
need to exclude the same operations which exclude mmap/munmap, so the
vmlock was implemented as a sort of 'symmetric recursive rwlock'. this
turned out to be unnecessary.
commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the
interval during which mmap/munmap held their side of the lock, but
left the inappropriate lock design and some inefficiency.
the new design uses a separate function, __vm_wait, which does not
hold any lock itself and only waits for lock users which were already
present when it was called to release the lock. this is sufficient
because of the way operations that need to be excluded are sequenced:
the "unlock-type" operations using the vmlock need only block
mmap/munmap operations that are precipitated by (and thus sequenced
after) the atomic-unlock they perform while holding the vmlock.
this allows for a spectacular lack of synchronization in the __vm_wait
function itself.
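the primitives reduce to something like this sketch (the real code
futex-waits instead of spinning):

    #include <stdatomic.h>

    static atomic_int vmlock;   /* count of active unlock-type users */

    static void vm_lock(void)   { atomic_fetch_add(&vmlock, 1); }
    static void vm_unlock(void) { atomic_fetch_sub(&vmlock, 1); }

    static void vm_wait(void)
    {
        /* waiting for the count to drain suffices: any mmap/munmap we
           must exclude is sequenced after an atomic unlock performed
           while the vmlock was held, hence already counted here */
        while (atomic_load(&vmlock)) ;
    }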
|
|
as a result of commit 12e1e324683a1d381b7f15dd36c99b37dd44d940, kernel
processing of the robust list is only needed for process-shared
mutexes. previously the first attempt to lock any owner-tracked mutex
resulted in robust list initialization and a set_robust_list syscall.
this is no longer necessary, and since the kernel's record of the
robust list must now be cleared at thread exit time for detached
threads, optimizing it out is even more worthwhile than before.
|
|
the robust list head lies in the thread structure, which is unmapped
before exit for detached threads. this leaves the kernel unable to
process the exiting thread's robust list, and with a dangling pointer
which may happen to point to new unrelated data at the time the kernel
processes it.
userspace processing of the robust list was already needed for
non-pshared robust mutexes in order to perform private futex wakes
rather than the shared ones the kernel would do. however, it was
conditional on linking pthread_mutexattr_setrobust and did not bother
processing the pshared mutexes in the list; handling those requires additional
logic for the robust list pending slot in case pthread_exit is
interrupted by asynchronous process termination.
the new robust list processing code is linked unconditionally (inlined
in pthread_exit), handles both private and shared mutexes, and also
removes the kernel's reference to the robust list before unmapping and
exit if the exiting thread is detached.
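the exit-time walk has roughly this shape (a sketch: the entry layout
and helper are illustrative, not musl's exact code); for detached
threads it is followed by a set_robust_list syscall pointing the kernel
away from the dying stack:

    struct robust_entry { struct robust_entry *next; };  /* embedded in each mutex */
    extern void unlock_and_wake(struct robust_entry *);  /* EOWNERDEAD handoff plus
                                                            private/shared futex wake */

    static void process_robust_list(struct robust_entry **head,
                                    struct robust_entry **pending)
    {
        struct robust_entry *e;
        while ((e = *head) && e != (void *)head) {  /* circular: &head marks the end */
            *pending = e;       /* pending slot survives async process death */
            *head = e->next;    /* unlink before the mutex becomes reusable */
            unlock_and_wake(e);
            *pending = 0;
        }
    }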
|
|
depending on the compiler's interpretation of __asm__ register names
for register class objects, it may be possible for the return value in
r2 to be clobbered by the function call to __stat_fix. I have not
observed any such breakage in normal builds and suspect it only
happens with -O0 or other unusual build options, but since there's an
ambiguity as to the semantics of this feature, it's best to use an
explicit temporary to avoid the issue.
based on reporting and patch by Eugene.
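the pattern, sketched for a mips-style wrapper (heavily simplified; the
real syscall asm has more inputs, outputs and clobbers):

    extern void __stat_fix(void *);

    static long stat_syscall_fixed(long n, void *buf)
    {
        register long r2 __asm__("$2") = n;   /* syscall number in, result out */
        __asm__ __volatile__ ("syscall" : "+r"(r2) : : "memory");
        long ret = r2;    /* copy out of the register-class object first... */
        __stat_fix(buf);  /* ...because this call may clobber $2 itself */
        return ret;
    }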
|
|
|
|
when dlopen fails, all partially-loaded libraries need to be unmapped
and freed. any of these libraries using an rpath with $ORIGIN
expansion may have an allocated string for the expanded rpath;
previously, this string was not freed when freeing the library data
structures.
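the cleanup is a one-liner in the dso teardown path (field names as in
musl's struct dso):

    /* the rpath was heap-allocated only if $ORIGIN expansion produced
       a string different from the original one in the dynamic section */
    if (p->rpath != p->rpath_orig) free(p->rpath);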
|
|
this change hardens the dynamic linker against the possibility of
loading the wrong library due to inability to expand $ORIGIN in rpath.
hard failures such as excessively long paths or absence of /proc (when
resolving /proc/self/exe for the main executable's origin) do not stop
the path search, but memory allocation failures and any other
potentially transient failures do.
to implement this change, the meaning of the return value of the
fixup_rpath function is changed. returning zero no longer indicates
that the dso's rpath string pointer is non-null; instead, the caller
needs to check. a return value of -1 indicates a failure that should
stop further path search.
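on the caller side, the new contract reads roughly like this paraphrase
(path_open is the real search helper):

    if (fixup_rpath(p, buf, sizeof buf) < 0)
        fd = -2;                /* transient failure: stop the whole search */
    else if (p->rpath)          /* 0 no longer implies a non-null rpath */
        fd = path_open(name, p->rpath, buf, sizeof buf);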
|
|
the C standard specifies that setjmp is a macro, but longjmp is a
normal function. a macro version of it would be permitted (albeit
useless) for C (not C++), but would have to be a function-like macro,
not an object-like one.
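the conforming shape of the declarations (roughly as in musl's setjmp.h):

    int setjmp(jmp_buf);
    #define setjmp setjmp       /* the standard requires setjmp to be a macro */

    _Noreturn void longjmp(jmp_buf, int);   /* plain function; no macro */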
|
|
transient errors during the path search should not allow the search to
continue and possibly open the wrong file. this patch eliminates most
conditions where that could happen, but there is still a possibility
that $ORIGIN-based rpath processing will have an allocation failure,
causing the search to skip such a path. fixing this is left as a
separate task.
a small bug where overly-long path components caused an infinite loop
rather than being skipped/ignored is also fixed.
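the search loop now has roughly this shape (close to, but not verbatim,
the real path_open):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    static int path_open(const char *name, const char *s, char *buf, size_t buf_size)
    {
        int fd;
        for (;;) {
            s += strspn(s, ":\n");            /* skip separators/empty entries */
            size_t l = strcspn(s, ":\n");
            if (!l && !*s) return -1;         /* end of the path list */
            if (snprintf(buf, buf_size, "%.*s/%s", (int)l, s, name) < buf_size) {
                if ((fd = open(buf, O_RDONLY|O_CLOEXEC)) >= 0) return fd;
                switch (errno) {
                case ENOENT: case ENOTDIR: case EACCES: case ENAMETOOLONG:
                    break;                    /* benign: try the next component */
                default:
                    return -1;                /* possibly transient: stop searching */
                }
            }
            s += l;   /* always advance, even on truncation: kills the infinite loop */
        }
    }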
|
|
while it's the same for all presently supported archs, it differs at
least on sparc, and conceptually it's no less arch-specific than the
other O_* macros. O_SEARCH and O_EXEC are still defined in terms of
O_PATH in the main fcntl.h.
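concretely (the O_PATH value shown is the common Linux one; sparc's
differs):

    /* arch's bits/fcntl.h: */
    #define O_PATH 010000000

    /* shared fcntl.h, unchanged: */
    #define O_SEARCH O_PATH
    #define O_EXEC   O_PATH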
|
|
|
|
POSIX requires the sem_nsems member to have type unsigned short. we
have to work around the incorrect kernel type using matching
endian-specific padding.
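the workaround has this shape inside semid_ds (a sketch of the
bits/sem.h approach, from memory):

    #if __BYTE_ORDER == __LITTLE_ENDIAN
        unsigned short sem_nsems;   /* POSIX type, over the kernel long's low bytes */
        char __sem_nsems_pad[sizeof(long)-sizeof(short)];
    #else
        char __sem_nsems_pad[sizeof(long)-sizeof(short)];
        unsigned short sem_nsems;   /* low-order bytes come last on big endian */
    #endif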
|
|
The shm_info struct is a GNU extension and some of its members do
not have shm* prefix. This is worked around in sys/shm.h by macros,
but aarch64 didn't use those.
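The pattern, sketched (double-underscore fields in the struct, with
sys/shm.h mapping the GNU names onto them):

    struct shm_info {
        int __used_ids;
        unsigned long shm_tot, shm_rss, shm_swp;
        unsigned long __swap_attempts, __swap_successes;
    };
    #define used_ids __used_ids
    #define swap_attempts __swap_attempts
    #define swap_successes __swap_successes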
|
|
|
|
Internally regcomp needs to copy some iteration nodes before
translating the AST into TNFA representation.
Literal nodes were not copied correctly: the class type and list
of negated class types were not copied, so classes were ignored
(in the non-negated case an ignored char class caused the literal
to match everything).
This affects iterations where the upper bound is finite and larger
than one, or where the lower bound is larger than one. So e.g. the EREs
[[:digit:]]{2}
[^[:space:]ab]{1,4}
were treated as
.{2}
[^ab]{1,4}
The fix is done with minimal source modification to copy the
necessary fields, but the AST preparation and node handling
code of tre will need to be cleaned up for clarity.
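The fix amounts to copying the missing fields when a literal node is
duplicated (tre internals; field names here are from memory, so treat
this as a sketch):

    if (node->type == LITERAL) {
        tre_literal_t *lit = node->obj, *copy = new_node->obj;
        copy->u.class = lit->u.class;          /* was lost: the class was ignored */
        copy->neg_classes = lit->neg_classes;  /* was lost: negations were dropped */
    }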
|
|
The valid BRE backref tokens are \1 .. \9, and 0 is not a special
character either so \0 is undefined by the standard.
Such undefined escaped characters are treated as literal characters
currently, following existing practice, so \0 is the same as 0.
|
|
commit 559de8f5f06da9022cbba70e22e14a710eb74513 redefined FLT_ROUNDS
to use an external function that can report the actual current
rounding mode, rather than always reporting round-to-nearest. however,
float.h did not include 'extern "C"' wrapping for C++, so C++ programs
using FLT_ROUNDS ended up with an unresolved reference to a
name-mangled C++ function __flt_rounds.
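the fixed float.h shape:

    #ifdef __cplusplus
    extern "C" {
    #endif
    int __flt_rounds(void);
    #ifdef __cplusplus
    }
    #endif
    #define FLT_ROUNDS (__flt_rounds())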
|
|
one stop condition for parsing abbreviated ipv6 addresses was missed,
allowing the internal ip[] buffer to overflow. this patch adds the
missing stop condition and masks the array index so that, in case
there are any remaining stop conditions missing, overflowing the
buffer is not possible.
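the two layers, sketched at the point where a parsed group is stored
(uint16_t ip[8] as in the parser; the exact stop condition is
paraphrased):

    ip[i&7] = v;   /* mask: even a missed stop condition cannot write past ip[7] */
    if (i == 7 && *s)
        return 0;  /* explicit stop: an eighth group must end the address */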
|
|
one of the features of ERE is that it's actually a regular language
and does not admit expressions which cannot be matched in linear time.
introduction of \n backref support into regcomp's ERE parsing was
unintentional.
|
|
the regex parser handles the (undefined) case of an unexpected byte
following a backslash as a literal. however, instead of correctly
decoding a character, it was treating the byte value itself as a
character. this was not only semantically unjustified, but turned out
to be dangerous on archs where plain char is signed: bytes in the
range 252-255 alias the internal codes -4 through -1 used for special
types of literal nodes in the AST.
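the aliasing, concretely:

    char c = (char)0xfc;   /* on signed-char archs, c == -4 */
    /* tre reserves small negative codes (-4..-1) for special literal
       node types, so the raw byte was misread as one of those nodes */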
|
|
|
|
|
|
the previous values (2k min and 8k default) were too small for some
archs. aarch64 reserves 4k in the signal context for future extensions
and requires about 4.5k total, and powerpc reportedly uses over 2k.
the new minimums are chosen to fit the saved context and also allow a
minimal signal handler to run.
since the default (SIGSTKSZ) has always been 6k larger than the
minimum, it is also increased to maintain the 6k usable by the signal
handler. this happens to be able to store one pathname buffer and
should be sufficient for calling any function in libc that doesn't
involve conversion between floating point and decimal representations.
x86 (both 32-bit and 64-bit variants) may also need a larger minimum
(around 2.5k) in the future to support avx-512, but the values on
these archs are left alone for now pending further analysis.
the value for PTHREAD_STACK_MIN is not increased to match MINSIGSTKSZ
at this time. this is so as not to preclude applications from using
extremely small thread stacks when they know they will not be handling
signals. unfortunately cancellation and multi-threaded set*id() use
signals as an implementation detail and therefore require a stack
large enough for a signal context, so applications which use extremely
small thread stacks may still need to avoid using these features.
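plausible values consistent with the above (the real numbers are
per-arch, in bits/signal.h; treat these as illustrative):

    #define MINSIGSTKSZ 6144    /* fits a ~4.5k context plus a minimal handler */
    #define SIGSTKSZ    12288   /* keeps the traditional 6k of usable space */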
|
|
previously the implementation-internal signal used for multithreaded
set*id operations was left unblocked during handling of the
cancellation signal. however, on some archs, signal contexts are huge
(up to 5k) and the possibility of nested signal handlers drastically
increases the minimum stack requirement. since the cancellation signal
handler will do its job and return in bounded time before possibly
passing execution to application code, there is no need to allow other
signals to interrupt it.
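the change reduces to installing the cancellation handler with a full
signal mask (SIGCANCEL is the internal signal name; sketch):

    #include <signal.h>

    static void cancel_handler(int, siginfo_t *, void *);

    static void install_cancel_handler(void)
    {
        struct sigaction sa = {
            .sa_flags = SA_SIGINFO | SA_RESTART,
            .sa_sigaction = cancel_handler,
        };
        sigfillset(&sa.sa_mask);  /* at most one extra signal context per stack */
        sigaction(SIGCANCEL, &sa, 0);
    }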
|
|
these additions were made based on scanning commit authors since the
last update, at the time of the 1.1.4 release.
|
|
overly long user/group names are potentially a DoS vector and a source
of other problems like partial writes by sendmsg, and they are not useful.
|
|
previously, a sentinel value of (FILE *)-1 was used to inform the
caller of __nscd_query that nscd is not in use. aside from being an
ugly hack, this resulted in duplicate code paths for two logically
equivalent cases: no nscd, and "not found" result from nscd.
now, __nscd_query simply skips closing the socket and returns a valid
FILE pointer when nscd is not in use, and produces a fake "not found"
response header. the caller is then responsible for closing the socket
just like it would do if it had gotten a real "not found" response.
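sketched (NSCDVERSION is the real protocol constant; the header indices
are illustrative):

    if (connect(fd, (void *)&addr, sizeof addr) < 0) {
        if (errno != EACCES && errno != ECONNREFUSED && errno != ENOENT)
            goto error;
        buf[0] = NSCDVERSION;   /* synthesize a "not found" response header */
        buf[1] = 0;             /* found = 0 */
        return f;               /* caller closes the stream as usual */
    }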
|
|
This completes the alternate backend support that was previously added
to the getpw* and getgr* functions. Unlike those, though, it
unconditionally queries nscd. Any groups from nscd that aren't in the
/etc/group file are added to the returned list, and any that are
present in the file are ignored. The purpose of this behavior is to
provide a view of the group database consistent with what is observed
by the getgr* functions. If group memberships reported by nscd were
honored when the corresponding group already has a definition in the
/etc/group file, the user's getgrouplist-based membership in the
group would conflict with their non-membership in the reported
gr_mem[] for the group.
The changes made also make getgrouplist thread-safe and eliminate its
clobbering of the global getgrent state.
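The merge policy, sketched with hypothetical helpers:

    extern int group_defined_in_files(gid_t);
    extern int already_listed(const gid_t *, size_t, gid_t);

    for (size_t i = 0; i < nscd_count; i++) {
        gid_t g = nscd_gids[i];
        /* honor nscd only for groups with no definition in /etc/group,
           keeping the view consistent with what getgr* would report */
        if (!group_defined_in_files(g) && !already_listed(list, n, g))
            list[n++] = g;
    }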
|
|
|
|
The unwind code in libgcc uses this type for unwinding across signal
handlers. On aarch64 the kernel may place a sequence of structs on the
signal stack on top of the ucontext to provide additional information.
The unwinder only needs the header, but all the types the kernel
currently defines for this mechanism are added, since they are part of
the uapi.
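The header each record starts with (Linux uapi, reproduced from memory):

    struct _aarch64_ctx {
        unsigned int magic;   /* e.g. FPSIMD_MAGIC or ESR_MAGIC; 0 terminates */
        unsigned int size;    /* size of this record, header included */
    };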
|
|
previously, commit e7b9887e8b65253087ab0b209dc8dd85c9f09614 aligned
the sizes with the glibc ABI. subsequent discussion during the merge
of the aarch64 port reached a conclusion that we should reject larger
arch-specific sizes, which have significant cost and no benefit, and
stick with the existing common 32-bit sizes for all 32-bit/ILP32 archs
and the x86_64 sizes for 64-bit archs.
one peculiarity of this change is that x32 pthread_attr_t is now
larger in musl than in the glibc x32 ABI, making it unsafe to call
pthread_attr_init from x32 code that was compiled against glibc. with
all the ABI issues of x32, it's not clear that ABI compatibility will
ever work, but if it's needed, pthread_attr_init and related functions
could be modified not to write to the last slot of the object.
this is not a regression versus previous releases, since on previous
releases the x32 pthread type sizes were all severely oversized
already (due to incorrectly using the x86_64 LP64 definitions).
moreover, x32 is still considered experimental and not ABI-stable.
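the sizing rule reduces to a single selector, as in this sketch (the
numbers give the common 32-bit size and the x86_64 size):

    typedef struct {
        union {
            int __i[sizeof(long) == 8 ? 14 : 9];  /* 56 bytes on 64-bit, 36 on ILP32 */
            unsigned long __s[sizeof(long) == 8 ? 7 : 9];
        } __u;
    } pthread_attr_t;

on x32, sizeof(long)==4 selects the 36-byte variant, which is the
source of the glibc-ABI mismatch discussed above.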
|
|
This adds complete aarch64 target support including bigendian subarch.
Some of the long double math functions are known to be broken; otherwise
the interfaces should be fully functional, but at this point consider this
port experimental.
Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.
|
|
This is in preparation for the aarch64 port, solely so that the long
double math symbols are available on ld128 platforms. The implementations
should be fixed up later once we have proper tests for these functions.
Added bigendian handling for ld128 bit manipulations too.
|
|
Changed the special case handling and bit manipulation to better
match the double version.
|
|
There are two main abi variants for thread local storage layout:
(1) TLS is above the thread pointer at a fixed offset and the pthread
struct is below that, so the end of the struct is at a known offset.
(2) The thread pointer points to the pthread struct and TLS starts
below it, so the start of the struct is at a known (zero) offset.
Assembly code for the dynamic TLSDESC callback needs to access the
dynamic thread vector (dtv) pointer which is currently at the front
of the pthread struct. So in case of (1) the asm code needs to
hard-code the offset from the end of the struct, which can easily break if
the struct changes.
This commit adds a copy of the dtv at the end of the struct. New members
must not be added after dtv_copy, only before it. The size of the struct
is increased a bit, but there is opportunity for size optimizations.
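The invariant, sketched:

    struct pthread {
        struct pthread *self;
        void **dtv;        /* near the start: fixed offset for abi (2) */
        /* ... new members must be added here ... */
        void **dtv_copy;   /* kept last: fixed offset from the end for abi (1) */
    };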
|
|
due to a logic error in the use of masked cancellation mode,
pthread_cond_wait did not honor PTHREAD_CANCEL_DISABLE but instead
failed with ECANCELED when cancellation was pending.
|
|
Implemented as a wrapper around fegetround introducing a new function
to the ABI: __flt_rounds. (fegetround cannot be used directly from float.h)
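A plausible shape of the wrapper (the real one guards each FE_* case
with #ifdef, since not every arch defines all four rounding modes):

    #include <fenv.h>

    int __flt_rounds(void)
    {
        switch (fegetround()) {
        case FE_TOWARDZERO: return 0;
        case FE_TONEAREST:  return 1;
        case FE_UPWARD:     return 2;
        case FE_DOWNWARD:   return 3;
        }
        return -1;   /* indeterminable */
    }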
|