|
rather than having each arch provide its own atomic.h, there is a new
shared atomic.h in src/internal which pulls arch-specific definitions
from arch/$(ARCH)/atomic_arch.h. the latter can be extremely minimal,
defining only a_cas or new ll/sc type primitives which the shared
atomic.h will use to construct everything else.
this commit avoids making heavy changes to the individual archs'
atomic implementations. definitions which are identical or
near-identical to what the new shared atomic.h would produce have been
removed, but otherwise the changes made are just hooking up the
arch-specific files to the new infrastructure. major changes to take
advantage of the new system will come in subsequent commits.
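as a rough illustration of the kind of construction the shared
atomic.h can perform, assuming only an arch-provided a_cas (this is a
sketch of the pattern, not necessarily the exact contents of
src/internal/atomic.h):

    #ifndef a_swap
    #define a_swap a_swap
    static inline int a_swap(volatile int *p, int v)
    {
            int old;
            do old = *p;
            while (a_cas(p, old, v) != old);
            return old;
    }
    #endif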
|
|
commit 2f853dd6b9a95d5b13ee8f9df762125e0588df5d failed to change the
test for -include vis.h support to use $srcdir, so vis.h was always
disabled by configure for out-of-tree builds.
|
|
otherwise C declarations are included into preprocessed (.S) asm
source files, producing errors from the assembler.
|
|
the lib dir is automatically created if needed by the out-of-tree
build logic, and now that all generated files are in obj and lib,
deleting them is much simpler. using "rm -rf" is also more thorough,
as it picks up object files that were left around from source files
that no longer exist or which are no longer to be used because an
arch-specific replacement file was added or removed.
|
|
also clean up duplication of CFLAGS passing to assembler.
|
|
|
|
as of commit af21a82ccc8687aa16e85def7db95efeae4cf72e, .sub files are
no longer in use. removing the makefile machinery to handle them not
only cleans up and simplifies the makefile, but also significantly
reduces make's startup time.
|
|
commit 2f853dd6b9a95d5b13ee8f9df762125e0588df5d failed to replicate
the old makefile logic that caused arch/arm/src/arm/atomics.s to be
built. since this was the only .s file under arch/*/src, rather than
trying to reproduce the old logic, I'm just moving it up a level and
adjusting the glob pattern in the makefile to catch it. eventually
arch/*/src will probably be removed in favor of moving all these files
to appropriate src/*/$(ARCH) locations.
|
|
|
|
|
|
the __SOFTFP__ macro, which was wrongly being used, does not reflect
the ABI (arm vs armhf) but just the availability of floating point
instructions/registers, so -mfloat-abi=softfp was wrongly being
treated as armhf. __ARM_PCS_VFP is the correct predefined macro to
check for the armhf EABI variant. this macro usage was corrected for
the build process in commit 4918c2bb206bfaaf5a1f7d3448c2f63d5e2b7d56
but reloc.h was apparently overlooked at the time.
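roughly, the corrected test looks like the following (the macro name
and its use in choosing the dynamic linker name are illustrative of
reloc.h-style logic, not a verbatim copy):

    #if __ARM_PCS_VFP   /* armhf EABI variant: VFP registers in the calling convention */
    #define FP_SUFFIX "hf"
    #else               /* plain EABI, even under -mfloat-abi=softfp, where __SOFTFP__ is unset */
    #define FP_SUFFIX ""
    #endif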
|
|
this makes it possible to inline them with LTO, and is the simplest
approach to eliminating the use of .sub files.
this also makes VFP sqrt available for use with the standard EABI
(plain arm rather than armhf subarch) when libc is built with
-mfloat-abi=softfp. the same could have been done for fabs, but when
the argument and return value are in integer registers, moving to VFP
registers and back is almost certainly more costly than a simple
integer operation.
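a minimal sketch of the inline-asm form this enables (only valid when
VFP instructions are available to the compiler; the real source is
additionally conditioned on the FPU/float-ABI configuration):

    #include <math.h>

    double sqrt(double x)
    {
            __asm__ ("vsqrt.f64 %P0, %P1" : "=w"(x) : "w"(x));
            return x;
    }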
|
|
this depends on commit 9f5eb77992b42d484d69e879d24ef86466f20f21, which
made it possible to use a .c file for arch-specific replacements, and on
commit 2f853dd6b9a95d5b13ee8f9df762125e0588df5d, the out-of-tree build
support, which made it so that src/*/$(ARCH)/* 'replacement' files get
used even if they don't match the base name of a .c file in the parent
directory.
|
|
this allows the rules for .o and .lo files to be identical, with -fPIC
and -DSHARED added for .lo files via target-specific variable append.
this is arguably cleaner now and will allow more cleanup and removal
of redundant rule bodies after other prerequisite changes are made.
|
|
previously, replacement files provided in $(ARCH) dirs under src/ had
to be .s files. in order to replace a file with C source, an empty .s
file was needed there to suppress the original file, and a separate .c
file was needed in arch/$(ARCH)/src/.
support for .S is new and is aimed at the short-term goal of eliminating .sub
files. asm source files are still expected not to make any heavy
preprocessor use, just simple conditionals on subarch. eventually most
affected files may be replaced with C source files with minimal inline
asm instead of asm source files.
|
|
Programs such as iptables depend on these constants, which can also
be found defined in other libcs.
Since only TCP_* is reserved as part of tcp.h's namespace, we hide
them behind _BSD_SOURCE (which _DEFAULT_SOURCE implies), so they are
exposed by default while the header remains standard-conforming.
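The guard pattern looks roughly like this (the constant shown is a
placeholder, not one of the constants actually added):

    #if defined(_GNU_SOURCE) || defined(_BSD_SOURCE)
    #define TCPOPT_EXAMPLE 0   /* placeholder legacy constant outside the TCP_* namespace */
    #endif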
|
|
The return value of if_nametoindex is unsigned; it should return 0
on error.
|
|
this change adds support for building musl outside of the source
tree. the implementation is similar to autotools: running configure
in a different directory creates config.mak in the current
working directory and symlinks the makefile, which contains the
logic for creating all necessary directories and resolving paths
relative to the source directory.
to support both in-tree and out-of-tree builds with implicit make
rules, all object files are now placed into a separate directory.
|
|
|
|
apparently the .gpword directive does not work reliably with local
text labels; values produced were offset by 64k from the correct
value, resulting in incorrect computation of the got pointer at
runtime. instead, use an external label so that the assembler does not
munge the relocation; the linker will then get it right.
commit 6fef8cafbd0f6f185897bc87feb1ff66e2e204e1 exposed this issue by
removing the old, non-PIE-compatible handwritten crt1.s, which was not
affected. presumably mips PIE executables (using Scrt1.o produced from
crt_arch.h) were already affected at the time.
|
|
at least gcc 4.7 claims c++11 support but does not accept the alignas
keyword, causing breakage when stddef.h is included in c++11 mode.
instead, prefer using __attribute__((__aligned__)) on any compiler
with GNU extensions, and only use the alignas keyword as a fallback
for other C++ compilers.
C code should not be affected by this patch.
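a minimal sketch of the selection logic described (the struct members
are illustrative, not the exact definition used for max_align_t):

    #if defined(__GNUC__)
    typedef struct { long long __ll; long double __ld; }
            __attribute__((__aligned__)) max_align_t;
    #elif defined(__cplusplus)
    typedef struct { alignas(long long) long long __ll;
            alignas(long double) long double __ld; } max_align_t;
    #else
    typedef struct { _Alignas(long long) long long __ll;
            _Alignas(long double) long double __ld; } max_align_t;
    #endif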
|
|
previously, getdelim was allocating twice the space needed every time
it expanded its buffer to implement exponential buffer growth (in
order to avoid quadratic run time). however, this doubling was
performed even when the final buffer length needed was already known,
which is the common case that occurs whenever the delimiter is in the
FILE's buffer.
this patch makes two changes to remedy the situation:
1. over-allocation is no longer performed if the delimiter has already
been found when realloc is needed.
2. growth factor is reduced from 2x to 1.5x to reduce the relative
excess allocation in cases where the delimiter is not initially in the
buffer, including unbuffered streams.
in theory these changes could lead to quadratic time if the same
buffer is reused to process a sequence of lines successively
increasing in length, but once this length exceeds the stdio buffer
size, the delimiter will not be found in the buffer right away and
exponential growth will still kick in.
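as a rough sketch of the growth policy described (the helper and its
names are illustrative, not getdelim's actual internals):

    #include <stdlib.h>

    static int grow_buf(char **buf, size_t *cap, size_t need, int found_delim)
    {
            if (need <= *cap) return 0;
            size_t newcap = found_delim ? need : need + need/2;  /* exact vs 1.5x */
            char *tmp = realloc(*buf, newcap);
            if (!tmp) return -1;
            *buf = tmp;
            *cap = newcap;
            return 0;
    }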
|
|
getdelim was updating *n, the caller's stored buffer size, before
calling realloc. if getdelim then failed due to realloc failure, the
caller would see in *n a value larger than the actual size of the
allocated block, and use of that value is unsafe. in particular,
passing it again to getdelim is unsafe.
now, temporary storage is used for the desired new size, and *n is not
written until realloc succeeds.
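the ordering fix, sketched with illustrative names rather than the
actual getdelim code:

    #include <stdlib.h>

    static int resize(char **s, size_t *n, size_t want)
    {
            char *tmp = realloc(*s, want);
            if (!tmp) return -1;   /* *n still describes the old, valid allocation */
            *s = tmp;
            *n = want;             /* only updated once the new size is real */
            return 0;
    }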
|
|
this error case was overlooked in the old range checking logic. new
check is moved out of __libc_sigaction to the public wrapper in order
to unify the error path and reduce code size.
|
|
commit 8a8fdf6398b85c99dffb237e47fa577e2ddc9e77 was intended to remove
all such usage, but these arch-specific files were overlooked, leading
to inconsistent declarations and definitions.
|
|
POSIX specifies the behaviour for null rootp input, but it
was not implemented correctly.
|
|
changed the insertion method to simplify the recursion logic and
reduce code size a bit.
|
|
malloc failure was not properly propagated in the insertion method,
which led to a null pointer dereference.
|
|
the tsearch data structure is an avl tree, but it did not implement
the deletion operation correctly so the tree could become unbalanced.
reported by Ed Schouten.
|
|
With point-to-point interfaces, the IFA_ADDRESS netlink attribute
contains the peer address while an extra attribute IFA_LOCAL carries
the actual local interface address.
Both the glibc and uclibc implementations of getifaddrs() handle this
case by moving the ifa_addr contents to the broadcast/remote address
union and overwriting ifa_addr upon receipt of an IFA_LOCAL attribute.
This patch adds the same special-treatment logic for IFA_LOCAL to
musl's implementation of getifaddrs() in order to align its behaviour
with that of uclibc and glibc.
Signed-off-by: Jo-Philipp Wich <jow@openwrt.org>
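A rough sketch of the IFA_LOCAL handling described above (the helper
and its arguments are hypothetical; field names follow <ifaddrs.h>):

    #include <sys/socket.h>
    #include <ifaddrs.h>

    static void handle_ifa_local(struct ifaddrs *ifa, struct sockaddr *local)
    {
            /* the address previously stored from IFA_ADDRESS is really the peer */
            if (ifa->ifa_addr) ifa->ifa_dstaddr = ifa->ifa_addr;
            /* IFA_LOCAL carries the actual local interface address */
            ifa->ifa_addr = local;
    }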
|
|
if two or more threads accessed tls in a dso that was loaded after
the threads were created, then __tls_get_new could do out-of-bound
memory access (leading to segfault).
a byte count was accidentally used instead of an element count when
the new dtv pointer was computed. (dso->new_dtv is (void **).)
it is rare for the same dso to provide dtv for several threads, so
the crash was not observed in practice, but it is possible to trigger.
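the nature of the bug, in generic form (names are illustrative, not
the actual __tls_get_new code):

    #include <stddef.h>

    static void **dtv_slot(void **base, size_t len, size_t i)
    {
            /* buggy form: base + (len+1) * sizeof(void *) * i  -- a byte count */
            return base + (len+1) * i;   /* element count: pointer arithmetic already scales */
    }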
|
|
a conforming compiler for an arch with excess precision floating point
(FLT_EVAL_METHOD!=0; presently i386 is the only such arch supported)
computes all intermediate results in the types float_t and double_t
rather than the nominal type of the expression. some incorrect
compilers, however, only keep excess precision in registers, and
convert down to the nominal type when spilling intermediate results to
memory, yielding unpredictable results that depend on the compiler's
choices of what/when to spill. in particular, this happens on old gcc
versions with -ffloat-store, which we need in order to work around
bugs where the compiler wrongly keeps explicitly-dropped excess
precision.
by explicitly converting to double_t where expressions are expected to
be evaluated in double_t precision, we can avoid depending on the
compiler to get types correct when spilling; the nominal and
intermediate precision now match. this commit should not change the
code generated by correct compilers, or by old ones on non-i386 archs
where double_t is defined as double.
this fixes a serious bug in argument reduction observed on i386 with
gcc 4.2: for values of x outside the unit circle, sin(x) was producing
results outside the interval [-1,1]. changes made in commit
0ce946cf808274c2d6e5419b139e130c8ad4bd30 were likely responsible for
breaking compatibility with this and other old gcc versions.
patch by Szabolcs Nagy.
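the idiom, in sketch form (the expression is illustrative, not the
actual argument-reduction code):

    #include <math.h>

    static double muladd(double x, double y, double z)
    {
            double_t t = x*y;   /* long double on i386 (excess precision), double elsewhere */
            return t + z;       /* even if t is spilled, its stored type matches its evaluated type */
    }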
|
|
commit ad1cd43a86645ba2d4f7c8747240452a349d6bc1 eliminated
preprocessor-level omission of references to the init/fini array
symbols from object files going into libc.so. the references are weak,
and the intent was that the linker would resolve them to zero in
libc.so, but instead it leaves undefined references that could be
satisfied at runtime. normally these references would be harmless,
since the code using them does not even get executed, but some older
binutils versions produce a linking error: when linking a program
against libc.so, ld first tries to use the hidden init/fini array
symbols produced by the linker script to satisfy the references in
libc.so, then produces an error because the definitions are hidden.
ideally ld would have already provided definitions of these symbols
when linking libc.so, but the linker script for -shared omits them.
to avoid this situation, the dynamic linker now provides its own dummy
definitions of the init/fini array symbols for libc.so. since they are
hidden, everything binds at ld time and no references remain in the
dynamic symbol table. with modern binutils and --gc-sections, both
the dummy empty array objects and the code referencing them get
dropped at link time, anyway.
the _init and _fini symbols are also switched back to using weak
definitions rather than weak references since the latter behave
somewhat problematically in general, and the weak definition approach
was known to work well.
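a sketch of the dummy definitions the dynamic linker can provide (the
exact declarations in ldso may differ):

    __attribute__((__visibility__("hidden")))
    void (*const __init_array_start)(void) = 0, (*const __init_array_end)(void) = 0,
            (*const __fini_array_start)(void) = 0, (*const __fini_array_end)(void) = 0;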
|
|
|
|
the nommu kernel shares memory when it can anyway for private
read-only maps, but semantically the map should be private. this can
make a difference when debugging breakpoints are to be used, in which
case the kernel may need to ensure that the mapping is not shared.
the new behavior matches how the kernel FDPIC loader maps the main
program and/or program interpreter (dynamic linker) binary.
|
|
also fix visibility of the glue function used.
|
|
this both allows removal of some of the main remaining uses of the
SHARED macro and clears one obstacle to static-linked dlopen support,
which may be added at some point in the future.
specialized single-TLS-module versions of __copy_tls and __reset_tls
are removed and replaced with code adapted from their dynamic-linked
versions, capable of operating on a whole chain of TLS modules, and
use of the dynamic linker's DSO chain (which contains large struct dso
objects) by these functions is replaced with a new chain of struct
tls_module objects containing only the information needed for
implementing TLS. this may also yield some performance benefit when
initializing TLS for a new thread after a large number of modules
without TLS have been loaded, since there is no need to walk
structures for modules without TLS.
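roughly, such a chain only needs per-module records along these lines
(member names modeled on the description, not necessarily the exact
layout of musl's struct tls_module):

    #include <stddef.h>

    struct tls_module {
            struct tls_module *next;
            void *image;                  /* TLS initialization image */
            size_t len, size, align, offset;
    };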
|
|
use weak definitions that the dynamic linker can override instead of
preprocessor conditionals on SHARED so that the same libc start and
exit code can be used for both static and dynamic linking.
|
|
this was only a tiny optimization, and static-linked binaries should
not be calling __tls_get_addr anyway since the linker is supposed to
perform relaxation, resulting in use of the local-exec TLS model.
|
|
this is the first and simplest stage of removal of the SHARED macro,
which will eventually allow libc.a and libc.so to be produced from the
same object files.
the original motivation for these #ifdefs which are now being removed
was to allow building a static-only libc using a compiler that does
not support visibility. however, SHARED was the wrong condition to
test for this anyway; various assembly-language sources refer to
hidden symbols and declare them with the .hidden directive, making it
wrong to define the referenced symbols as non-hidden. if there is a
need in the future to build libc using compilers that lack visibility,
support could be moved to the build system or perhaps the __PIC__
macro could be checked instead of SHARED.
|
|
when adding the fdpic subarchs, the need for these sub files was
overlooked. thus setjmp and longjmp performed illegal instructions.
|
|
on linux/nommu, non-writable private mappings of files may actually
use memory shared with other processes or the fs cache. the old nommu
loader code (used when mmap with MAP_FIXED fails) simply wrote over
top of the original file mapping, possibly clobbering this shared
memory. no such breakage was observed in practice, but it should have
been possible.
the new code starts by mapping anonymous writable memory on archs that
might support nommu, then maps load segments over top of it, falling
back to read if MAP_FIXED fails. we use an anonymous map rather than a
writable file map to avoid reading more data from disk than needed.
since pages cannot be loaded lazily on fault, in case of large
data/bss, mapping the full file may read a lot of data that will
subsequently be thrown away when processing additional LOAD segments.
as a result, we cannot skip the first LOAD segment when operating in
this mode.
these changes affect only non-FDPIC nommu support.
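a rough sketch of the per-segment fallback described (not the loader's
actual code; the reserved anonymous memory is assumed to already cover
the destination range):

    #include <sys/mman.h>
    #include <unistd.h>

    static int map_or_read(void *dst, size_t len, int prot, int fd, off_t off)
    {
            if (mmap(dst, len, prot, MAP_PRIVATE|MAP_FIXED, fd, off) != MAP_FAILED)
                    return 0;
            /* on nommu, MAP_FIXED over the reserved memory may fail; copy instead */
            return pread(fd, dst, len, off) < 0 ? -1 : 0;
    }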
|
|
it was wrongly returning a null pointer instead of an empty string.
|
|
these files are all accepted as legacy arm syntax when producing arm
code, but legacy syntax cannot be used for producing thumb2 with
access to the full ISA. even after switching to UAL, some asm source
files contain instructions which are not valid in thumb mode, so these
will need to be addressed separately.
|
|
the idea of the three-instruction sequence being removed was to be
able to return to thumb code when used on armv4t+ from a thumb caller,
but also to be able to run on armv4 without the bx instruction
available (in which case the low bit of lr would always be 0).
however, without compiler support for generating such a sequence from
C code, which does not exist and which there is unlikely to be
interest in implementing, there is little point in having it in the
asm, and it would likely be easier to add pre-armv4t support via
enhanced linker handling of R_ARM_V4BX than at the compiler level.
removing this code simplifies adding support for building libc in
thumb2-only form (for cortex-m).
|
|
the code to save/restore vfp registers needs to build even when the
configured target does not have fpu; this is because code using vfp
fpu (but with the standard soft-float EABI) may call into a libc built
for soft-float only, and the EABI considers these registers call-saved
when they exist. thus, extra directives are used to force the
assembler to allow vfp instructions and to avoid marking the resulting
object files as requiring vfp.
moving away from using hard-coded opcode words is necessary in order
to eventually support producing thumb2-only output for cortex-m.
conditional execution of these instructions based on hwcap flags was
already implemented. when building for arm (non-thumb) output, the
only currently-supported configuration, this commit does not change
the code emitted.
|
|
|
|
mrc/mcr p10 coprocessor mnemonics are deprecated by some
toolchains.
|
|
contrary to commit 9367fe926196f407705bb07cd29c6e40eb1774dd, all
relevant gas versions actually do support .syntax unified.
|
|
this function is used only as a weak definition for malloc, for static
linking in programs which do not call realloc or free. since it had
external linkage and was thereby exported in libc.so's dynamic symbol
table, --gc-sections was unable to drop it. this was merely an
oversight; there's no reason for it to be external, so make it static.
|