Age | Commit message | Author | Lines |
|
|
|
the incorrect check for crossing device boundaries was preventing nftw
from traversing anything except the initially provided pathname.
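a minimal sketch of the intended rule (the helper and variable names
here are illustrative, not musl's actual code): an entry is skipped
only when FTW_MOUNT was requested and it lives on a different device
than the starting path.

    #include <ftw.h>
    #include <sys/stat.h>

    /* illustrative: cut off traversal only under FTW_MOUNT and only
     * when the device differs from the starting path's device */
    static int crosses_mount(int flags, const struct stat *st, dev_t start_dev)
    {
        return (flags & FTW_MOUNT) && st->st_dev != start_dev;
    }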
|
|
posix allows zero length destination
|
|
STB_WEAK is only a weak reference for undefined symbols (those with a
section of SHN_UNDEF). otherwise, it's a weak definition. normally
this distinction would not matter, since a relocation referencing a
symbol that also provides a definition (not SHN_UNDEF) will always
succeed in finding the referenced symbol itself. however, in the case
of copy relocations, the referenced symbol itself is ignored in order
to search for another symbol to copy from, and thus it's possible that
no definition is found. in this case, if the symbol being resolved
happened to be a weak definition, it was misinterpreted as a weak
reference, suppressing the error path and causing a crash when the
copy relocation was performed with a null source pointer passed to
memcpy.
there are almost certainly still situations in which invalid
combinations of symbol and relocation types can cause the dynamic
linker to crash (this is pretty much inevitable), but the intent is
that crashes not be possible for symbol/relocation tables produced by
a valid linker.
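a sketch of the distinction as it applies to the error path
(simplified, not the dynamic linker's actual code): a missing
definition may only be ignored when the referencing symbol is both
weak and undefined.

    #include <elf.h>

    /* STB_WEAK suppresses the missing-definition error only for weak
     * references (st_shndx == SHN_UNDEF), never for weak definitions
     * such as the symbol referenced by a copy relocation */
    static int may_ignore_missing(const Elf64_Sym *ref)
    {
        return ELF64_ST_BIND(ref->st_info) == STB_WEAK
            && ref->st_shndx == SHN_UNDEF;
    }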
|
|
setstate could use the results of previous initstate or setstate
calls (they return the old state buffer), but the documentation
requires that a state buffer set up by initstate be usable with
setstate immediately, which means that initstate must save the
generator parameters in it.
I also removed the copyright notice since it is present in the
copyright file.
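a usage sketch of the requirement described above (standard API, with
an arbitrary buffer size): a buffer filled by initstate must be
directly usable with setstate, so the generator parameters have to be
stored in the buffer itself.

    #define _XOPEN_SOURCE 700
    #include <stdlib.h>

    int main(void)
    {
        static char state[128];            /* arbitrary size >= 8 */
        initstate(1, state, sizeof state);
        /* ... possibly other initstate/setstate calls here ... */
        setstate(state);    /* must restore the generator saved above */
        return random() >= 0 ? 0 : 1;
    }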
|
|
|
|
this glibc abi compatibility function was missed when the scanf
aliases were added.
|
|
weak_alias was only in the c code, so drem was missing on platforms
where remainder is implemented in asm.
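for reference, the C-side mechanism looks roughly like this (the
weak_alias macro is the usual GCC-attribute definition; my_remainder
is a stand-in, not the real implementation); asm implementations need
the alias provided separately, which is what this change adds.

    #include <math.h>

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* stand-in definition; in libc this is the real remainder() */
    double my_remainder(double x, double y)
    {
        return remainder(x, y);
    }
    weak_alias(my_remainder, drem);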
|
|
per POSIX, the variadic argument has type union semun, which may
contain a pointer or int; the type read depends on the command being
issued. this allows the userspace part of the implementation to be
type-correct without requiring special-casing for different commands.
the kernel always expects to receive the argument interpreted as
unsigned long (or equivalently, a pointer), and does its own handling
of extracting the int portion from the representation, as needed.
this change fixes two possible issues: most immediately, reading the
argument as a (signed) long and passing it to the syscall would
perform incorrect sign-extension of pointers on the upcoming x32
target. the other possible issue is that some archs may use different
(user-space) argument-passing convention for unions, preventing va_arg
from correctly obtaining the argument when the type long (or even
unsigned long or void *) is passed to it.
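a simplified sketch of the userspace side described above (my_semctl
and the command list are illustrative; archs without a direct semctl
syscall go through SYS_ipc instead, and the real code also has to deal
with details like the IPC_64 flag):

    #include <stdarg.h>
    #include <sys/sem.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    union semun {               /* POSIX: defined by the application */
        int val;
        struct semid_ds *buf;
        unsigned short *array;
    };

    int my_semctl(int id, int num, int cmd, ...)
    {
        union semun arg = { .buf = 0 };
        va_list ap;
        switch (cmd) {
        case SETVAL: case SETALL: case GETALL:
        case IPC_STAT: case IPC_SET:          /* list abbreviated */
            /* read the optional argument with its POSIX-specified type */
            va_start(ap, cmd);
            arg = va_arg(ap, union semun);
            va_end(ap);
        }
        /* the kernel takes the argument as an unsigned long / pointer
         * and extracts the int part itself for commands like SETVAL */
        return syscall(SYS_semctl, id, num, cmd, arg.buf);
    }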
|
|
really, fcntl should be changed to use the correct type corresponding
to cmd when calling va_arg, and to carry the correct type through
until making the syscall. however, this greatly increases binary size
and does not seem to offer any benefits except formal correctness, so
I'm holding off on that change for now.
the minimal changes made in this patch are in preparation for addition
of the x32 port, where the syscall macros need to know whether their
arguments are pointers or integers in order to properly pass them to
the 64-bit kernel.
|
|
it's unclear what the historical signature for this function was, but
semantically, the argument should be a pointer to const, and this is
what glibc uses. correct programs should not be using this function
anyway, so it's unlikely to matter.
|
|
both the kernel and glibc agree that this argument is unsigned; the
incorrect type ssize_t came from erroneous man pages.
|
|
this change is consistent with the corresponding glibc functions and
is semantically const-correct. the incorrect argument types without
const seem to have been taken from erroneous man pages.
|
|
this was wrong since the original commit adding inotify, and I don't
see any explanation for it. not even the man pages have it wrong. it
was most likely a copy-and-paste error.
|
|
the type int was taken from seemingly erroneous man pages. glibc uses
in_addr_t (uint32_t), and semantically, the arguments should be
unsigned.
|
|
this practice came from very early, before internal/syscall.h defined
macros that could accept pointer arguments directly and handle them
correctly. aside from being ugly and unnecessary, it looks like it
will be problematic when we add support for 32-bit ABIs on archs where
registers (and syscall arguments) are 64-bit, e.g. x32 and mips n32.
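a small illustration of the difference (SCC and SYSCALL3 are made-up
names for this sketch, not the real syscall macros): once the wrapper
converts each argument to the register type itself, call sites can
pass pointers without casting them to long.

    #include <sys/syscall.h>
    #include <unistd.h>

    #define SCC(x) ((long)(x))          /* per-arch conversion point */
    #define SYSCALL3(n, a, b, c) syscall((n), SCC(a), SCC(b), SCC(c))

    int main(void)
    {
        const char msg[] = "hello\n";
        /* no (long) cast on the buffer pointer at the call site */
        SYSCALL3(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }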
|
|
this agrees with implementation practice on glibc and BSD systems, and
is the const-correct way to do things; it eliminates warnings from
passing pointers to const. the prototype without const came from
seemingly erroneous man pages.
|
|
|
|
the header is included only as a guard to check that the declaration
and definition match, so the typo didn't cause any breakage aside
from omitting this check.
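the pattern in question, shown with a made-up function (foo and foo.h
are hypothetical): including the public header in the file that
defines the function lets the compiler reject any mismatch between the
declaration and the definition.

    /* foo.h (hypothetical) */
    int foo(const char *s);

    /* foo.c */
    #include "foo.h"        /* serves only as a consistency check */
    int foo(const char *s)
    {
        return s[0];
    }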
|
|
the reasons are the same as for sbrk. unlike sbrk, there is no safe
usage because brk does not return any useful information, so it should
just fail unconditionally.
|
|
use of sbrk is never safe; it conflicts with malloc, and malloc may be
used internally by the implementation basically anywhere. prior to
this change, applications attempting to use sbrk to do their own heap
management simply caused untrackable memory corruption; now, they will
fail with ENOMEM allowing the errors to be fixed.
sbrk(0) is still permitted as a way to get the current brk; some
misguided applications use this as a measurement of their memory
usage or for other related purposes, and such usage is harmless.
eventually sbrk may be re-added if/when malloc is changed to avoid
using the brk by using mmap for all allocations.
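a sketch of the resulting behavior (my_sbrk is a stand-in for the libc
function): only a zero increment is honored, and it just reports the
current brk.

    #include <errno.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    void *my_sbrk(intptr_t inc)
    {
        if (inc) {
            /* any attempt to move the brk fails visibly */
            errno = ENOMEM;
            return (void *)-1;
        }
        /* SYS_brk with argument 0 changes nothing and returns the
         * current brk */
        return (void *)syscall(SYS_brk, 0);
    }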
|
|
|
|
based on patch by Timo Teräs; greatly simplified to use fprintf.
|
|
based on patch by Timo Teräs.
|
|
the workaround/fallback code for supporting O_PATH file descriptors
when the kernel lacks support for performing these operations on them
caused EBADF to get replaced by ENOENT (due to missing entry in
/proc/self/fd). this is unlikely to affect real-world code (calls that
might yield EBADF are generally unsafe, especially in library code)
but it was breaking some test cases.
the fix I've applied is something of a tradeoff: it adds one syscall
to these operations on kernels where the workaround is needed. the
alternative would be to catch ENOENT from the /proc lookup and
translate it to EBADF, but I want to avoid doing that in the interest
of not touching/depending on /proc at all in these functions as long
as the kernel correctly supports the operations. this is following the
general principle of isolating hacks to code paths that are taken on
broken systems, and keeping the code for correct systems completely
hack-free.
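one plausible form of the added check (not necessarily the exact
syscall used here): validate the descriptor itself before taking the
/proc/self/fd fallback path, so a closed fd reports EBADF instead of
the ENOENT produced by the missing /proc entry.

    #include <fcntl.h>

    static int fd_is_valid(int fd)
    {
        /* on failure, errno is already EBADF for a bad descriptor */
        return fcntl(fd, F_GETFD) >= 0;
    }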
|
|
|
|
the ABI allows the callee to clobber stack slots that correspond to
arguments passed in registers, so the caller must adjust the stack
pointer to reserve space appropriately. prior to this fix, the argv
array was possibly clobbered by dynamic linker code before passing
control to the main program.
|
|
our getcwd already (as an extension) supports allocation of a buffer
when the buffer argument is a null pointer, so there's no need to
duplicate the allocation logic in this wrapper function. duplicating
it is actually harmful in that it doubles the stack usage from
PATH_MAX to 2*PATH_MAX.
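a sketch of the simplification (assuming a get_current_dir_name-style
wrapper; the function is not named above, and this relies on the
allocating-getcwd extension):

    #include <unistd.h>

    char *my_get_current_dir_name(void)
    {
        /* getcwd allocates the buffer itself when passed a null
         * pointer, so no PATH_MAX stack buffer is needed here */
        return getcwd(0, 0);
    }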
|
|
and thereby remove otherwise-unnecessary inclusion of stddef.h
|
|
|
|
at most 4 hexadecimal digits are processed in one field so the
value cannot overflow. the netdb.h header was not used.
|
|
this makes the prototypes in math.h visible so they are checked
against the function definitions
|
|
this is purely a wrapper for close since Linux does not support EINTR
semantics for the close syscall.
|
|
previously this flag was defined and accepted as a no-op, possibly
breaking some software that uses it. given the choice between removing
the definition (and possibly breaking applications that were already
working) and simply implementing the feature, the latter turned out to
be simple enough to make the decision easy.
in the case where the FNM_PATHNAME flag is also set, this
implementation is clean and essentially optimal. otherwise, it's an
inefficient "brute force" implementation. at some point, when cleaning
up and refactoring this code, I may add a more direct code path for
handling FNM_LEADING_DIR in the non-FNM_PATHNAME case, but at this
point my main interest is avoiding introducing new bugs in the code
that implements the standard fnmatch features specified by POSIX.
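a usage sketch of the flag's meaning (pattern and string chosen
arbitrarily): with FNM_LEADING_DIR, a pattern that matches a leading
directory prefix of the string counts as a match.

    #define _GNU_SOURCE
    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
        /* "src" covers the leading directory of "src/main.c" */
        int r = fnmatch("src", "src/main.c", FNM_PATHNAME|FNM_LEADING_DIR);
        printf("%s\n", r == 0 ? "match" : "no match");
        return 0;
    }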
|
|
this is still experimental and subject to change. for git checkouts,
an attempt is made to record the exact revision to aid in bug reports
and debugging. no version information is recorded in the static libc.a
or binaries it's linked into.
|
|
the FNM_PATHNAME logic for advancing by /-delimited components was
incorrect when the / character was escaped (i.e. \/), and a final \ at
the end of pattern was not handled correctly.
|
|
a '/' in the pattern could be incorrectly matched against the
terminating null byte in the string, causing an arbitrarily long
sequence of out-of-bounds accesses in fnmatch("/","",FNM_PATHNAME)
|
|
a v6 socket will only be used if there is at least one v6 nameserver
address. if the kernel lacks v6 support, the code will fall back to
using a v4 socket and requests to v6 servers will silently fail. when
using a v6 socket, v4 addresses are converted to v4-mapped form and
setsockopt is used to ensure that the v6 socket can accept both v4 and
v6 traffic (this is on-by-default on Linux but the default is
configurable in /proc and so it needs to be set explicitly on the
socket level). this scheme avoids increasing resource usage during
lookups and allows the existing network io loop to be used without
modification.
previously, nameservers whose address family did not match the address
family of the first-listed nameserver were simply ignored. prior to
recent __ipparse fixes, they were not ignored but erroneously parsed.
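a sketch of the socket setup described above (open_res_sock is an
illustrative name; error handling and the v4 fallback are left to the
caller):

    #include <netinet/in.h>
    #include <sys/socket.h>

    static int open_res_sock(void)
    {
        int fd = socket(AF_INET6, SOCK_DGRAM|SOCK_CLOEXEC, 0);
        if (fd < 0) return -1;       /* caller retries with AF_INET */
        /* accepting v4-mapped traffic is the Linux default, but the
         * default is configurable, so clear IPV6_V6ONLY explicitly */
        int off = 0;
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);
        return fd;
    }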
|
|
subsequent code assumes the address family requested is either
unspecified or one of IPv4/IPv6, and could malfunction if this
constraint is not met, so other address families should be explicitly
rejected.
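the check amounts to something like this (the helper is illustrative;
EAI_FAMILY is the standard error for an unsupported ai_family):

    #include <netdb.h>
    #include <sys/socket.h>

    static int check_family(int family)
    {
        if (family != AF_UNSPEC && family != AF_INET && family != AF_INET6)
            return EAI_FAMILY;
        return 0;
    }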
|
|
these functions were spuriously failing in the case where the buffer
size was exactly the number of bytes/characters to be written,
including null termination. since these functions do not have defined
error conditions other than buffer size, a reasonable application may
fail to check the return value when the format string and buffer size
are known to be valid; such an application could then attempt to use a
non-terminated buffer.
in addition to fixing the bug, I have changed the error handling
behavior so that these functions always null-terminate the output
except in the case where the buffer size is zero, and so that they
always write as many characters as possible before failing, rather
than dropping whole fields that do not fit. this actually simplifies
the logic somewhat anyway.
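assuming these are strftime-family functions (they are not named
above), the boundary case in question looks like this: a buffer whose
size is exactly the output length plus the terminating null byte must
succeed.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct tm tm = { .tm_year = 70, .tm_mon = 0, .tm_mday = 1 };
        char buf[11];       /* "1970-01-01" is 10 bytes plus the null */
        size_t n = strftime(buf, sizeof buf, "%Y-%m-%d", &tm);
        printf("%zu %s\n", n, n ? buf : "(failed)");
        return 0;
    }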
|
|
|
|
|
|
|
|
This function is used by ping6 from iputils.
|
|
|
|
- remove the HAVE_EFFICIENT_IRINT case: fn is an exact integer, so
it can be converted to int32_t a bit more efficiently than with a
cast (the rounding mode change can be avoided), but musl does not
support this case on any arch.
- __rem_pio2: use double_t where possible
- __rem_pio2f: use fewer assignments to avoid stores on i386
- use unsigned int bit manipulation (and union instead of macros;
  see the sketch after this list)
- use hexfloat literals instead of named constants
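a small illustration of the union-based bit manipulation mentioned in
the list above (hi_word is a made-up helper, not the code in
__rem_pio2):

    #include <stdint.h>

    static inline uint32_t hi_word(double x)
    {
        /* sign, exponent and the top of the mantissa */
        union { double f; uint64_t i; } u = { x };
        return u.i >> 32;
    }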
|
|
|
|
|
|
This way, if an fprintf fails, we get an incomplete group entry rather
than a corrupted one.
|
|
If *l == *r && *l, then by transitivity, *r.
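in loop-condition form (illustrated here with strcmp; the function the
commit touches is not named above), the redundant test of *r can be
dropped:

    int my_strcmp(const char *l, const char *r)
    {
        /* *l == *r && *l already guarantees *r is nonzero */
        for (; *l == *r && *l; l++, r++);
        return *(const unsigned char *)l - *(const unsigned char *)r;
    }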
|