Age | Commit message | Author | Lines |
|
the goal is to be able to use pthread_setcancelstate internally in
the implementation, whenever a function might want to use functions
which are cancellation points but avoid becoming a cancellation point
itself. i could have just used a separate internal function for
temporarily inhibiting cancellation, but the solution in this commit
is better because (1) it's one less implementation-specific detail in
functions that need to use it, and (2) application code can also get
the same benefit.
previously, pthread_setcancelstate depended on pthread_self, which
would pull in unwanted thread setup overhead for non-threaded
programs. now, it temporarily stores the state in the global libc
struct if threads have not been initialized, and later moves it if
needed. this way we can instead use __pthread_self, which has no
dependencies and assumes that the thread register is already valid.
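roughly, the idea looks like the following minimal sketch; the names
(libc, threaded, canceldisable, the __pthread_self stand-in) are
illustrative assumptions, not the actual implementation:

    #include <errno.h>

    struct pthread { int canceldisable; };

    static struct {
        int threaded;                 /* nonzero once threads exist */
        int canceldisable;            /* parking spot before thread setup */
        struct pthread main_thread;
    } libc;

    /* stand-in for reading the thread register; needs no setup */
    static struct pthread *__pthread_self_sketch(void)
    {
        return &libc.main_thread;
    }

    int setcancelstate_sketch(int new, int *old)
    {
        if (new != 0 && new != 1) return EINVAL;
        if (libc.threaded) {
            struct pthread *self = __pthread_self_sketch();
            if (old) *old = self->canceldisable;
            self->canceldisable = new;
        } else {
            /* threads not initialized: keep the state in the global
             * struct; thread setup can move it into the thread later */
            if (old) *old = libc.canceldisable;
            libc.canceldisable = new;
        }
        return 0;
    }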
|
|
|
|
signals were wrongly left masked, and cancellability state was not
switched to disabled, during the execution of cleanup handlers.
|
|
this patch improves the correctness, simplicity, and size of
cancellation-related code. modulo any small errors, it should now be
completely conformant, safe, and resource-leak free.
the notion of entering and exiting cancellation-point context has been
completely eliminated and replaced with alternative syscall assembly
code for cancellable syscalls. the assembly is responsible for setting
up execution context information (stack pointer and address of the
syscall instruction) which the cancellation signal handler can use to
determine whether the interrupted code was in a cancellable state.
these changes eliminate race conditions in the previous generation of
cancellation handling code (whereby a cancellation request received
just prior to the syscall would not be processed, leaving the syscall
to block, potentially indefinitely), and remedy an issue where
non-cancellable syscalls made from signal handlers became cancellable
if the signal handler interrupted a cancellation point.
x86_64 asm is untested and may need a second try to get it right.
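conceptually, the check the cancellation signal handler makes looks
something like this sketch; the real context capture happens in the
per-arch asm, and all names here are illustrative:

    #include <stddef.h>

    struct cancel_ctx {
        void *sp;               /* stack pointer recorded by the syscall asm */
        const char *ip_begin;   /* start of the cancellable asm sequence */
        const char *ip_syscall; /* address of the syscall instruction */
        int cancel_pending;
    };

    /* act on cancellation immediately only if the signal interrupted the
     * cancellable asm sequence before the syscall instruction ran */
    static int should_act_now(const struct cancel_ctx *c,
                              const char *intr_ip, void *intr_sp)
    {
        if (!c->cancel_pending) return 0;
        return intr_sp == c->sp &&
               intr_ip >= c->ip_begin && intr_ip <= c->ip_syscall;
    }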
|
|
|
|
|
|
setting errno here is completely valid, but some programs, notably
busybox printf, assume that errno will not be set during output and
treat this as an error condition. in any case, skipping it slightly
reduces code size and saves time.
|
|
|
|
|
|
|
|
otherwise we cannot support an application's desire to use
asynchronous cancellation within the callback function. this change
also slightly debloats pthread_create.c.
|
|
we take advantage of the fact that unless self->cancelpt is 1,
cancellation cannot happen. so just increment it by 2 to temporarily
block cancellation. this drops pthread_create.o well under 1k.
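in code, the trick is just this (a sketch; the struct and field names
come from the description above, not from real headers):

    /* cancellation is only acted on when cancelpt == 1, so pushing the
     * counter to 2 or 3 temporarily inhibits it */
    struct pthread_sketch { int cancelpt; };

    static void block_cancel(struct pthread_sketch *self)   { self->cancelpt += 2; }
    static void unblock_cancel(struct pthread_sketch *self) { self->cancelpt -= 2; }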
|
|
with datagram sockets, depending on fprintf not to flush the output
early was very fragile; the new version simply uses a small fixed-size
buffer. it could be updated to dynamically allocate large buffers if
needed, but i can't envision any admin being happy about finding
64kb-long lines in their syslog...
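the approach is roughly the following; the buffer size, priority
formatting, and truncation policy here are assumptions, not the exact
code:

    #include <stdarg.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static void vsyslog_sketch(int fd, int prio, const char *fmt, va_list ap)
    {
        char buf[256];
        int len = snprintf(buf, sizeof buf, "<%d>", prio);
        int l2 = vsnprintf(buf + len, sizeof buf - len, fmt, ap);
        if (l2 >= 0) {
            len += l2;
            /* overly long messages are truncated rather than buffered
             * dynamically; one send per message, no partial flushes */
            if (len > (int)sizeof buf - 1) len = (int)sizeof buf - 1;
            send(fd, buf, len, 0);
        }
    }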
|
|
per the standard, SIGPIPE is not generated for SOCK_DGRAM.
|
|
it actually appears the hacks to block SIGPIPE are probably not
necessary, and potentially harmful. if i can confirm this, i'll remove
them.
|
|
some of these definitions were just plain wrong, others based on
outdated ancient "non-64" versions of the kernel interface.
as much as possible has now been moved out of bits/*.
these changes break abi (the old abi for these functions was wrong),
but since they were not working anyway it can hardly matter.
|
|
it should be noted that flock does not mix well with standard fcntl
locking, but nonetheless some applications will attempt to use flock
instead of fcntl if both exist. options to configure or small patches
may be needed. debian maintainers have plenty of experience with this
unfortunate situation...
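for reference, the two locking interfaces look like this; nothing below
is specific to this implementation:

    #include <fcntl.h>
    #include <sys/file.h>

    /* BSD-style: whole-file lock tied to the open file description */
    static int lock_with_flock(int fd)
    {
        return flock(fd, LOCK_EX);
    }

    /* POSIX fcntl-style: byte-range lock owned by the process */
    static int lock_with_fcntl(int fd)
    {
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        return fcntl(fd, F_SETLKW, &fl);   /* l_start/l_len 0 = whole file */
    }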
|
|
|
|
|
|
|
|
|
|
this eliminates the ugly static buffer in programs that use ptsname_r.
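typical usage with a caller-provided buffer (a minimal sketch; error
handling kept to the bare minimum):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int fd = posix_openpt(O_RDWR | O_NOCTTY);
        char name[64];
        if (fd >= 0 && ptsname_r(fd, name, sizeof name) == 0)
            printf("slave pty: %s\n", name);
        return 0;
    }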
|
|
after fork, we have a new process and the pid is equal to the tid of
the new main thread. there is no need to make two separate syscalls to
obtain the same number.
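in the child side of the fork wrapper this amounts to something like
the following sketch; the self->tid/self->pid field names are
illustrative:

    #include <sys/syscall.h>
    #include <unistd.h>

    struct thread_sketch { int tid; int pid; };

    static void after_fork_in_child(struct thread_sketch *self)
    {
        /* one syscall: the new main thread's tid equals the new pid */
        self->tid = self->pid = syscall(SYS_getpid);
    }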
|
|
we can do this without violating the namespace now that they are
macros/inline functions rather than extern functions. the motivation
is that gcc was generating giant, slow, horrible code for the old
functions, and now generates a single byte-swapping instruction.
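the inline definitions are along these lines (a sketch, not the exact
header); gcc recognizes the pattern and emits a single byte-swapping
instruction:

    #include <stdint.h>

    static inline uint16_t bswap16_sketch(uint16_t x)
    {
        return (uint16_t)(x >> 8 | x << 8);
    }

    static inline uint32_t bswap32_sketch(uint32_t x)
    {
        return x >> 24 | (x >> 8 & 0xff00) | (x << 8 & 0xff0000) | x << 24;
    }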
|
|
|
|
|
|
|
|
1. saved errno was not being restored, illegally clearing errno to 0.
2. no need to back up and restore errno around free; it will not touch
errno except perhaps when the program has already invoked UB...
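the intended pattern, sketched; the surrounding function is
hypothetical:

    #include <errno.h>
    #include <stdlib.h>

    static void internal_cleanup_sketch(void *tmp)
    {
        int saved = errno;
        /* ... calls that may legitimately clobber errno ... */
        errno = saved;   /* restore, never leave errno cleared to 0 */
        free(tmp);       /* no save/restore needed around free */
    }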
|
|
|
|
|
|
|
|
calling pthread_exit from, or pthread_cancel on, the timer callback
thread will no longer destroy the timer.
|
|
according to posix, readv "shall be equivalent to read(), except..."
that it places the data into the buffers specified by the iov array.
however on linux, when reading from a terminal, each iov element
behaves almost like a separate read. this means that if the first iov
exactly satisfies the request (e.g. a length-one read of '\n') and the
second iov has nonzero length, the syscall will block again after
getting the blank line from the terminal until another line is read.
simply put, entering a single blank line becomes impossible.
the solution, fortunately, is simple. whenever the buffer size is
nonzero, reduce the length of the requested read by one byte and let
the last byte go through the buffer. this way, readv will already be
in the second (and last) iov, and won't re-block on the second iov.
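roughly, the read setup becomes the following; the FILE fields and the
surrounding logic are simplified assumptions:

    #include <sys/types.h>
    #include <sys/uio.h>

    struct file_sketch { unsigned char *buf; size_t buf_size; };

    /* assumes len > 0; when a buffer exists, its iov absorbs the last
     * requested byte so the read always ends in the final iov */
    static ssize_t stdio_read_sketch(struct file_sketch *f,
                                     unsigned char *dest, size_t len, int fd)
    {
        struct iovec iov[2] = {
            { dest,   len - (f->buf_size ? 1 : 0) },
            { f->buf, f->buf_size },
        };
        return readv(fd, iov, 2);
    }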
|
|
|
|
|
|
|
|
POSIX clearly specifies the type of msg_iovlen and msg_controllen, and
Linux ignores it and makes them both size_t instead. to work around
this we add padding (instead of just using the wrong types like glibc
does), but we also need to patch up the struct before passing it to
the kernel in case the caller did not zero-fill it.
if i could trust the kernel to just ignore the upper 32 bits, this
would not be necessary, but i don't think it will ignore them...
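the layout and fix-up look roughly like this; the padding field names
and the little-endian 64-bit field order shown are assumptions:

    #include <sys/socket.h>
    #include <sys/uio.h>

    struct msghdr_sketch {
        void *msg_name;
        socklen_t msg_namelen;
        struct iovec *msg_iov;
        int msg_iovlen, __pad1;     /* POSIX int + padding = kernel size_t */
        void *msg_control;
        socklen_t msg_controllen;
        int __pad2;                 /* padding up to kernel size_t */
        int msg_flags;
    };

    /* zero the padding so the kernel, which reads a full size_t, never
     * sees garbage upper bits from an uninitialized struct */
    static void fixup_before_kernel(struct msghdr_sketch *mh)
    {
        mh->__pad1 = 0;
        mh->__pad2 = 0;
    }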
|
|
|
|
|
|
previously NULL was returned in ai_canonname, resulting in crashes in
some callers. this behavior was incorrect. note however that the new
behavior differs from glibc, which performs reverse dns lookups. POSIX
is very clear that a reverse DNS lookup must not be performed for
numeric addresses.
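for a numeric address the expected behavior is that ai_canonname
carries the numeric string itself; a minimal usage sketch:

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_flags = AI_CANONNAME;
        if (getaddrinfo("127.0.0.1", "80", &hints, &res) == 0) {
            printf("canonname: %s\n", res->ai_canonname);
            freeaddrinfo(res);
        }
        return 0;
    }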
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
this is something of a tradeoff, as now set*id() functions, rather
than pthread_create, are what pull in the code overhead for dealing
with linux's refusal to implement proper POSIX thread-vs-process
semantics. my motivations are:
1. it's cleaner this way, especially cleaner to optimize out the
rsyscall locking overhead from pthread_create when it's not needed.
2. it's expected that only a tiny number of core system programs will
ever use set*id() functions, whereas many programs may want to use
threads, and making thread overhead tiny is an incentive for "light"
programs to try threads.
|
|
|
|
|