author | Rich Felker <dalias@aerifal.cx> | 2018-08-28 13:54:50 -0400
---|---|---
committer | Rich Felker <dalias@aerifal.cx> | 2018-08-28 13:54:50 -0400
commit | cdbbcfb8f5d748f17694a5cc404af4b9381ff95f (patch)
tree | e185acc4eb02ca3d84aff6e285f49160fe8e3266 /include/unistd.h
parent | 060ed9367337cbbd59a9e5e638a1c2f460192f25 (diff)
download | musl-cdbbcfb8f5d748f17694a5cc404af4b9381ff95f.tar.gz
fix dubious char signedness check in limits.h
commit 201995f382cc698ae19289623cc06a70048ffe7b introduced a hack
utilizing the signedness of character constants at the preprocessor
level to avoid depending on the gcc-specific __CHAR_UNSIGNED__ predef.
while this trick works on gcc, and presumably on the other compilers in
use, it's not clear that the behavior it depends on is actually
conforming. C11 6.4.4.4 ¶10 defines character constants as having type
int, and 6.10.1 ¶4 defines preprocessor #if arithmetic to take place
in intmax_t or uintmax_t, depending on the signedness of the integer
operand types, and it is specified that "this includes interpreting
character constants".
if character literals had type char and just promoted to int, it would
be clear that when char is unsigned they should behave as uintmax_t at
the preprocessor level. however, as written the text of the standard
seems to require that character constants always behave as intmax_t,
corresponding to int, at the preprocessor level.
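concretely, assuming an 8-bit plain char that is unsigned, the two readings
give opposite answers for a check of the form sketched above:

    constants follow char's signedness (uintmax_t):
        '\0' - 1  ->  0 - 1 wraps to UINTMAX_MAX, so  '\0' - 1 > 0  is true
    constants always behave as int, hence intmax_t (the literal text):
        '\0' - 1  ->  -1,                         so  '\0' - 1 > 0  is false

so a preprocessor following the literal text would classify an
unsigned-char target as if plain char were signed.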
since there is a good deal of ambiguity about the correct behavior and
a risk that compilers will disagree or that an interpretation may
mandate a change in the behavior, do not rely on it for defining
CHAR_MIN and CHAR_MAX correctly. instead, use the signedness of the
value (as opposed to the type) of '\xff', which will be positive if
and only if plain char is unsigned. this behavior is clearly
specified, and the specific case '\xff' is even used in an example,
under 6.4.4.4 of the standard.
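a minimal sketch of the resulting check, assuming the usual 8-bit char
ranges (the actual diff is not shown on this page):

```c
/* the value of '\xff' is -1 when plain char is signed and 255 when it is
 * unsigned, and 6.4.4.4 specifies this value unambiguously, so the
 * comparison below does not depend on whether the preprocessor computes
 * in intmax_t or uintmax_t. */
#if '\xff' > 0
#define CHAR_MIN 0
#define CHAR_MAX 255
#else
#define CHAR_MIN (-128)
#define CHAR_MAX 127
#endif
```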