From ea818ea8340c13742a4f41e6077f732291aea4bc Mon Sep 17 00:00:00 2001
From: Rich Felker
Date: Mon, 25 Aug 2014 15:43:40 -0400
Subject: add working a_spin() atomic for non-x86 targets

conceptually, a_spin needs to be at least a compiler barrier, so the
compiler will not optimize out loops (and the load on each iteration)
while spinning. it should also be a memory barrier, or the spinning
thread might keep spinning without noticing stores from other threads,
thus delaying for longer than it should.

ideally, an optimal a_spin implementation that avoids unnecessary
cache/memory contention should be chosen for each arch, but for now,
the easiest thing is to perform a useless a_cas on the calling
thread's stack.
---
 arch/arm/atomic.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm/atomic.h b/arch/arm/atomic.h
index 302e6d8f..52103542 100644
--- a/arch/arm/atomic.h
+++ b/arch/arm/atomic.h
@@ -103,6 +103,7 @@ static inline void a_store(volatile int *p, int x)
 
 static inline void a_spin()
 {
+	__k_cas(&(int){0}, 0, 0);
 }
 
 static inline void a_crash()
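
A hedged aside on usage (not part of the commit): the sketch below shows
the kind of spin-wait loop a_spin() is intended for, built from the
a_cas and a_store primitives declared earlier in arch/arm/atomic.h. The
spin_lock/spin_unlock names are illustrative assumptions, not musl
code. It shows why the barrier semantics described above matter:
without them, the compiler could keep *l in a register and spin forever
on a stale value.

/* Illustrative sketch only -- assumes the musl-style primitives
 * int a_cas(volatile int *p, int t, int s) (returns the old value) and
 * void a_store(volatile int *p, int x) from this header. */
static inline void spin_lock(volatile int *l)
{
	/* Loop until we atomically swap 0 -> 1. a_spin() must be at
	 * least a compiler barrier so the reload of *l on each
	 * iteration is not optimized out, and a memory barrier so the
	 * unlocking store from another thread is noticed promptly. */
	while (a_cas(l, 0, 1))
		a_spin();
}

static inline void spin_unlock(volatile int *l)
{
	a_store(l, 0);
}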