[zeromq-dev] atomic ops
Luca Fascione
lukes at wetafx.co.nz
Thu Feb 9 04:54:42 CET 2012
Sorry, I thought by "lock free" you meant a progress guarantee: that at
least one thread in your pool will provably (in the mathematical sense)
eventually come out of the starvation zone.
It seems you're setting "in practice" arguments against the more formal
(arguably somewhat academic) point I was making, though...
Also, the CAS you're proposing is tantamount to a spinlock; I don't
quite see the difference. You're saying: check, try something, check if
anything changed, try again... That's the same shape as the spinlock:
try to set the flag to 1 and proceed if it was 0; otherwise (maybe after
waiting a tiny bit) try to set it to 1 again, and so on (with
exponential backoff if you're fancy)...
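For concreteness, the two shapes might look like this in C11 atomics (a toy sketch with made-up names, not 0MQ's actual code): a CAS retry loop on a refcount next to a test-and-set spinlock. Both are "try, check what happened, loop".

```c
#include <stdatomic.h>

/* Hypothetical refcount, for illustration only. */
static atomic_int refcount = 1;

/* CAS retry loop: read, compute, try to publish; if another thread
   changed the value in between, the CAS fails and we go around again. */
int incref_cas(void) {
    int old = atomic_load(&refcount);
    /* on failure, 'old' is refreshed to the current value */
    while (!atomic_compare_exchange_weak(&refcount, &old, old + 1)) {
        /* lost the race (or spurious failure); retry */
    }
    return old + 1;
}

/* Test-and-set spinlock guarding a plain counter: structurally the
   same busy loop, "set to 1, proceed if it was 0, else try again". */
static atomic_flag lock = ATOMIC_FLAG_INIT;
static int guarded = 0;

int incref_spinlock(void) {
    while (atomic_flag_test_and_set(&lock)) {
        /* spin; a fancier version would back off here */
    }
    int v = ++guarded;
    atomic_flag_clear(&lock);
    return v;
}
```

In both cases a thread that keeps losing the race keeps burning its timeslice on the loop, which is the similarity being argued here.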
Indeed, as soon as the scheduler takes the timeslice away from either
thread, the other (assuming it stays scheduled) will be able to make
progress. But that's "cheating" from a formal point of view...
So, yeah, I can see it's lock-less, but I don't see that it's lock-free,
I'm afraid. Nor wait-free for that matter.
Your overarching point stands: this is unlikely to become a real problem
on normal systems with time-slice scheduling and the like. But on a
simplistic, non-time-sharing system in which you had a thread per
"core", where threads cooperatively return the core only when they're
done, and assuming they're clocked the same (which is likely), it seems
you could starve if the alignment is just right (or just unlucky,
really).
No?
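That "just right" alignment can be played out deterministically in a single-threaded toy (made-up names; a simulation of the schedule, not real concurrency): if actor B's CAS always lands after actor A has already published, B fails every round and never completes its increment, even though the counter itself keeps advancing.

```c
#include <stdatomic.h>

static atomic_int counter = 0;
static int b_completed = 0;

/* One round of a perfectly aligned schedule: both actors snapshot the
   counter, then A publishes first, so B's snapshot is always stale. */
void lockstep_round(void) {
    int snap_a = atomic_load(&counter);
    int snap_b = atomic_load(&counter);  /* same value as snap_a */

    /* A's CAS succeeds: counter moves from snap_a to snap_a + 1. */
    atomic_compare_exchange_strong(&counter, &snap_a, snap_a + 1);

    /* B's CAS always fails: counter no longer equals snap_b. */
    if (atomic_compare_exchange_strong(&counter, &snap_b, snap_b + 1))
        b_completed++;  /* never taken under this schedule */
}
```

Running this for any number of rounds, the counter advances every round (A's work) while B's completion count stays at zero, which is the individual starvation described above.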
Luca
On 09/02/12 15:18, john skaller wrote:
> On 09/02/2012, at 12:53 PM, Luca Fascione wrote:
>
>> I agree with everything else you said,
>> but I'm not sure I understand why this is lock free... (I see it has no
>> mutex locks, indeed...)
> Well, if there are no locks it's lock free :) At the core, most locks
> use spin-locks; spin-locks, obviously, are CPU-intensive if they don't
> sleep a bit, and create a latency issue if they do. Users don't
> normally do low-level locking; it's done by the OS, which means
> an expensive context switch.
>
> So lock free is supposed to be cool. Lock free only works in particular
> algorithms. Thread safe lock free data structures/algorithms are supposedly good
> because they're fast and don't waste resources.
>
>> Couldn't you have an infinite pingpong of a couple threads starving each
>> other, because they never manage to agree on what the refcount should be?
>> (as in they cause each other's check of the ref count to fail forever)
> In theory, generally yes. In practice, this is rare.
>
> The theory can be modified to ensure it doesn't happen by making
> an assumption called "progress". Roughly that means that, however slowly,
> all threads will actually execute a bit, so eventually the algorithm will succeed.
>
> Hard real-time operating systems do not provide progress across priority
> boundaries: high-priority threads always execute to the exclusion of lower-
> priority ones. Most OSes, however, try to exercise all the threads a bit:
> high-priority threads get more CPU, but never all of it.
>
> This is particularly important for squashing certain kinds of DoS attacks
> that hit higher-priority layers but need to be managed from a lower-
> priority (application) level: if you want to shut down your network interface
> card because it's being spammed, you need enough cycles to type
> the command on the console :)
>
> BTW: I'm not saying the 0MQ implementation is bugged, just explaining
> why I'm suspicious :)
>
>
> --
> john skaller
> skaller at users.sourceforge.net
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
--
Luca Fascione
Rendering Research Lead - Weta Digital
Phone: +64 4 909 6870 (x6870)
Mobile: +64 21 0764 862