[zeromq-dev] C++ assertion failed with Java client
john skaller
skaller at users.sourceforge.net
Thu Feb 2 14:39:39 CET 2012
On 02/02/2012, at 10:36 PM, Martin Lucina wrote:
> skaller at users.sourceforge.net said:
>>
>> Q: is it technically feasible to make a thread-safe wrapper around 0MQ sockets?
>
> Sure, chuck a bunch of locks around socket-related calls? A mutex implies a
> full memory barrier, so things should work OK at least on cache-coherent
> architectures.
>
> Performance will take a nose-dive of course, but if that is not a concern,
> go for it ...
BTW: are you so sure performance will suffer?
The reason I ask is related to existing overheads:
(a) malloc/free is used extensively in real applications; it is already
thread-safe, so it is already paying for some combination of locks and TLS.
(b) the use of "errno" by 0MQ is a performance overhead already:
errno is in TLS which has a nasty overhead. If you want to improve performance
give up on, or at least provide a sane alternative to, errno.
Fast barriers are provided on multi-cores, implemented directly in the hardware.
There are also lock-free synchronisation primitives on all modern CPUs.
Are you sure Python users will care? Python is roughly 10,000 times slower
than C, and it already takes a global lock every few VM instructions; so does
the OCaml bytecode interpreter. A couple of extra locks won't even be noticed.
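To make that concrete, here is a minimal sketch of the "chuck locks around
the calls" wrapper, assuming pthreads and the 2.x-style
zmq_send(socket, msg, flags) signature; the ts_socket type and ts_send name
are invented for illustration:

    // Hypothetical thread-safe wrapper: one mutex per socket,
    // every call on that socket serialised through it.
    #include <pthread.h>
    #include <zmq.h>

    struct ts_socket {
        void *sock;              // the underlying 0MQ socket
        pthread_mutex_t lock;    // protects all calls on that socket
    };

    int ts_send(ts_socket *s, zmq_msg_t *msg, int flags) {
        pthread_mutex_lock(&s->lock);
        int rc = zmq_send(s->sock, msg, flags);
        pthread_mutex_unlock(&s->lock);
        return rc;
    }

The per-call cost is one lock/unlock pair (uncontended in the common case),
which is in the same ballpark as the overheads (a) and (b) above.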
===========================
Hmmm .. here's the problem: multi-part messages. This is done very badly;
it was patched onto 0MQ. It should be supported by recv_array/send_array
operations, not by a flag and a sequence of calls. That approach is already
intrinsically unsafe in single-threaded mode, and untenable in multi-threaded
mode.
If we had these operations, locking could be automatic. The idea is simply to also
add a new call:
zmq_create_ts_context
and then all sockets derived from such a context would have a flag set which
causes locks to protect the operations. This will NOT protect the current
multi-part messages, so we would also need the new operations for that.
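To illustrate the shape of the proposal (zmq_create_ts_context,
zmq_send_array and the send_one_part helper are all hypothetical, not
existing API; pthreads and the 2.x send signature assumed again):

    #include <stddef.h>
    #include <pthread.h>
    #include <zmq.h>

    // Hypothetical layout: sockets from a "ts" context carry a flag and
    // a mutex, and a whole multi-part message goes out in one atomic call.
    struct socket_base {
        void *handle;            // the real 0MQ socket
        bool thread_safe;        // set iff created via zmq_create_ts_context
        pthread_mutex_t lock;    // used only when thread_safe is true
    };

    // stand-in for whatever the core does today for a single part
    static int send_one_part(socket_base *s, zmq_msg_t *part, bool more) {
        return zmq_send(s->handle, part, more ? ZMQ_SNDMORE : 0);
    }

    int zmq_send_array(socket_base *s, zmq_msg_t *parts, size_t n) {
        if (s->thread_safe) pthread_mutex_lock(&s->lock);
        int rc = 0;
        for (size_t i = 0; i < n && rc == 0; ++i)
            rc = send_one_part(s, &parts[i], i + 1 < n);
        if (s->thread_safe) pthread_mutex_unlock(&s->lock);
        return rc;
    }

Because the whole message is sent in one call, the lock covers all the parts
and no other thread can interleave its own parts in the middle.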
The cost for maintaining the old semantics is:
(a) a check on a flag for every function call
(b) one machine word per socket for a (pointer to a) mutex
IMHO the check on the flag is fast enough to be ignored.
The one machine word overhead may be more significant.
There is another way to do this WITHOUT any per-socket memory overhead:
use a separate data structure in the context object to find the socket's
mutex object: I'd use a JudyLArray myself, but a C++ map would do.
The overhead is then a couple of words per context, which is negligible.
[The spawning of a new socket itself requires taking a mutex if you use such
a data structure, since modifications of that data structure must also
be thread-safe.]
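A sketch of that variant, with std::map standing in for the JudyLArray
(the ts_context type and lock_for lookup are invented names):

    #include <map>
    #include <pthread.h>

    // Hypothetical: the context owns a socket -> mutex table, so the
    // socket objects themselves stay exactly as they are today.
    struct ts_context {
        pthread_mutex_t map_lock;                 // protects the table itself
        std::map<void*, pthread_mutex_t*> locks;  // socket handle -> its mutex

        ts_context() { pthread_mutex_init(&map_lock, NULL); }

        pthread_mutex_t *lock_for(void *sock) {
            pthread_mutex_lock(&map_lock);
            pthread_mutex_t *&m = locks[sock];    // created on first use
            if (!m) {
                m = new pthread_mutex_t;
                pthread_mutex_init(m, NULL);
            }
            pthread_mutex_unlock(&map_lock);
            return m;
        }
    };

The per-call cost becomes a map lookup plus the lock itself; the memory cost
moves into the context instead of the socket.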
BTW: if this were a C++ API instead of a C one, the flag check would be
unnecessary: you'd just use virtual functions to dispatch to the
appropriate routine. And since the core is written in C++ anyhow,
this is another possible solution. The cost is then the cost of a virtual
dispatch, which is probably higher than a flag check ;(
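Roughly, in C++ terms (class names invented; error handling omitted;
2.x send signature assumed):

    #include <pthread.h>
    #include <zmq.h>

    // Hypothetical C++ shape: the locked variant overrides send(), so
    // there is no per-call flag check, only a virtual dispatch.
    class socket_t {
    public:
        explicit socket_t(void *h) : handle(h) {}
        virtual ~socket_t() {}
        virtual int send(zmq_msg_t *msg, int flags) {
            return zmq_send(handle, msg, flags);  // today's unlocked path
        }
    protected:
        void *handle;                             // underlying 0MQ socket
    };

    class locked_socket_t : public socket_t {
    public:
        explicit locked_socket_t(void *h) : socket_t(h) {
            pthread_mutex_init(&lock, NULL);
        }
        virtual int send(zmq_msg_t *msg, int flags) {
            pthread_mutex_lock(&lock);
            int rc = socket_t::send(msg, flags);
            pthread_mutex_unlock(&lock);
            return rc;
        }
    private:
        pthread_mutex_t lock;
    };

Whether you get a plain socket_t or a locked_socket_t would be decided once,
at creation time, by which kind of context the socket came from.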
--
john skaller
skaller at users.sourceforge.net