[zeromq-dev] Thread-safe sockets (cont.)

john skaller skaller at users.sourceforge.net
Fri Feb 17 01:29:03 CET 2012


On 17/02/2012, at 8:51 AM, Pieter Hintjens wrote:

> Hi,
> 
> Does anyone have a valid use case for thread-safe sockets?

There's an example in the test cases. Using ts sockets to serialise
sends from different threads is much easier than using inproc
transport.

You can take any existing code that writes to a socket, and split
some of the work previously done in a single thread out into
several threads, without changing anything other than the socket's
thread-safety bit. With the current design, that bit is set at
context construction time, in a place likely to be different from
the code handling transmission.

In other words, this provides transparency in *some* cases.
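
Concretely, something like this (a sketch only: the PUSH socket below
is an ordinary one, and the comment marks where the hypothetical
thread-safety bit from my patch would be set):

    #include <pthread.h>
    #include <string.h>
    #include <zmq.h>

    /* Four workers share ONE socket. If the socket carries the
       thread-safety bit, the concurrent zmq_send calls are serialised
       internally and the worker code needs no other change. */
    static void *worker (void *sock)
    {
        char msg [] = "work done";
        zmq_send (sock, msg, strlen (msg), 0);
        return NULL;
    }

    int main (void)
    {
        void *ctx = zmq_init (1);
        /* The thread-safety bit would be requested around here, at
           construction time (exact option: whatever the patch exposes). */
        void *push = zmq_socket (ctx, ZMQ_PUSH);
        zmq_connect (push, "tcp://localhost:5555");

        pthread_t t [4];
        for (int i = 0; i < 4; i++)
            pthread_create (&t [i], NULL, worker, push);
        for (int i = 0; i < 4; i++)
            pthread_join (t [i], NULL);

        zmq_close (push);
        zmq_term (ctx);
        return 0;
    }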

The alternative would be a network topology redesign: you'd have
to add an inproc socket for every thread, and also you'd have to
get the server to read them and write to the output.

This requires yet another socket and code to do the transfer.
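
To make the comparison concrete, here is that topology with the stock
API (the inproc://fan-in name and the worker count are just for
illustration):

    #include <pthread.h>
    #include <zmq.h>

    static void *ctx;

    static void *worker (void *arg)
    {
        (void) arg;
        /* One extra socket per thread, created in that thread. */
        void *push = zmq_socket (ctx, ZMQ_PUSH);
        zmq_connect (push, "inproc://fan-in");
        zmq_send (push, "work done", 9, 0);
        zmq_close (push);
        return NULL;
    }

    int main (void)
    {
        ctx = zmq_init (1);

        /* The collector: yet another socket, plus the transfer code. */
        void *pull = zmq_socket (ctx, ZMQ_PULL);
        zmq_bind (pull, "inproc://fan-in");   /* bind before any connect */
        void *out = zmq_socket (ctx, ZMQ_PUSH);
        zmq_connect (out, "tcp://localhost:5555");

        pthread_t t [4];
        for (int i = 0; i < 4; i++)
            pthread_create (&t [i], NULL, worker, NULL);

        char buf [256];
        for (int i = 0; i < 4; i++) {
            int n = zmq_recv (pull, buf, sizeof buf, 0);
            if (n > 0)
                zmq_send (out, buf, n, 0);
        }
        for (int i = 0; i < 4; i++)
            pthread_join (t [i], NULL);

        zmq_close (out);
        zmq_close (pull);
        zmq_term (ctx);
        return 0;
    }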

TS sockets are hands-down-no-argument easier than all that,
especially considering there are no devices in 3.1 which might
automate it.


> It seems
> that the semantics are fuzzy

The semantics seem well defined to me. All access to a thread-safe
socket is serialised by a mutex.
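
That is, the behaviour amounts to no more than this (a sketch of the
semantics, not of the actual implementation):

    #include <pthread.h>
    #include <zmq.h>

    /* Every operation on the wrapped socket is bracketed by one mutex. */
    typedef struct {
        void *sock;
        pthread_mutex_t lock;
    } ts_sock_t;

    int ts_send (ts_sock_t *s, void *buf, size_t len, int flags)
    {
        pthread_mutex_lock (&s->lock);
        int rc = zmq_send (s->sock, buf, len, flags);
        pthread_mutex_unlock (&s->lock);
        return rc;
    }

    int ts_recv (ts_sock_t *s, void *buf, size_t len, int flags)
    {
        pthread_mutex_lock (&s->lock);
        int rc = zmq_recv (s->sock, buf, len, flags);
        pthread_mutex_unlock (&s->lock);
        return rc;
    }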

Au contraire ... it is the semantics of inproc that seem fuzzy to me.
It isn't clear to me how much buffering is done by inproc.
Therefore it isn't clear to me to what extent, if any, inproc allows
threads to synchronise.

Felix pchannels are unbuffered and provide rigid synchronisation.
inproc would also do this with HWM = 0 if that option is available.
HWM = 1 is quite different, since a write would proceed after
buffering a message.
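
For example (hedged: ZMQ_SNDHWM is the real 3.1 option, but whether
HWM = 0 gives a pchannel-style rendezvous on inproc is exactly the
open question above):

    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_init (1);
        void *pair = zmq_socket (ctx, ZMQ_PAIR);

        /* HWM = 1: zmq_send returns once the single message is buffered,
           so the sender does NOT synchronise with the receiver. */
        int hwm = 1;
        zmq_setsockopt (pair, ZMQ_SNDHWM, &hwm, sizeof hwm);
        zmq_bind (pair, "inproc://sync");

        zmq_close (pair);
        zmq_term (ctx);
        return 0;
    }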

> and using this would lead to poor design.
> What happens if two threads are polling the same sockets, but one
> message arrives?

That would be bad design. It has nothing to do with the existence
of thread-safety. 

> What if two threads are in a blocking recv on the
> same socket?

It cannot happen. One thread will block in recv; the other will block
on the mutex held by the first thread. After the first thread receives
and releases the mutex, the second one acquires it and blocks in recv
in turn.

If multiple threads are waiting, one will be selected at random
to proceed next.
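
In terms of the ts_sock_t sketch above, the scenario is just:

    /* Both threads run this on the SAME ts_sock_t. The first to arrive
       holds the mutex while blocked inside zmq_recv; the second blocks
       in pthread_mutex_lock. When the first returns and unlocks, the
       second proceeds -- which waiter wins when several are queued is
       up to the mutex, i.e. effectively random. */
    static void *reader (void *arg)
    {
        ts_sock_t *s = arg;
        char buf [256];
        int n = ts_recv (s, buf, sizeof buf, 0);
        (void) n;
        return NULL;
    }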

> If we don't have a clear problem that this change is fixing, I'd like
> to patch it out.


If we don't have the API available, no one will ever have a clear
use case.

It is likely I would use this in my Felix binding .. not that this is a
clear use case. That feature would allow 0MQ transport to be
used safely from a thread pool.

BTW: please patch it out manually if you have to, don't revert, as my patch made
some other useful changes (refactoring the API so it doesn't call itself, and so it
consistently converts from the C data type to the C++ one).

The easiest way to patch it out is to just remove the API call and test case: 
there is no doco at this time. Or better, MOVE the API call to another header:

zmq_experimental.h

I hope you don't, because a binding-based mutex is NOT the same thing,
since it will not inter-operate with C/C++ code.

--
john skaller
skaller at users.sourceforge.net
