[zeromq-dev] Sharing socket safely between threads

Bill Torpey wallstprog at gmail.com
Sun May 13 15:35:25 CEST 2018


> On May 12, 2018, at 7:50 PM, Thomas Rodgers <rodgert at twrodgers.com> wrote:
> 
> 3.  Another situation is to have a single “main” thread set up all the sockets at startup and then hand them off to other threads (and never touch them again).  Strictly speaking you may only need a memory barrier, but using a mutex in this case ensures that you don’t try to use the socket concurrently from more than one thread.
> 
> The pattern of having a main thread set up sockets first, then handing them off to worker threads is, IMO, very sound, but let's perform a small thought experiment -
> 
> We use a mutex (I'm a C++ dev, so concretely a std::mutex guarding the socket). Thread A acquires the mutex, initializes the socket, and releases the mutex (ideally using one of the RAII scoped locking adapters). Thread B is then started and acquires the mutex; since we assume no further access from Thread A, it simply holds the lock for the duration of thread_main(). If Thread A (or any other thread) then attempted to acquire that mutex, it would deadlock, as Thread B now has ownership.
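
A minimal sketch of that scenario (mine, not from the mail above; libzmq C API with std::thread, endpoint made up):

    #include <mutex>
    #include <thread>
    #include <zmq.h>

    std::mutex sock_mtx;   // guards `sock`
    void*      sock;       // created by Thread A, then owned by Thread B

    void thread_b_main() {
        // Thread B acquires the mutex and holds it for its entire lifetime.
        std::lock_guard<std::mutex> hold(sock_mtx);
        zmq_send(sock, "work", 4, ZMQ_DONTWAIT);  // may EAGAIN if no peer yet
        // ... keep using `sock` until the thread exits ...
    }

    int main() {
        void* ctx = zmq_ctx_new();
        {
            // Thread A: create and connect the socket under the lock.
            std::lock_guard<std::mutex> setup(sock_mtx);
            sock = zmq_socket(ctx, ZMQ_PUSH);
            zmq_connect(sock, "tcp://127.0.0.1:5555");
        }
        std::thread b(thread_b_main);
        // Any further sock_mtx.lock() here would block until Thread B
        // exits -- the deadlock described above.
        b.join();
        zmq_close(sock);
        zmq_ctx_term(ctx);
    }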
> 
> This can properly be described as poor program behavior; from a contractual standpoint, you violated a fundamental assumption, so this is "undefined behavior" for the program. I don't think I agree with the "just use a mutex, it's easy and safe" logic in this design pattern.
> 
> The requirement is for a "full memory fence". This is to ensure that Thread A completes all of its socket setup (concretely, in terms of ordered memory writes) before any other thread, in this case Thread B, can use that socket (in terms of ordered memory reads/writes). Issuing just the memory fence before Thread B starts is sufficient to guarantee this ordering. If Thread A, or any other thread besides Thread B, accesses the socket after the memory fence, that is also undefined behavior.
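
For comparison, a fence-based handoff might look like the sketch below (mine, not from the mail; note the relaxed atomic flag, which the two fences need in order to synchronize with each other):

    #include <atomic>
    #include <thread>
    #include <zmq.h>

    static void*             g_sock;          // written by Thread A, read by Thread B
    static std::atomic<bool> g_ready{false};  // the fences synchronize through this flag

    void thread_a_setup(void* ctx) {
        g_sock = zmq_socket(ctx, ZMQ_PUSH);
        zmq_connect(g_sock, "tcp://127.0.0.1:5555");
        std::atomic_thread_fence(std::memory_order_release);  // publish the writes above
        g_ready.store(true, std::memory_order_relaxed);
    }

    void thread_b_main() {
        while (!g_ready.load(std::memory_order_relaxed)) { /* spin or sleep */ }
        std::atomic_thread_fence(std::memory_order_acquire);  // A's writes now visible
        zmq_send(g_sock, "hello", 5, ZMQ_DONTWAIT);  // only B touches the socket from here
    }

(If Thread B is only started after setup completes, the std::thread constructor already establishes this happens-before edge for free; the explicit fence mainly matters when the receiving thread is already running.)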

The thing the mutex buys you is that it allows you to *detect* when you’ve broken the contract (by blocking) — the fence does not.
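
For instance, a debug-only ownership check along these lines (a hypothetical helper; try_lock() may fail spuriously, so treat it as a tripwire rather than a guarantee):

    #include <cassert>
    #include <mutex>

    extern std::mutex sock_mtx;  // the mutex guarding the shared socket

    // If another thread holds the socket's mutex, try_lock() fails and the
    // broken ownership contract surfaces here instead of as a silent race.
    void assert_sole_owner() {
        bool acquired = sock_mtx.try_lock();
        assert(acquired && "socket mutex held elsewhere: ownership contract broken");
        if (acquired) sock_mtx.unlock();
    }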

> 
> If, instead, you always acquire and release the mutex for each use, then you wouldn't have the deadlocking issue, but you are then *ALWAYS* paying for a pair of atomic operations you didn't really need (atomic operations are most decidedly not free), because you weren't comfortable understanding your program's lifetime and data usage model.

Well, acquiring an uncontended mutex is actually pretty cheap.  It’s when there’s a lot of contention on the mutex that it gets expensive.

Still, I wouldn’t suggest using a mutex to share sockets between threads, except in the case of a ZMQ_PUB socket I mentioned earlier.

>
> On Sat, May 12, 2018 at 4:08 PM, Bill Torpey <wallstprog at gmail.com> wrote:
> 1. FWIW, my personal experience has been that it is safe to share a ZMQ_PUB socket between threads if protected with a mutex.  Basically the sequence is lock/send/unlock.
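
In code, the lock/send/unlock sequence is just this (a sketch; socket and mutex setup assumed elsewhere):

    #include <cstring>
    #include <mutex>
    #include <zmq.h>

    std::mutex pub_mtx;  // guards `pub` across all publishing threads
    void*      pub;      // shared ZMQ_PUB socket

    void publish(const char* topic, const char* body) {
        std::lock_guard<std::mutex> lock(pub_mtx);               // lock
        zmq_send(pub, topic, std::strlen(topic), ZMQ_SNDMORE);   // send topic frame...
        zmq_send(pub, body, std::strlen(body), 0);               // ...and body frame
    }                                                            // unlock (RAII)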
> 
> The alternative is to have an internal zmq_proxy with inproc on one side and something like TCP on the other side, which is an extra send/receive hop.  I have not found that to be necessary, and I have done a *lot* of testing (which has found several other edge cases).
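
For reference, that proxy alternative is roughly the following (a sketch; endpoint names are made up):

    #include <zmq.h>

    // Runs in its own thread: each worker connects its own private ZMQ_PUB
    // socket to inproc://pub-in, and the proxy forwards everything out over
    // TCP -- at the cost of the extra send/receive hop mentioned above.
    void proxy_thread(void* ctx) {
        void* frontend = zmq_socket(ctx, ZMQ_XSUB);
        zmq_bind(frontend, "inproc://pub-in");
        void* backend = zmq_socket(ctx, ZMQ_XPUB);
        zmq_bind(backend, "tcp://*:5556");
        zmq_proxy(frontend, backend, nullptr);  // blocks until the context terminates
    }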
> 
> FWIW the docs specifically bless this approach (http://zeromq.org/area:faq):
>> For those situations where a dedicated socket per thread is infeasible, a socket may be shared if and only if each thread executes a full memory barrier before accessing the socket. Most languages support a Mutex or Spinlock which will execute the full memory barrier on your behalf.
>> 
> 
> and here (http://zguide.zeromq.org/page:all#Multithreading-with-ZeroMQ):
> 
>> Technically it's possible to migrate a socket from one thread to another but it demands skill.
> 
> 
> I have not tried doing that with ZMQ_SUB sockets — there is really no point, as a single thread sitting in a zmq_poll call on multiple sockets is probably the cleanest design anyway.
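
i.e., something along these lines (a sketch; the two SUB sockets are assumed to be set up already):

    #include <zmq.h>

    // One thread owns all the SUB sockets and waits on them together.
    void poll_loop(void* sub1, void* sub2) {
        zmq_pollitem_t items[] = {
            { sub1, 0, ZMQ_POLLIN, 0 },
            { sub2, 0, ZMQ_POLLIN, 0 },
        };
        char buf[256];
        while (true) {
            zmq_poll(items, 2, -1);  // block until at least one socket is readable
            if (items[0].revents & ZMQ_POLLIN)
                zmq_recv(sub1, buf, sizeof buf, 0);
            if (items[1].revents & ZMQ_POLLIN)
                zmq_recv(sub2, buf, sizeof buf, 0);
        }
    }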
> 
> 
> 2. I can definitely testify that trying to share a ZMQ_SUB socket *without* mutexes is guaranteed to crash, and that trying to share a ZMQ_SUB socket *with* mutexes is almost impossible to do without deadlocking.
> 
> 
> 3.  Another situation is to have a single “main” thread setup all the sockets at startup and then hand them off to other threads (and never touch them again).  Strictly speaking you may only need a memory barrier, but using a mutex in this case ensures that you don’t try to use the socket concurrently from more than one thread.  
> 
> 
> 4.  Last but not least, if you’re planning on using inproc sockets for inter-thread communication, you may find this helpful:  https://github.com/WallStProg/zmqtests/tree/master/threads
> 
> 
>> On May 8, 2018, at 3:58 PM, Thomas Rodgers <rodgert at twrodgers.com> wrote:
>> 
>> No, not really. I have designed my zmq-based systems to either -
>> 
>> * orchestrate connections over the inproc transport
>> * perform the delicate dance of explicitly passing a fresh socket to a fresh thread the way that czmq actors <http://czmq.zeromq.org/manual:zactor> work,
>>   as implemented in azmq <https://github.com/zeromq/azmq/blob/master/azmq/detail/actor_service.hpp> 
>> 
>> On Mon, May 7, 2018 at 11:58 AM, Attila Magyari <atti86 at gmail.com> wrote:
>> This sounds promising. My concern is whether there are background threads internal to the ZMQ library that would use the socket out of sync with mine, or anything else that I'm not thinking of. I can ensure that the socket won't be used from multiple threads.
>> 
>> Do you have any experience with this, or only theory?
>> 
>> Thank you,
>> Attila
>> 
>> 
>> On Mon, May 7, 2018 at 5:49 PM Thomas Rodgers <rodgert at twrodgers.com> wrote:
>> The zmq docs say you need to ensure a ‘full fence’; a mutex is one way of achieving that.
>> 
>> If you are using C++ (-std=c++11 or later) and you can guarantee there is no racy usage from another thread, you can issue a memory fence using -
>> 
>> std::atomic_thread_fence
>> 
>> On Mon, May 7, 2018 at 8:42 AM Attila Magyari <atti86 at gmail.com> wrote:
>> Hello,
>> 
>> I know that zmq sockets are not thread safe. However, I can't think of a good design (explained below) that doesn't use the socket in several threads. What I want to do is connect the socket in one thread, and then use it in a different one until the end, and never in parallel between several threads. I will guard the access with a mutex as well. Do you think this will be safe?
>> 
>> Just in case someone might have a better idea for my design, here is what I'm trying to do:
>> I have an application which starts another process and passes a port to the newly created process, so it can connect back to the main application through a ZMQ_PAIR socket. Both processes are local to the machine. The main application will make requests of the second one, but the second one can also initiate messages without an explicit request. Do you have a nicer design to achieve something like this?
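
A sketch of the parent side of that design (mine; it assumes a hypothetical ./child executable that takes the endpoint as its first argument):

    #include <cstdlib>
    #include <string>
    #include <zmq.h>

    int main() {
        void* ctx  = zmq_ctx_new();
        void* pair = zmq_socket(ctx, ZMQ_PAIR);
        zmq_bind(pair, "tcp://127.0.0.1:*");  // let the OS pick a free port

        char endpoint[256];
        size_t len = sizeof endpoint;
        zmq_getsockopt(pair, ZMQ_LAST_ENDPOINT, endpoint, &len);  // actual address

        // Hypothetical child launch: the child zmq_connect()s its own
        // ZMQ_PAIR socket back to this endpoint.
        std::system((std::string("./child ") + endpoint + " &").c_str());

        // With PAIR, either side can send at any time -- no prior request needed.
        char buf[256];
        zmq_recv(pair, buf, sizeof buf, 0);

        zmq_close(pair);
        zmq_ctx_term(ctx);
    }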
>> 
>> Thank you in advance!
