[zeromq-dev] Sharing socket safely between threads
Attila Magyari
atti86 at gmail.com
Sun May 13 01:20:52 CEST 2018
Alright, thanks everyone, really useful info. I will rethink my design to
eliminate the need for sharing the sockets between threads :).
On Sun, May 13, 2018 at 1:03 AM Justin Karneges <justin at karneges.com> wrote:
> The real issue here is thread affinity. Mere thread-unsafety can always be
> worked around by the application developer, by ensuring that no two threads
> ever access the same resource at the same time. It is my understanding that
> libzmq sockets are not thread safe, but they also don't have thread
> affinity, so you can access a single socket from different threads if you
> handle safety on your own.
>
> Personally, I agree with the libzmq authors that trying to do I/O on
> the same socket from multiple threads is an anti-pattern. However, I
> consider certain kinds of sharing to be legitimate, for example creating a
> socket in one thread and then handing it off to a second thread, and only
> that second thread does I/O. I think that fits within the spirit of libzmq,
> since only one thread is actually doing I/O on the socket. If you find
> yourself doing I/O on a socket from multiple threads, protecting the access
> with a mutex, then you're in anti-pattern territory. That said, it should
> work just fine, and the libzmq authors' opinions won't stop you. :)
>
> FWIW, you might run into thread affinity in other libraries. For example,
> last I checked, PyZMQ sockets have thread affinity: if you create a
> socket in one thread and hand it off to another, I/O from the other thread
> will fail. But it is my understanding that libzmq itself has no such
> limitation.
>
> On Sat, May 12, 2018, at 2:08 PM, Bill Torpey wrote:
>
> 1. FWIW, my personal experience has been that it is safe to share a
> ZMQ_PUB socket between threads if protected with a mutex. Basically the
> sequence is lock/send/unlock.
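>
> A minimal sketch of that lock/send/unlock sequence, assuming libzmq's C
> API and C++11 (the socket and function names here are illustrative, not
> from my actual code):
>
>     #include <zmq.h>
>     #include <mutex>
>     #include <cstring>
>
>     std::mutex pub_mutex;   // guards every access to pub_socket
>     void* pub_socket;       // ZMQ_PUB socket, created elsewhere
>
>     // Any thread may call this; taking the mutex provides the full
>     // memory barrier the FAQ asks for and serializes the sends.
>     void safe_publish(const char* topic, const char* body)
>     {
>         std::lock_guard<std::mutex> lock(pub_mutex);
>         zmq_send(pub_socket, topic, strlen(topic), ZMQ_SNDMORE);
>         zmq_send(pub_socket, body, strlen(body), 0);
>     }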
>
> The alternative is to have an internal zmq_proxy with inproc on one side
> and something like TCP on the other side, which is an extra send/receive
> hop. I have not found that to be necessary, and I have done a *lot* of
> testing (which has found several other edge cases).
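>
> For reference, a sketch of that proxy alternative, assuming libzmq's C
> API (the endpoint names are illustrative). Each worker thread connects
> its own inproc PUSH socket to the frontend, so no socket is ever shared:
>
>     #include <zmq.h>
>
>     void run_proxy(void* ctx)
>     {
>         void* frontend = zmq_socket(ctx, ZMQ_PULL);  // from worker threads
>         zmq_bind(frontend, "inproc://pub-ingress");
>         void* backend = zmq_socket(ctx, ZMQ_PUB);    // out to subscribers
>         zmq_bind(backend, "tcp://*:5556");
>         zmq_proxy(frontend, backend, NULL);  // blocks; the extra send/receive hop
>     }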
>
> FWIW the docs specifically bless this approach (http://zeromq.org/area:faq):
>
> For those situations where a dedicated socket per thread is infeasible, a
> socket may be shared *if and only if* each thread executes a full memory
> barrier before accessing the socket. Most languages support a Mutex or
> Spinlock which will execute the full memory barrier on your behalf.
>
> and here (http://zguide.zeromq.org/page:all#Multithreading-with-ZeroMQ):
>
> Technically it's possible to migrate a socket from one thread to another
> but it demands skill.
>
>
> I have not tried doing that with ZMQ_SUB sockets — there is really no
> point, as a single thread sitting in a zmq_poll call on multiple sockets is
> probably the cleanest design anyway.
>
>
> 2. I can definitely testify that trying to share a ZMQ_SUB socket
> *without* mutexes is guaranteed to crash, and that trying to share a
> ZMQ_SUB socket *with* mutexes is almost impossible to do without
> deadlocking.
>
>
> 3. Another situation is to have a single “main” thread set up all the
> sockets at startup and then hand them off to other threads (and never touch
> them again). Strictly speaking you may only need a memory barrier, but
> using a mutex in this case ensures that you don’t try to use the socket
> concurrently from more than one thread.
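>
> A sketch of that hand-off, assuming C++11 and libzmq's C API (the names
> are illustrative). Note that std::thread construction itself
> synchronizes with the start of the new thread, so the hand-off below
> needs no extra fence as long as the main thread never touches the
> socket afterwards:
>
>     #include <zmq.h>
>     #include <thread>
>
>     int main()
>     {
>         void* ctx = zmq_ctx_new();
>         void* sock = zmq_socket(ctx, ZMQ_SUB);   // set up in the main thread
>         zmq_connect(sock, "tcp://localhost:5556");
>         zmq_setsockopt(sock, ZMQ_SUBSCRIBE, "", 0);
>
>         std::thread worker([sock] {              // only this thread does I/O
>             char buf[256];
>             while (zmq_recv(sock, buf, sizeof buf, 0) >= 0) {
>                 // process the message
>             }
>         });
>         worker.join();
>     }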
>
>
> 4. Last but not least, if you’re planning on using inproc sockets for
> inter-thread communication, you may find this helpful:
> https://github.com/WallStProg/zmqtests/tree/master/threads
>
>
> On May 8, 2018, at 3:58 PM, Thomas Rodgers <rodgert at twrodgers.com> wrote:
>
> No, not really. I have designed my zmq-based systems to either -
> * orchestrate connections over the inproc transport, or
> * perform the delicate dance of explicitly passing a fresh socket to a
> fresh thread, the way that czmq actors
> <http://czmq.zeromq.org/manual:zactor> work,
> as implemented in azmq
> <https://github.com/zeromq/azmq/blob/master/azmq/detail/actor_service.hpp>
>
> On Mon, May 7, 2018 at 11:58 AM, Attila Magyari <atti86 at gmail.com> wrote:
>
This sounds promising. My concern is whether there are background threads
> inside the ZMQ library that would use the socket out of sync with
> mine, or anything else that I'm not thinking about. I can ensure that the
> socket won't be used from multiple threads.
>
> Do you have any experience with this, or only theory?
>
> Thank you,
> Attila
>
>
> On Mon, May 7, 2018 at 5:49 PM Thomas Rodgers <rodgert at twrodgers.com>
> wrote:
>
The zmq docs say you need to ensure a ‘full fence’; a mutex is one way of
> achieving that.
>
> If you are using C++ (-std=c++11 or later) and you can guarantee there is
> no racy usage from another thread, you can issue a memory fence using -
>
> std::atomic_thread_fence
>
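> For example, a sketch of a release/acquire hand-off using that fence,
> assuming C++11 (the flag and function names are illustrative):
>
>     #include <atomic>
>     #include <zmq.h>
>
>     void* shared_socket = nullptr;
>     std::atomic<bool> ready{false};
>
>     void creator(void* ctx)          // runs in the first thread
>     {
>         shared_socket = zmq_socket(ctx, ZMQ_PAIR);
>         std::atomic_thread_fence(std::memory_order_release);
>         ready.store(true, std::memory_order_relaxed);
>     }
>
>     void user()                      // runs in the second thread
>     {
>         while (!ready.load(std::memory_order_relaxed)) { /* spin */ }
>         std::atomic_thread_fence(std::memory_order_acquire);
>         zmq_send(shared_socket, "hi", 2, 0);  // safe after the hand-off
>     }
>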
>
> On Mon, May 7, 2018 at 8:42 AM Attila Magyari <atti86 at gmail.com> wrote:
>
> Hello,
>
> I know that zmq sockets are not thread safe. However, I can't think of a
> good design (explained below) without using the socket in several threads.
> What I want to do is connect the socket in a thread, and then use it in a
> different one until the end, and never in parallel between several threads.
> I will guard the access with a mutex as well. Do you think this will be
> safe?
>
> Just in case someone might have a better idea for my design, here is what
> I'm trying to do:
> I have an application which starts another process, passes a port to the
> newly created process, so it can connect back to the main application
> through a ZMQ_PAIR socket. Both processes are local to the machine. The
> main application will send requests to the second one, but the second one can
> initiate messages without an explicit request as well. Do you have a nicer
> design to achieve something like this?
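>
> Concretely, the parent side I have in mind looks roughly like this (a
> sketch, assuming libzmq's C API; the endpoint and port handling are
> illustrative):
>
>     #include <zmq.h>
>     #include <cstdio>
>
>     int main()
>     {
>         void* ctx = zmq_ctx_new();
>         void* pair = zmq_socket(ctx, ZMQ_PAIR);
>         zmq_bind(pair, "tcp://127.0.0.1:*");   // let the OS pick a port
>
>         char endpoint[256];
>         size_t len = sizeof endpoint;
>         zmq_getsockopt(pair, ZMQ_LAST_ENDPOINT, endpoint, &len);
>         printf("pass %s to the child process\n", endpoint);  // placeholder for spawning the child
>
>         // PAIR is bidirectional: requests go out and unsolicited
>         // child-initiated messages arrive on the same socket.
>         char buf[256];
>         zmq_send(pair, "request", 7, 0);
>         zmq_recv(pair, buf, sizeof buf, 0);
>     }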
>
> Thank you in advance!
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>