[zeromq-dev] sending from multiple threads

Colin Ingarfield colin at ingarfield.com
Thu Dec 1 22:34:34 CET 2016



On 12/1/16 1:30 PM, Lineker Tomazeli wrote:
> Hi Luca, 
>
> Thanks for answering. See my comments below.
>
> Lineker Tomazeli
> tomazeli.net <http://tomazeli.net>
>
<snip>
>
>     > *Second question :*
>     >
>     > What is the correct approach for a "zeromq" thread to notify
>     > other threads that a new message was received?
>     >  - raise an event on another thread?
>     >  - use a thread-safe producer/consumer queue to hand off
>     received messages?
>
>     For inter-thread communication you could use an inproc:// endpoint,
>     which is essentially a lock-less in-memory queue. You can build any
>     model you want with it as a communication channel between threads. And
>     given they all take zmq_msg_t, you can just pass them along directly.
>
>     If the tasks to distribute are all equal in cost/time, a simple and
>     common pattern is to use a round-robin type of socket to distribute
>     work. There are a lot of examples and suggestions in the zguide.
>
>
> So would it be OK (in terms of performance, overhead, etc.) for each
> thread that needs to send a message to connect to the "sender thread"
> using inproc://?
>
> worker thread ------inproc:// ---->  
> worker thread ------inproc:// ---->     sender thread -----tcp://
> -----> server 
> worker thread ------inproc:// ---->
>
Depending on how frequently worker threads need to communicate with the
sender thread, this is a valid approach.  I've done this successfully in
the past.  I have also used a single long-lived inproc connection to the
sender thread, shared by multiple worker threads.  In that case the
shared connection must be guarded by a mutex.
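To make the first approach concrete, here is a minimal sketch (mine, not
from the thread) of the fan-in diagram above, written with pyzmq.  The
PUSH/PULL socket types and the endpoint name "inproc://sender" are my
choices for illustration; any socket pair that fits your pattern would do.

```python
# Sketch: each worker thread owns its OWN socket connected to an
# inproc:// endpoint; a single sender thread pulls everything in.
# Assumptions: pyzmq is installed; socket types and endpoint name
# are illustrative, not prescribed by the thread.
import threading
import zmq

ctx = zmq.Context()

# inproc:// requires bind() before connect() on older libzmq versions,
# so the sender binds first and signals readiness via an Event.
bound = threading.Event()
received = []

def sender_thread(expected):
    pull = ctx.socket(zmq.PULL)
    pull.bind("inproc://sender")
    bound.set()
    for _ in range(expected):
        received.append(pull.recv_string())
    # A real sender would now forward each message over tcp:// to the server.
    pull.close()

def worker_thread(i):
    # One socket per thread: 0MQ sockets are not thread-safe, but with
    # per-thread sockets no locking is needed.
    push = ctx.socket(zmq.PUSH)
    bound.wait()
    push.connect("inproc://sender")
    push.send_string(f"work item from worker {i}")
    push.close()

n_workers = 3
s = threading.Thread(target=sender_thread, args=(n_workers,))
s.start()
workers = [threading.Thread(target=worker_thread, args=(i,)) for i in range(n_workers)]
for w in workers:
    w.start()
for w in workers:
    w.join()
s.join()
ctx.term()
print(sorted(received))
```

Since inproc:// is an in-memory queue within one zmq.Context, all sockets
here must share the same context object.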

In general, it's considered an anti-pattern in 0MQ to create and destroy
lots of sockets.  Some resource cleanup is asynchronous (handled by a
background reaper thread), so it is possible to run out of file handles
if you create and destroy sockets too quickly.
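The shared-connection variant I mentioned above sidesteps that churn: one
long-lived socket, used by all workers under a mutex.  A rough pyzmq
sketch (again my own; the endpoint name and PUSH/PULL pairing are
illustrative):

```python
# Sketch: a single long-lived PUSH socket shared by several worker
# threads, serialized with a lock.  Assumptions: pyzmq is installed;
# "inproc://sender" is an illustrative endpoint name.
import threading
import zmq

ctx = zmq.Context()

pull = ctx.socket(zmq.PULL)
pull.bind("inproc://sender")           # sender-side endpoint; bind before connect

shared_push = ctx.socket(zmq.PUSH)
shared_push.connect("inproc://sender")
push_lock = threading.Lock()           # 0MQ sockets are NOT thread-safe

def worker(i):
    # Every use of the shared socket must hold the mutex.
    with push_lock:
        shared_push.send_string(f"msg {i}")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

msgs = [pull.recv_string() for _ in range(4)]
print(sorted(msgs))

shared_push.close()
pull.close()
ctx.term()
```

The trade-off versus one-socket-per-thread is lock contention on every
send, in exchange for a fixed, small number of sockets over the
process's lifetime.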

-- Colin


