[zeromq-dev] queue overhead

Goswin von Brederlow goswin-v-b at web.de
Tue Jul 29 13:43:27 CEST 2014

On Sun, Jul 27, 2014 at 11:13:31AM -0700, Justin Karneges wrote:
> I have a "stable" (in the addressing sense) worker for which I want to
> take advantage of multiple cores. So I run N instances of this worker,
> where
> N is the number of cores on the host machine, and each worker binds to 
> its own socket. Components that wish to make use of this worker service 
> connect to all N worker instances.
> 
> Unfortunately this is a little awkward. The connecting components must 
> be configured with the N socket specs. And it's hard to automate this, 
> since even if the connecting components could generate socket specs 
> programmatically, this still requires knowing the number of cores of the 
> remote machine.
> 
> What I'd like to do is put an adapter component in front of the N worker 
> instances (on the same machine as the worker instances) that binds to a 
> single socket. It would route to the N workers, and this is easily done 
> since the adapter lives on the same machine and knows the number of 
> cores. Connecting components could then simply connect to this adapter, 
> and not need to care about the number of remote cores.
> 
> The question I have is what kind of overhead this introduces. An MxN set 
> of connections between M remote components and the N workers seems like 
> it would be far more efficient than M->1->N, which looks like a 
> bottleneck. But maybe in practice, if the routing is very simple, then 
> it becomes negligible?
> 
> Justin

You want one worker per core, and it is all on a single system? So why
not multithread your worker?

Have one main thread handle communication with the outside world through
a ROUTER socket and talk to the worker threads over inproc:// sockets.
That way clients have a single socket to connect to, and inproc:// avoids
the overhead of shipping messages to other processes or across the
network: within one process, messages are handed between threads without
serialization.
