[zeromq-dev] queue overhead
Justin Karneges
justin at affinix.com
Sun Jul 27 20:13:31 CEST 2014
I have a "stable" (in the addressing sense) worker that I want to take
advantage of multiple cores. So, I run N instances of this worker, where
N is the number of cores on the host machine, and each worker binds to
its own socket. Components that wish to make use of this worker service
connect to all N worker instances.
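Roughly, each worker instance looks something like this (a minimal
sketch using libzmq; the REP socket, echo loop, and port scheme are just
stand-ins for the real service):

    /* worker sketch: instance <index> binds its own socket */
    #include <zmq.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int index = argc > 1 ? atoi(argv[1]) : 0;  /* which of the N instances this is */

        char endpoint[64];
        snprintf(endpoint, sizeof endpoint, "tcp://*:%d", 5560 + index);

        void *ctx = zmq_ctx_new();
        void *sock = zmq_socket(ctx, ZMQ_REP);
        zmq_bind(sock, endpoint);                  /* each instance binds its own socket */

        for (;;) {
            char buf[256];
            int n = zmq_recv(sock, buf, sizeof buf, 0);
            if (n < 0)
                break;
            if (n > (int) sizeof buf)
                n = sizeof buf;                    /* message was truncated to the buffer */
            zmq_send(sock, buf, n, 0);             /* echo back as placeholder work */
        }

        zmq_close(sock);
        zmq_ctx_destroy(ctx);
        return 0;
    }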
Unfortunately, this arrangement is a little awkward. The connecting
components must be configured with all N socket specs, and that's hard
to automate: even if the connecting components could generate the socket
specs programmatically, they would still need to know the number of
cores on the remote machine.
What I'd like to do is put an adapter component in front of the N worker
instances (on the same machine as the worker instances) that binds to a
single socket. It would route to the N workers, and this is easily done
since the adapter lives on the same machine and knows the number of
cores. Connecting components could then simply connect to this adapter,
and not need to care about the number of remote cores.
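The adapter itself could be little more than a zmq_proxy() device: a
ROUTER socket bound to the single public endpoint in front, and a DEALER
in back connected to each local worker. A rough sketch (endpoint names
and worker-count handling are illustrative; note the DEALER side simply
round-robins requests across the workers, so anything relying on stable
per-worker addressing would need explicit identity routing instead):

    /* adapter sketch: single public endpoint in front of N local workers */
    #include <zmq.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int n_workers = argc > 1 ? atoi(argv[1]) : 4;  /* e.g. number of local cores */

        void *ctx = zmq_ctx_new();

        /* one socket for remote components to connect to */
        void *frontend = zmq_socket(ctx, ZMQ_ROUTER);
        zmq_bind(frontend, "tcp://*:5555");

        /* one socket fanned out to the N worker instances on this machine */
        void *backend = zmq_socket(ctx, ZMQ_DEALER);
        for (int i = 0; i < n_workers; i++) {
            char endpoint[64];
            snprintf(endpoint, sizeof endpoint, "tcp://localhost:%d", 5560 + i);
            zmq_connect(backend, endpoint);
        }

        /* zmq_proxy() blocks, shuttling messages between the two sockets;
           the DEALER side round-robins requests across the connected workers */
        zmq_proxy(frontend, backend, NULL);

        zmq_close(frontend);
        zmq_close(backend);
        zmq_ctx_destroy(ctx);
        return 0;
    }

Since the adapter and workers share a machine, ipc:// endpoints between
them would presumably also avoid some of the loopback TCP overhead.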
The question I have is what kind of overhead this introduces. An MxN set
of connections between M remote components and the N workers seems like
it would be far more efficient than M->1->N, which looks like a
bottleneck. But maybe in practice, if the routing is very simple, then
it becomes negligible?
Justin