[zeromq-dev] queue overhead

sven.koebnick at t-online.de
Mon Jul 28 10:49:36 CEST 2014


There is a queue device in zmq that you can use as a template for your own mods.

I took it years ago and built the following functionality on top of it:

- creating a queue device that supplies two endpoints others can connect to (one broadcast, one req/rep)
- creating two internal endpoints, one for broadcasting, one for req/rep
- instantiating multiple workers that connect to the two internal endpoints
- managing a list of "ready" workers so that messages are dispatched only to idle ones
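The ready-list dispatching in the last step is essentially the load-balancing (LRU queue) pattern from the zguide. Below is a minimal sketch of that pattern in Python with pyzmq, not the poster's actual code: the inproc endpoint names, the READY/STOP markers, and the worker identities w0/w1 are all illustrative assumptions.

```python
import threading
import zmq

FRONTEND = "inproc://frontend"   # clients (req/rep side) connect here
BACKEND = "inproc://backend"     # workers connect here
NUM_WORKERS = 2
NUM_REQUESTS = 3

ctx = zmq.Context.instance()
bound = threading.Event()

def worker(ident):
    sock = ctx.socket(zmq.REQ)
    sock.identity = ident
    sock.connect(BACKEND)
    sock.send(b"READY")                         # announce availability
    while True:
        client, _, request = sock.recv_multipart()
        if request == b"STOP":
            break
        sock.send_multipart([client, b"", ident + b":" + request])
    sock.close()

def broker():
    frontend = ctx.socket(zmq.ROUTER)
    frontend.bind(FRONTEND)
    backend = ctx.socket(zmq.ROUTER)
    backend.bind(BACKEND)
    bound.set()
    ready = []                                  # identities of idle workers
    served = 0
    poller = zmq.Poller()
    poller.register(frontend, zmq.POLLIN)
    poller.register(backend, zmq.POLLIN)
    while served < NUM_REQUESTS or len(ready) < NUM_WORKERS:
        events = dict(poller.poll())
        if backend in events:
            msg = backend.recv_multipart()      # [worker, "", ...]
            ready.append(msg[0])                # worker is idle again
            if msg[2] != b"READY":              # a reply: route it back
                frontend.send_multipart([msg[2], b"", msg[4]])
                served += 1
        # dispatch a pending request only if an idle worker exists
        if frontend in events and ready and served < NUM_REQUESTS:
            client, _, request = frontend.recv_multipart()
            backend.send_multipart([ready.pop(0), b"", client, b"", request])
    for w in ready:                             # shut the workers down
        backend.send_multipart([w, b"", b"", b"", b"STOP"])
    frontend.close()
    backend.close()

def client(i, replies, lock):
    sock = ctx.socket(zmq.REQ)
    sock.connect(FRONTEND)
    sock.send(b"task%d" % i)
    rep = sock.recv()
    with lock:
        replies.append(rep)
    sock.close()

def run_demo():
    replies, lock = [], threading.Lock()
    bt = threading.Thread(target=broker)
    bt.start()
    bound.wait()                                # inproc: bind before connect
    ws = [threading.Thread(target=worker, args=(b"w%d" % i,))
          for i in range(NUM_WORKERS)]
    cs = [threading.Thread(target=client, args=(i, replies, lock))
          for i in range(NUM_REQUESTS)]
    for t in ws + cs:
        t.start()
    for t in cs + ws + [bt]:
        t.join()
    return replies

replies = run_demo()
```

A production version would also unregister the frontend from the poller while no worker is idle, to avoid spinning; the sketch keeps the loop short instead.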

I would have attached the code here, but the queue logic is so mixed up with my own service concept that it would be of little use to you.

To handle the several endpoints of different services, I use a central dispatcher that all external programs etc. go through and that knows where to send which messages (including subscription topics on the broadcast channel to target a specific service instance).
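Targeting a specific service instance over the broadcast channel works through plain SUB-side prefix filtering. A small sketch, assuming pyzmq and hypothetical topic names svcA/svcB:

```python
import time
import zmq

ctx = zmq.Context.instance()

pub = ctx.socket(zmq.PUB)
pub.bind("inproc://bcast")          # the broadcast channel

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://bcast")
sub.setsockopt(zmq.SUBSCRIBE, b"svcA.")  # prefix match: only instance A

time.sleep(0.2)                     # let the subscription propagate
pub.send(b"svcA.reload")            # delivered to this subscriber
pub.send(b"svcB.reload")            # dropped by the prefix filter
msg = sub.recv()

pub.close()
sub.close()
```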

This has worked fine for about 4 years now with a really old version of zmq.

I'm just about to check if I could drop my own code when using a new version of zmq ;o)



On 2014-07-27 20:13, Justin wrote:

> I have a "stable" (in the addressing sense) worker that I want to
> take advantage of multiple cores with. So I run N instances of this
> worker, where N is the number of cores on the host machine, and each
> worker binds to its own socket. Components that wish to make use of
> this worker service connect to all N worker instances.
> Unfortunately this is a little awkward. The connecting components
> must be configured with the N socket specs. And it's hard to
> automate, since even if the connecting components could generate
> socket specs programmatically, this still requires knowing the number
> of cores of the remote machine.
> What I'd like to do is put an adapter component in front of the N
> worker instances (on the same machine as the worker instances) that
> binds to a single socket. It would route to the N workers, and this
> is easily done since the adapter lives on the same machine and knows
> the number of cores. Connecting components could then simply connect
> to this adapter, and not need to care about the number of remote
> cores.
> The question I have is what kind of overhead this introduces. An MxN
> set of connections between M remote components and the N workers
> seems like it would be far more efficient than M->1->N, which looks
> like a bottleneck. But maybe in practice, if the routing is very
> simple, then it becomes negligible?
>
> Justin
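For what it's worth, the adapter Justin describes maps directly onto a ROUTER-to-DEALER proxy, where the forwarding loop runs inside libzmq and adds very little per-message work. A hedged sketch with pyzmq follows; the endpoint names and the "ok:" reply framing are illustrative, and the workers connect to the adapter's backend rather than binding their own sockets, which is the usual shape for this pattern:

```python
import threading
import zmq

FRONT = "inproc://adapter"   # the single socket components connect to
BACK = "inproc://fanout"     # internal fan-out to the N workers
N = 2                        # number of local workers (cores), known locally

ctx = zmq.Context.instance()
bound = threading.Event()

def adapter():
    frontend = ctx.socket(zmq.ROUTER)
    frontend.bind(FRONT)
    backend = ctx.socket(zmq.DEALER)
    backend.bind(BACK)
    bound.set()
    try:
        zmq.proxy(frontend, backend)   # forwards both ways, in native code
    except zmq.ContextTerminated:
        pass                           # context shut down: exit cleanly
    frontend.close(0)
    backend.close(0)

def worker():
    sock = ctx.socket(zmq.REP)
    sock.connect(BACK)                 # DEALER round-robins requests to us
    try:
        while True:
            sock.send(b"ok:" + sock.recv())
    except zmq.ContextTerminated:
        pass
    sock.close(0)

def run_demo():
    threading.Thread(target=adapter, daemon=True).start()
    bound.wait()                       # inproc: bind before connect
    for _ in range(N):
        threading.Thread(target=worker, daemon=True).start()
    req = ctx.socket(zmq.REQ)
    req.connect(FRONT)                 # one socket, regardless of N
    out = []
    for i in range(4):
        req.send(b"r%d" % i)
        out.append(req.recv())
    req.close(0)
    ctx.term()                         # unblocks the proxy and workers
    return out

adapter_replies = run_demo()
```

Since the proxy only shuffles frames between two sockets, the extra hop mostly costs one additional queue traversal per message; whether that matters compared with M×N direct connections is something to measure for the workload in question.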
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
