[zeromq-dev] queue overhead
Charles Remes
lists at chuckremes.com
Mon Jul 28 14:43:40 CEST 2014
Justin,
The way you would structure this is by setting up a zmq_proxy() that would bind to the “well known” port to receive tasks. On the back end, the proxy would also bind to a local tcp port (or via ipc, perf should be the same). The workers on the box would all connect to this backend port. The proxy would just receive incoming messages and then dole them out to the workers. If you use push/pull sockets, it will round robin them. If you need something more sophisticated, you can build your own proxy component and make the routing and load balancing logic as smart (or as dumb) as you like.
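Something like this is all the proxy needs to be, assuming PUSH/PULL sockets; the endpoint addresses here are made up, so adjust them to taste:

/* Minimal proxy sketch: bind a well-known front-end port for
 * incoming tasks and fan them out to local workers over ipc.
 * Build against libzmq, e.g. gcc proxy.c -o proxy -lzmq */
#include <zmq.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();

    /* Front end: the "well known" port that remote clients push tasks to. */
    void *frontend = zmq_socket (ctx, ZMQ_PULL);
    zmq_bind (frontend, "tcp://*:5555");

    /* Back end: local endpoint the N workers connect to; PUSH
     * round-robins messages across whoever is connected. */
    void *backend = zmq_socket (ctx, ZMQ_PUSH);
    zmq_bind (backend, "ipc:///tmp/workers.ipc");

    /* Blocks forever, shuttling messages frontend -> backend. */
    zmq_proxy (frontend, backend, NULL);

    zmq_close (frontend);
    zmq_close (backend);
    zmq_ctx_destroy (ctx);
    return 0;
}

Each worker then just connects a PULL socket to ipc:///tmp/workers.ipc and reads tasks; the PUSH back end gives you the round robin for free.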
The perf hit will be measurable but it’s likely on the order of a millisecond or two. It’s impossible to be more definitive than that since it is very dependent upon your hardware. If this is on a dedicated box with 10GbE interfaces, the router will be very fast. If this is on a Micro AWS instance in the cloud, then the router will probably take a few tens of milliseconds to do its work. If you use TCP endpoints, you can start to spread your workers over multiple boxes and scale horizontally. It’s pretty neat.
None of this is difficult, so benchmark it on your own hardware to determine the exact overhead. I think the flexibility that a configuration like this offers is well worth a tiny overhead.
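A rough way to benchmark it, assuming the proxy sketch above is already running; the endpoint names and message count are just placeholders:

/* Throughput check: run one copy as "bench worker" first, then run a
 * plain "bench" as the sender.  Compare the numbers against the same
 * test with the sender connected straight to a worker to see what the
 * extra hop costs on your hardware. */
#include <zmq.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MSG_COUNT 100000

int main (int argc, char *argv [])
{
    void *ctx = zmq_ctx_new ();

    if (argc > 1 && strcmp (argv [1], "worker") == 0) {
        /* Worker side: drain messages from the proxy's back end. */
        void *sink = zmq_socket (ctx, ZMQ_PULL);
        zmq_connect (sink, "ipc:///tmp/workers.ipc");

        char buf [32];
        zmq_recv (sink, buf, sizeof buf, 0);    /* first message starts the clock */
        struct timespec start, end;
        clock_gettime (CLOCK_MONOTONIC, &start);
        for (int i = 1; i < MSG_COUNT; i++)
            zmq_recv (sink, buf, sizeof buf, 0);
        clock_gettime (CLOCK_MONOTONIC, &end);
        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf ("%d msgs in %.3f s (%.0f msgs/s)\n",
                MSG_COUNT, secs, MSG_COUNT / secs);
        zmq_close (sink);
    }
    else {
        /* Sender side: push messages at the proxy's front end. */
        void *source = zmq_socket (ctx, ZMQ_PUSH);
        zmq_connect (source, "tcp://localhost:5555");
        for (int i = 0; i < MSG_COUNT; i++)
            zmq_send (source, "x", 1, 0);
        zmq_close (source);
    }
    zmq_ctx_destroy (ctx);
    return 0;
}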
If you need redundancy then I suggest reading through the advanced patterns in the zguide. All of this is covered there in detail, usually with working code.
cr
On Jul 27, 2014, at 1:13 PM, Justin Karneges <justin at affinix.com> wrote:
> I have a "stable" (in the addressing sense) worker that I want to take
> advantage of multiple cores. So, I run N instances of this worker, where
> N is the number of cores on the host machine, and each worker binds to
> its own socket. Components that wish to make use of this worker service
> connect to all N worker instances.
>
> Unfortunately this is a little awkward. The connecting components must
> be configured with the N socket specs. And it's hard to automate this,
> since even if the connecting components could generate socket specs
> programmatically, this still requires knowing the number of cores of the
> remote machine.
>
> What I'd like to do is put an adapter component in front of the N worker
> instances (on the same machine as the worker instances) that binds to a
> single socket. It would route to the N workers, and this is easily done
> since the adapter lives on the same machine and knows the number of
> cores. Connecting components could then simply connect to this adapter,
> and not need to care about the number of remote cores.
>
> The question I have is what kind of overhead this introduces. An MxN set
> of connections between M remote components and the N workers seems like
> it would be far more efficient than M->1->N, which looks like a
> bottleneck. But maybe in practice, if the routing is very simple, then
> it becomes negligible?
>
> Justin
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev