[zeromq-dev] Thoughts on a pattern: "The Noisy Gymnasium"

Bruno D. Rodrigues bruno.rodrigues at litux.org
Sun Dec 22 23:21:59 CET 2013


On Dec 22, 2013, at 21:33, Randall Nortman <rnzmq at wonderclown.net> wrote:

> I am developing a new application using zmq in an unusual (from what
> I've seen so far) way.  I'd like to solicit input, particularly about
> reasons this may be a terrible idea.  It is essentially a variation on
> an event-driven architecture.
(…)
> 
> My tentative solution is to locally run a "broker" which does nothing
> other than re-broadcast all messages it receives on a SUB socket to a
> PUB socket.  It binds both the SUB and the PUB (to different ports),
> as opposed to the typical pattern of binding PUB and connecting the
> SUB to it.  I then run a bunch of small, single-purpose daemons (call
> them "agents" if you prefer) which connect their own SUB to the
> broker's PUB to hear messages, and their own PUB to the broker's SUB
> in order to broadcast messages.

Look at the XSUB - zmq_proxy - XPUB pattern: a dozen lines of code, or with my “avoid poll” optimization it scales to almost 10 Gbps on a single core :)
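
For reference, a minimal sketch of that proxy in C — the endpoints and ports here are just placeholders for the example, not from your setup:

#include <zmq.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();

    //  Agents connect their PUB sockets here
    void *frontend = zmq_socket (ctx, ZMQ_XSUB);
    zmq_bind (frontend, "tcp://*:5555");

    //  Agents connect their SUB sockets here
    void *backend = zmq_socket (ctx, ZMQ_XPUB);
    zmq_bind (backend, "tcp://*:5556");

    //  Forwards messages one way and subscriptions the other;
    //  blocks until the context is terminated
    zmq_proxy (frontend, backend, NULL);

    zmq_close (frontend);
    zmq_close (backend);
    zmq_ctx_destroy (ctx);
    return 0;
}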

Then just try to predict the number of messages and the min/max/avg/median message size, design for localhost and the local network (Gb), design for the worst case of a bad local network, and finally design for multiple proxy connections.

> What are the downsides?  Scalability, obviously, because
> rebroadcasting all events to everybody is not going to be very
> performant with either very high volume of messages or very large
> numbers of cooperating daemons.  Reliability may also be an issue;
> obviously PUB/SUB can drop messages, and if a given daemon is not up
> and running when a message comes across the wire, then whatever that
> daemon was supposed to do with it is not going to happen.  But the
> idea is that this is all happening on localhost, or at least a
> reliable LAN, and that there will be some supervisor process that
> first starts the broker and then everything else, and restarts dead
> daemons as necessary.  Still, this is not a solution with 100%
> reliability.
> 
> Anybody else have other thoughts?  Is this is bad idea?  A useless
> idea?  Better accomplished another way?  Any "Yeah, I did something
> like that once -- never again" stories?

I started my code with a pattern similar to this one, mainly inspired by the examples from the Code Connected book: a PULL-zmq_proxy-PUSH proxy on one side, distributing load round-robin, and an XSUB-zmq_proxy-XPUB proxy on the other side, distributing messages to all destinations (or a subset of them).
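
The PULL-zmq_proxy-PUSH side looks basically the same, just with different socket types — something along these lines (endpoints again made up for the example):

#include <zmq.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();

    //  Producers connect PUSH sockets here
    void *frontend = zmq_socket (ctx, ZMQ_PULL);
    zmq_bind (frontend, "tcp://*:6555");

    //  Workers connect PULL sockets here; PUSH round-robins among them
    void *backend = zmq_socket (ctx, ZMQ_PUSH);
    zmq_bind (backend, "tcp://*:6556");

    zmq_proxy (frontend, backend, NULL);

    zmq_close (frontend);
    zmq_close (backend);
    zmq_ctx_destroy (ctx);
    return 0;
}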

Then, when I tried to run several of these proxies in parallel (which works anyway), Pieter convinced me to go full mesh and make direct N-to-M connections, with a “controller” announcing the relevant endpoints.

I’m still using that code for “piping” data between data centers, especially the XSUB-XPUB proxy for fanning the data out to parallel environments.
 
