[zeromq-dev] PUSH/PULL - Queue sizes and Disconnecting from endpoints
Chuck Remes
cremes.devlist at mac.com
Mon Oct 24 18:35:05 CEST 2011
On Oct 24, 2011, at 11:19 AM, David Henderson wrote:
> Hi,
>
> I've got a few zmq questions after playing around with it for a few weeks
>
> A bit of background...
>
> I'm using the node.js library throughout the system.
>
> I have a number of frontend web servers taking requests that should be
> written to the database (referred to as TOP).
> As far as I could find, the node.js library doesn't have the
> pre-defined devices available (so I can't use the streamer). In its
> place, I have a PULL socket bound to one port, and a PUSH socket bound
> to another port. When PULL receives a message, it immediately sends it
> onwards using the PUSH socket. (I'll refer to this layer as MIDDLE)
> The last layer (referred to as BOTTOM) does the writing to the database.
You just reproduced *exactly* what a STREAMER device does. Later versions of the library removed zmq_device() because it was pointless to include devices whose functionality was so simple to reproduce.
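For what it's worth, the whole device fits in a few lines. A minimal sketch (in Python with pyzmq for illustration, since the node.js binding's API varies; the function name is mine, not part of 0mq):

```python
import zmq

def streamer(ctx, frontend_addr, backend_addr, count=None):
    """Minimal STREAMER: PULL from upstream TOP nodes, PUSH to
    downstream BOTTOM nodes, relaying each message unchanged."""
    pull = ctx.socket(zmq.PULL)
    pull.bind(frontend_addr)
    push = ctx.socket(zmq.PUSH)
    push.bind(backend_addr)
    forwarded = 0
    while count is None or forwarded < count:
        push.send(pull.recv())  # block for a message, relay it onward
        forwarded += 1
```

The node.js version is the same two sockets with the send happening inside the PULL socket's message callback.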
> Questions:
>
> - How can I get visibility of the queues/buffers at each level? I'm
> currently logging out the send backlog on the PUSH sockets, which is
> mildly useful, but I have found that messages seem to build up
> elsewhere in the system (TCP buffers, I guess)
0mq cannot tell you how many messages are queued because it is impossible to know: between OS buffers (on the sender *and* receiver), buffers on NICs, buffers in the routers in between, etc., there is no way to count how many messages are in flight.
Because of that, you would need to enhance your "MIDDLE" process to read messages from the PULL socket, queue them in a data structure that you own, and use that data to inform TOP or BOTTOM nodes about the current state (which will *not* be exact anyway). Sending that informational data around would be done via a different set of sockets.
One of the first things to learn is that most complex 0mq apps will use *lots* of sockets. Do *not* try to shoe-horn extra logic (like telling upstream nodes that you are "full") into a PULL or PUSH socket. It cannot work. Set up a second (or third or fourth) set of sockets for the express purpose of transmitting load or performance data. Those might be PUB/SUB, REQ/REP or PUSH/PULL sockets.
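To make that concrete, here is a hedged sketch (Python, names entirely mine) of the kind of application-owned buffer a MIDDLE node could keep between its PULL and PUSH sockets, with a stats payload ready to ship on a separate load-data socket:

```python
import collections
import json

class TrackedQueue:
    """Application-owned buffer sitting between the PULL and PUSH
    sockets. Because the application owns it, it can report its depth;
    0mq's internal pipes and the OS/TCP buffers below them cannot be
    inspected, so this number is approximate by design."""
    def __init__(self):
        self._q = collections.deque()

    def put(self, msg):
        self._q.append(msg)

    def get(self):
        return self._q.popleft()

    def depth(self):
        return len(self._q)

    def stats(self, node_name):
        # Payload to send on the *separate* PUB (or PUSH) socket that
        # exists only for load/performance data, as described above.
        return json.dumps({"node": node_name, "depth": self.depth()})
```

The PULL handler calls put(), the PUSH side drains with get(), and a timer periodically publishes stats() to whatever process manages load.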
> - If I do detect that one of the MIDDLE nodes is becoming overloaded,
> I'd ideally like to introduce a new node or two at that level (which I
> can do fine by having TOP and BOTTOM nodes connect() to it).
> Once I have the new nodes, I would like to disconnect the TOP nodes'
> socket from the overloaded MIDDLE node.
> The only way I can currently see to do this is to completely tear down
> the TOP sockets, re-instantiate and connect only to the good nodes.
> Is there a better way to achieve this?
See above. Your nodes will need to open up additional sockets for the purpose of sending/receiving load information. If a MIDDLE node gets overloaded, it will need to send a message to a load manager that can spawn additional MIDDLE nodes. Using HWM on the MIDDLE nodes, the TOP nodes will get some "backpressure" and their PUSH sockets will start favoring the new MIDDLE nodes.
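Setting the HWM is a one-liner. A sketch (Python/pyzmq; the helper name is mine — note that libzmq 3.x+ splits the option into ZMQ_SNDHWM/ZMQ_RCVHWM, while 2.x had a single ZMQ_HWM):

```python
import zmq

def make_top_push(ctx, endpoints, hwm=1000):
    """PUSH socket for a TOP node. With a modest send HWM, when one
    MIDDLE node stops draining its pipe fills quickly, and PUSH's
    built-in load balancing skips it in favor of MIDDLE nodes that
    still have room -- the "backpressure" described above."""
    sock = ctx.socket(zmq.PUSH)
    sock.set_hwm(hwm)  # pyzmq helper: sets both SNDHWM and RCVHWM
    for ep in endpoints:
        sock.connect(ep)
    return sock
```

Set the option before connect(); it applies to pipes created afterward.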
> Something like a
> drain_and_disconnect() method that removed an endpoint from the
> available endpoints being load-balanced, sent any messages already in
> send buffers, then completely disconnected from the endpoint.
> Is this already possible or feasible for a future release?
This isn't possible.
Take a look at some of the advanced patterns in the guide. Flow control is *not* part of 0mq but it can be done fairly easily *using* 0mq. You'll need to adapt those ideas to build the kind of flow control and load management that your application needs to scale. It should be possible to do this work as a generic mechanism built on top of 0mq. You could even document and name the pattern and add it to the guide. :)
cr