[zeromq-dev] Forwarding ROUTER to PUB

Lindley French lindleyf at gmail.com
Wed Jan 22 20:38:28 CET 2014

I've tried the push/pull solution with a splitter. The code is not as
simple to get right as one might suppose, and it's about a hundred times
less elegant than the pub/sub approach.
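For concreteness, here is roughly the shape that splitter takes. This is a minimal sketch in Python with pyzmq (the thread's actual code would be C or Java; the endpoint names and routing keys are illustrative, not from the thread): one PULL feeding a splitter that dispatches on the first frame to a per-consumer PUSH socket.

```python
import zmq

ctx = zmq.Context.instance()

# Producer pushes messages whose first frame is a routing key.
producer = ctx.socket(zmq.PUSH)
producer.bind("inproc://splitter-in")   # endpoint names are illustrative

splitter_in = ctx.socket(zmq.PULL)
splitter_in.connect("inproc://splitter-in")

# One PUSH/PULL pair per consumer, keyed by the routing frame.
outputs, consumers = {}, {}
for key in (b"A", b"B"):
    out = ctx.socket(zmq.PUSH)
    out.bind(f"inproc://out-{key.decode()}")
    pull = ctx.socket(zmq.PULL)
    pull.connect(f"inproc://out-{key.decode()}")
    outputs[key], consumers[key] = out, pull

producer.send_multipart([b"A", b"hello"])
producer.send_multipart([b"B", b"world"])

# The splitter: check the first frame, look up the target socket,
# and forward the remainder of the message onwards.
for _ in range(2):
    frames = splitter_in.recv_multipart()
    outputs[frames[0]].send_multipart(frames[1:])

msg_a = consumers[b"A"].recv_multipart()
msg_b = consumers[b"B"].recv_multipart()
print(msg_a, msg_b)  # [b'hello'] [b'world']
```

Note how much bookkeeping this needs compared to PUB/SUB: you own the routing table, the per-consumer sockets, and the endpoint naming yourself, which is exactly the inelegance complained about above. In exchange, PUSH blocks at HWM instead of dropping, giving backpressure.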

On Thu, Jan 16, 2014 at 4:23 PM, Bruno D. Rodrigues <
bruno.rodrigues at litux.org> wrote:

> On Jan 16, 2014, at 20:27, Goswin von Brederlow <goswin-v-b at web.de> wrote:
> > On Thu, Jan 16, 2014 at 02:45:31PM -0500, Lindley French wrote:
> >>> This is a common issue. If you can't recover from dropped messages,
> >> PUB/SUB is not the correct pattern.
> >>
> >> In many cases, this is correct. I do not believe inproc is one of those
> >> cases, however. With inproc, you should have full control of who is
> >> subscribing and when they come up relative to the publisher. If your
> >> subscribers aren't running when you expect them to be running, that's a
> >> bug. If they aren't running fast enough, dropping messages *might* be a
> >> solution, or it might not. I don't feel that's a decision that can be
> made
> >> in the general case.
> >>
> >> Let me put it this way: If I need one-to-many semantics with
> backpressure
> >> and filtering, what should I use? PUB is the only one-to-many socket
> type.
> >> I can write my own filtering code, keep a vector of push sockets, etc.
> but
> >> that seems to defeat the point of ZMQ patterns. PUB is exactly what I
> want
> >> in every way *except* the HWM behavior.
> >
> > Add a splitter that simply checks the first frame, looks up the right
> > target socket and sends the remainder of the message onwards.
> > Then you can use a simple PUSH/PULL pattern.
> What if the PUB side is a bind? The advantage of PUB SUB is that we can
> set a well-known PUB bind endpoint and have multiple SUB nodes connecting
> to it. With your solution there’s a need for one socket per connection,
> meaning either the PUB node connects to each SUB node, or the PUB node
> needs to dynamically open bind ports. The former needs extra sockets to
> handshake the endpoint. The latter is prone to having a second connection
> stealing messages.
> On the other hand the context of this thread is inproc, where the
> endpoints are well known, so indeed it may work.
> I think I may have a solution for jeromq, but haven’t dug deep enough into
> the C zmq code to work out a similar solution there. I know and understand
> all the reasons stated in the book, but sometimes the environment is
> controlled enough to require either a “throttle to the speed of the fastest
> consumer” or even a “throttle to the speed of the slowest consumer”, if we
> need to guarantee all messages are delivered to all connected consumers
> even if it slows down the producing side.
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
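For comparison, the PUB/SUB shape discussed above is this compact (again a pyzmq sketch with illustrative endpoint names): one well-known bind endpoint, any number of SUB sockets connecting to it, with filtering handled by subscriptions. As the thread notes, with inproc the application controls who subscribes and when, so the usual slow-joiner caveat can be engineered away:

```python
import zmq

ctx = zmq.Context.instance()

# One well-known bind endpoint for the publisher.
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://events")   # endpoint name is illustrative

# Multiple SUB sockets connect to it; filtering is done by subscription.
subs = {}
for topic in (b"A", b"B"):
    sub = ctx.socket(zmq.SUB)
    sub.connect("inproc://events")
    sub.setsockopt(zmq.SUBSCRIBE, topic)
    subs[topic] = sub

# With inproc we control startup order: both SUBs are connected and
# subscribed before the first publish, so no slow-joiner loss here.
pub.send_multipart([b"A", b"to A"])
pub.send_multipart([b"B", b"to B"])

got_a = subs[b"A"].recv_multipart()   # SUB receives topic frame + payload
got_b = subs[b"B"].recv_multipart()
print(got_a, got_b)
```

What this sketch cannot express is the backpressure asked for in the thread: if a SUB falls behind and its queue reaches the high-water mark, PUB silently drops messages for that subscriber rather than blocking, which is the HWM behavior the whole discussion turns on.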
