[zeromq-dev] Limitations of patterns?

Martin Sustrik sustrik at moloch.sk
Wed Aug 25 12:51:33 CEST 2010

Hi Kelly,

> 	The first issue is easily solved with a sequence number and some
> buffering.  Unfortunately this is annoying and a pain in the butt for
> something like Zmq which hopes to be a standard.  It's also unnecessary as
> the proper way to deal with this would be a method for the system to note
> new connections, post the init data and then post a "pay attention to the
> rest of this", message in the normal stream of pub messages.  (I suspect
> this touches on the sub/pub filtering item.)
Yes. This is what's called a "last value cache". In short, when you 
connect you get a snapshot of the current state, then you'll get a 
stream of deltas.
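To make the snapshot/delta idea concrete, here's a minimal sketch in plain Python (not 0MQ code; all the names are illustrative) of how a subscriber can join a snapshot with a stream of sequence-numbered deltas:

```python
# Sketch: recovering current state from a snapshot plus sequenced deltas.
# The (seq, key, value) tuples stand in for messages on a PUB/SUB stream.

def recover_state(snapshot, snapshot_seq, deltas):
    """Apply only the deltas that are newer than the snapshot."""
    state = dict(snapshot)
    for seq, key, value in deltas:
        if seq <= snapshot_seq:
            continue  # already reflected in the snapshot
        state[key] = value
    return state

# Snapshot taken at sequence number 2; deltas 1..4 arrive on the stream.
snapshot = {"EURUSD": 1.27}
deltas = [(1, "EURUSD", 1.25), (2, "EURUSD", 1.27),
          (3, "EURUSD", 1.28), (4, "GBPUSD", 1.55)]
print(recover_state(snapshot, 2, deltas))
# {'EURUSD': 1.28, 'GBPUSD': 1.55}
```

The sequence number is what lets the subscriber discard deltas that are already baked into the snapshot.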

As for the solution, a last value cache cannot be an organic part of 
the PUB/SUB messaging pattern. Here's the rationale:

The PUB/SUB pattern is a generalisation of "multicast delivery", a.k.a. 
the "radio broadcast model". Think of broadcasting weather reports over 
the radio. If you want to know what the weather will be, you turn the 
radio on. That way you'll get a weather report each morning. However, 
you won't get one the first day, as the transmission may already be 
over. So you go to the newsstand, buy a newspaper and check the weather 
report there. The point is that you have to use two different channels 
to get the info (radio & newspaper).

The two channels cannot be joined, as with certain transports (such as 
PGM reliable multicast) it's not even technically possible -- the 
sender doesn't know when a subscriber has joined the transmission. The 
same applies to the radio example above. The broadcaster has no idea 
when individual radio receivers are turned on.

Additionally, generating the snapshot is not a generic algorithm; it 
depends on business logic. So, for example, a snapshot for video 
streaming is handled differently from a market data snapshot.

All in all, a last value cache has to be implemented using a PUB/SUB 
channel for updates and a REQ/REP channel for snapshots. Synchronising 
the two channels is not a trivial issue, so it would be useful to have 
an "LVC" wrapper on top of 0MQ that lets the user fill in the business 
logic while solving the synchronisation problem correctly and 
transparently.
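As a rough illustration of what such a wrapper might keep track of, here's a plain-Python sketch (lists stand in for the PUB/SUB and REQ/REP sockets; none of this is an actual 0MQ API):

```python
# Sketch of the server side of a last value cache: every update goes
# out as a sequenced delta, and a snapshot request returns the current
# state together with the sequence number it reflects.

class LastValueCache:
    """Keeps the last value per key plus a sequence number, so late
    joiners can fetch a snapshot and then follow the delta stream."""

    def __init__(self):
        self.state = {}
        self.seq = 0
        self.log = []  # stand-in for deltas sent on a PUB socket

    def publish(self, key, value):
        self.seq += 1
        self.state[key] = value
        self.log.append((self.seq, key, value))

    def snapshot(self):
        # Would be served over REQ/REP; the sequence number tells the
        # subscriber which deltas are already included in the snapshot.
        return self.seq, dict(self.state)

lvc = LastValueCache()
lvc.publish("temp", 21)
lvc.publish("temp", 22)
seq, snap = lvc.snapshot()
print(seq, snap)  # 2 {'temp': 22}
```

The business-specific part -- what exactly a "snapshot" means -- would be supplied by the user; here it's simply the last value per key.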

> 	The second item would be a very different problem.  That one is a
> bit more complicated in terms that it implies an ack to various messages in
> certain connection types.  A non-even distribution requires knowledge of
> completion states.  As such, downstream/upstream seems to me to require a
> new flag: "ZMQ_ACKREQUIRED".  Before ZMQ tries to post more messages to a
> downstream in this case, it will require a zmq_close to occur.
Yes. The problem here is that there's no way to control the number of 
outstanding requests on the fly. The existing pushback mechanism is 
based on TCP pushback, so it allows you to write messages until the TCP 
buffers are full and 0MQ's high watermark is reached. What you need 
instead is a hard limit. To implement it you need ACKs sent from the 
receiver to the sender. If you are interested in implementing this, let 
me know and I'll help you make sense of the existing code.
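The ACK-based hard limit described above is essentially credit-based flow control. A plain-Python sketch (lists stand in for the wire; this is not the existing 0MQ pushback code, and all names are made up):

```python
# Sketch of ACK-based flow control: the sender may have at most
# `limit` unacknowledged messages in flight; further messages queue
# up locally until an ACK arrives from the receiver.

from collections import deque

class CreditSender:
    def __init__(self, limit):
        self.limit = limit
        self.in_flight = 0
        self.queue = deque()  # messages waiting for credit

    def send(self, msg, wire):
        if self.in_flight < self.limit:
            self.in_flight += 1
            wire.append(msg)  # would be a socket write
        else:
            self.queue.append(msg)

    def on_ack(self, wire):
        """Receiver acknowledged one message; one more may go out."""
        self.in_flight -= 1
        if self.queue:
            self.in_flight += 1
            wire.append(self.queue.popleft())

wire = []
s = CreditSender(limit=2)
for m in ("a", "b", "c"):
    s.send(m, wire)
print(wire)   # ['a', 'b'] -- 'c' waits until an ACK arrives
s.on_ack(wire)
print(wire)   # ['a', 'b', 'c']
```

Unlike TCP pushback, the limit here counts whole messages rather than bytes in socket buffers, which is what an even work distribution needs.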

> 	Please take this as intended; I'm a newbie to Zqm so maybe I'm
> missing things.  But I am very experienced in networking and as such, know
> how to avoid silly waste.  My current work around's are wastes, and really
> should not be required.  Overall, being able to recv "connections" would
> solve many issues.
0MQ is so easy to use and scale exactly because individual connections 
are invisible to the user. Once you allow the user to handle individual 
connections, you'll end up with an equivalent of TCP. If that's what 
you want, on the other hand, why not use TCP proper?

