[zeromq-dev] Limitations of patterns?
Martin Sustrik
sustrik at 250bpm.com
Mon Aug 30 12:05:49 CEST 2010
Kelly,
>>> I forgot about the multicast case in this. Hmm, seems like maybe a
>>> generic wrapper for 'only' tcp/inproc/ipc types would be in order to
>>> handle
>>
>> An alternative to sequence numbers would be for the publisher to post a
>> tag message into the PUB/SUB feed that would correlate with a tag
>> attached to the snapshot.
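To flesh that out a bit, here's a rough, untested sketch of the subscriber
side using the 0MQ 2.x C API. The endpoint addresses, the "initme" request
and the message layout (a uint64_t tag at the start of the snapshot reply
and of every update) are just assumptions for illustration:

/*  Sketch only: subscriber that reconciles a REQ/REP snapshot with a
    PUB/SUB feed via a tag (sequence number) carried in both. Assumes
    the snapshot reply and every update start with a uint64_t tag.  */
#include <zmq.h>
#include <stdint.h>
#include <string.h>

int main (void)
{
    void *ctx = zmq_init (1);

    /*  Connect and subscribe before asking for the snapshot.  */
    void *sub = zmq_socket (ctx, ZMQ_SUB);
    zmq_connect (sub, "tcp://server:5556");
    zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);

    /*  Ask the snapshot service for the current state.  */
    void *req = zmq_socket (ctx, ZMQ_REQ);
    zmq_connect (req, "tcp://server:5555");

    zmq_msg_t request;
    zmq_msg_init_size (&request, 6);
    memcpy (zmq_msg_data (&request), "initme", 6);
    zmq_send (req, &request, 0);
    zmq_msg_close (&request);

    zmq_msg_t snapshot;
    zmq_msg_init (&snapshot);
    zmq_recv (req, &snapshot, 0);

    /*  First 8 bytes of the reply are the tag the snapshot corresponds to.  */
    uint64_t snapshot_tag;
    memcpy (&snapshot_tag, zmq_msg_data (&snapshot), sizeof snapshot_tag);
    /*  ... apply the snapshot ...  */
    zmq_msg_close (&snapshot);

    /*  Drop updates already covered by the snapshot, apply the rest.  */
    while (1) {
        zmq_msg_t update;
        zmq_msg_init (&update);
        zmq_recv (sub, &update, 0);
        uint64_t tag;
        memcpy (&tag, zmq_msg_data (&update), sizeof tag);
        if (tag > snapshot_tag) {
            /*  ... apply the update ...  */
        }
        zmq_msg_close (&update);
    }
    return 0;
}
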
>
> Yeah, I came up with that just a bit after writing the e-mail,
> thanks for not calling me an idiot though. :) The usage patterns take a bit
> of getting used to. But it does bring up a question about the guarantees
> made by zmq; assume the following:
>
> Server side:
> Bind req/rep socket to port 1234
> Bind pub/sub socket to port 1234
You cannot bind two sockets to the same port.
> Client side:
> Connect to pub/sub to port 1234
> Subscribe some stuff...
> Connect to req/rep to port 1234
>
> I "believe" this is valid, the docs are a bit non-clear on this.
> Are there any guarantees that when I send the "initme" message on the req
> that I'm fully connected to the pub/sub and won't miss anything? (I.e. both
> tcp on the same port, I assume sockets are shared by default?) If not, I
> can just use a poll timeout and repost the "announce" message till I get the
> reply correctly.
No, TCP ports are dedicated, not shared.
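So concretely the two services just live on two separate ports (the numbers
below are arbitrary). A minimal sketch of the server side, again in the 0MQ
2.x C API:

/*  Sketch: one process offering both a snapshot service (REQ/REP) and an
    update feed (PUB/SUB), each bound to its own TCP port.  */
#include <zmq.h>

int main (void)
{
    void *ctx = zmq_init (1);

    void *rep = zmq_socket (ctx, ZMQ_REP);
    zmq_bind (rep, "tcp://*:5555");      /*  snapshot requests  */

    void *pub = zmq_socket (ctx, ZMQ_PUB);
    zmq_bind (pub, "tcp://*:5556");      /*  update feed  */

    /*  ... serve snapshots on 'rep', publish updates on 'pub' ...  */

    zmq_close (pub);
    zmq_close (rep);
    zmq_term (ctx);
    return 0;
}

On the client side the poll-timeout-and-repost approach you describe works,
with one caveat: a REQ socket that has sent a request won't accept another
send until it gets a reply, so the usual workaround is to close it and open
a fresh REQ socket before re-sending.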
>>> I'm definitely going to want to look at this in the next couple
>>> weeks. Currently at work I'm using Zmq to organize an asset dependency
>>
>> You don't want to go that way unless you are IBM. Been there, seen it.
>> It's a way that leads to transacted messages, later on distributed
>> transactions, then redundant distributed message store clusters etc.
>
> Ok, I'm all ears as to what you suggest then.
What I was talking about was user-level acks. Don't even try to do that.
It'll get extremely hairy in no time. What we need is 0mq-level acks to
control the flow.
> In thinking about
> things and trying to make it zmq "feng shui" :), I think this is what I
> would suggest:
>
> Push (Downstream) thread:
> Make the push/downstream socket with a "non blocking" flag.
> Send messages as normal. This blocks until some client connects "AND" says
> it is willing to accept work.
>
> Pull (Upstream) client:
> Make the pull socket as normal.
> Call "zmq_msg_accept( int x )" with the number of worker slots it will
> accept at this time.
> As things complete, call "zmq_pull_accept" with the number of further work
> items you want to accept.
That's basically the user-level ack. The problem with it is that once
you introduce that kind of API, people will immediately start to use it
to implement reliability -- while your intention was only to provide
flow control. Reliability is hard (actually it's impossible to solve) so
people would end up just being confused and unhappy about the whole product.
What should be done IMO is to implement acks inside 0MQ so that they are
invisible to the user. The user would set the maximum number of messages
in flight using the HWM socket option and that's it.
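In API terms that's all the application would ever see. Something like this
(ZMQ_HWM is a uint64_t in 0MQ 2.x; the endpoint and the limit of 100 are
arbitrary):

/*  Sketch: flow control via the high-water mark only, no user-level acks.
    With ZMQ_HWM set, 0MQ queues at most that many messages per peer;
    a further zmq_send blocks (or returns EAGAIN with ZMQ_NOBLOCK).  */
#include <zmq.h>
#include <stdint.h>

void *make_push_socket (void *ctx)
{
    void *push = zmq_socket (ctx, ZMQ_PUSH);
    uint64_t hwm = 100;     /*  at most 100 messages queued per peer  */
    zmq_setsockopt (push, ZMQ_HWM, &hwm, sizeof hwm);
    zmq_bind (push, "tcp://*:5557");
    return push;
}
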
>
> This keeps the behavior of both sides consistent with most other zmq
> concepts, I believe. The "accept" would simply say I'm both connected "and"
> willing to receive x number of messages before push should stop sending to
> me. This seems like the least intrusive and most appropriate solution from
> my current understanding of things. The int number of acceptable messages
> is simply to fit this into a zmq-styled way to connect up single-threaded
> workers on a separate machine with inproc worker threads. I.e. the main
> thread connects, makes x number of workers each with an inproc pull
> connection, accepts x work items to feed to its x inproc workers, likely
> one per CPU core, and as workers finish, issues more "accept"s.
Well, I think we agree on how things should work internally; we just
propose different APIs.
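With HWM standing in for the explicit "accept" calls, the layout quoted
above maps onto something like this rough sketch (endpoint names, the
thread count and the message handling are placeholders):

/*  Sketch: main thread pulls work from the network and fans it out to
    per-core worker threads over inproc; ZMQ_HWM bounds how much work is
    queued ahead of the workers. 0MQ 2.x C API, error handling omitted.  */
#include <zmq.h>
#include <pthread.h>
#include <stdint.h>

#define NUM_WORKERS 4                    /*  e.g. one per CPU core  */

static void *worker_routine (void *ctx)
{
    void *work = zmq_socket (ctx, ZMQ_PULL);
    zmq_connect (work, "inproc://work");
    while (1) {
        zmq_msg_t task;
        zmq_msg_init (&task);
        zmq_recv (work, &task, 0);
        /*  ... process the task ...  */
        zmq_msg_close (&task);
    }
    return NULL;
}

int main (void)
{
    void *ctx = zmq_init (1);

    /*  Fan-out socket; bind before starting workers so they can connect.  */
    void *fanout = zmq_socket (ctx, ZMQ_PUSH);
    uint64_t hwm = NUM_WORKERS;          /*  roughly "accept this many at once"  */
    zmq_setsockopt (fanout, ZMQ_HWM, &hwm, sizeof hwm);
    zmq_bind (fanout, "inproc://work");

    pthread_t threads [NUM_WORKERS];
    for (int i = 0; i != NUM_WORKERS; i++)
        pthread_create (&threads [i], NULL, worker_routine, ctx);

    /*  Work arrives from whatever machine pushes it out over TCP.  */
    void *source = zmq_socket (ctx, ZMQ_PULL);
    zmq_connect (source, "tcp://ventilator:5557");

    while (1) {
        zmq_msg_t task;
        zmq_msg_init (&task);
        zmq_recv (source, &task, 0);
        zmq_send (fanout, &task, 0);     /*  blocks once the workers are saturated  */
        zmq_msg_close (&task);
    }
    return 0;
}
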
> If this is still not proper for zmq, please explain why; currently it
> is as close as I can come up with to the proper usage patterns of zmq at
> this time. :)
Martin