[zeromq-dev] RPC design pattern
sustrik at 250bpm.com
Sun Apr 25 23:49:32 CEST 2010
Joe Holloway wrote:
> I think I am missing a subtlety, but if you are using a direct TCP
> connection for RPC, wouldn't you at least be able to distinguish the
> scenario where you can't connect the socket versus the scenario where
> the socket is broken after you started exchanging messages? In the
> scenario where the socket can't be connected to begin with, a client
> would be able to fail over to another service provider and attempt a
> new connection. All this assuming there aren't proxies and
> what-have-you in between the client and server. Connection pooling
> makes it a little trickier, but I think you would still have a good
> idea when to evict a connection from the pool, because you should know
> when the socket is broken.
Right. The problem is not finding out whether we can connect to a
particular endpoint. The problem arises when we are already connected,
we send a request, and we don't get a reply. There's no way of telling
whether the request was processed or not. Automatically failing over in
such a case means the request may be processed more than once.
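To make the ambiguity concrete, here is a minimal sketch (assumption: plain stdlib TCP sockets stand in for a 0MQ REQ/REP pair; the `server` and `processed` names are illustrative only). The server reads the request and processes it, but dies before replying. From the client's side, "processed, reply lost" and "never processed" look identical, so a blind retry risks double execution:

```python
import socket
import threading

processed = []  # what the server actually handled

def server(listener):
    conn, _ = listener.accept()
    request = conn.recv(1024)     # the request arrives...
    processed.append(request)     # ...and is processed...
    conn.close()                  # ...but no reply is ever sent

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket()
client.settimeout(2.0)
client.connect(listener.getsockname())
client.sendall(b"debit $100")
try:
    reply = client.recv(1024)     # connection closed: empty reply
except (socket.timeout, ConnectionError):
    reply = b""

t.join()
# reply is empty -- yet the request *was* processed. Retrying on
# another server would debit the account twice.
print(reply, processed)
```

The same reasoning holds one layer up: no acknowledgement protocol on top of TCP can remove this window, only shrink it or make the request idempotent.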
> I guess this is an academic conversation since it's not how 0MQ works,
> but I'm just trying to understand why you say there's always at least
> one request up in the air. Are we talking about different layers of
> the stack?
I believe the above explains the scenario.
>>> 2) Pros and cons of static registration (RFC 2782) vs. dynamic
>>> registration (DNS-SD).
>> I would say the static registration should be strongly preferred. The
>> rationale is that you want to know the location of an entity even though
>> it may not be running/online at the moment. In such a case messages can
>> be queued and sent once it becomes available.
> I think both could be desirable depending on what use cases you are
> considering. I am coming at it from the perspective of having a
> "fluid service bus". Let's say I have one node that is pushing
> capacity and I need to bring another node online that offers the same
> services to help out. With a dynamic registration (e.g. DNS-SD),
> clients could discover the new node and begin to load balance
> immediately (again in a gentlemanly fashion). Whereas, with a static
> DNS model you are at the mercy of TTLs and having the tools to
> publish, which may be just fine for certain applications.
Hm. I would assume that "worker" applications would connect rather than
bind, and thus wouldn't need to register their addresses, no?
Maybe it would be worth looking at a concrete use case.
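A minimal sketch of that topology (assumption: stdlib sockets stand in for a 0MQ PUSH/PULL pair, and the `dispatcher`/`worker` names are illustrative). The stable endpoint binds once; workers only connect, so bringing extra capacity online needs no registration step at all -- a new worker just connects and starts receiving work:

```python
import socket
import threading

dispatcher = socket.socket()
dispatcher.bind(("127.0.0.1", 0))   # the only address anyone must know
dispatcher.listen(5)
endpoint = dispatcher.getsockname()

results = []

def worker(name):
    s = socket.socket()
    s.connect(endpoint)             # connect, don't bind: no new address
    task = s.recv(1024)
    results.append((name, task))
    s.close()

threads = [threading.Thread(target=worker, args=("w%d" % i,))
           for i in range(2)]
for t in threads:
    t.start()

for i in range(2):                  # hand one task to each connection
    conn, _ = dispatcher.accept()
    conn.sendall(b"task-%d" % i)
    conn.close()

for t in threads:
    t.join()
print(sorted(results))
```

Only when a new *bound* endpoint appears does anything need registering, which is why the connect-side of the topology can scale without touching DNS.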
> There's also perhaps a hybrid model where certain infrastructure
> services are statically registered in DNS, but application services
> use an application-level registration system. For example, go to DNS
> to locate the "service directory", but the service directory is really
> just its own 0MQ application that knows the current topology of the
> service bus. The DNS-SD idea feels a bit more distributed since
> every node potentially replicates that knowledge of the current
> topology, but it could be done with an application of 0MQ as well.
> Regardless, my opinion is that all of this sits in a library of
> toolkits above 0MQ and perhaps only needs minimal support from the
>>> 3) Can a client detect that a service is local and switch over to
>>> inproc/ipc transport for optimization (or does the 0MQ kernel already
>>> attempt this?)
>> That's an interesting question. No, 0MQ does not do that at the moment.
>> But it would be nice if it could. How should it be done? Once again,
>> more research is needed.
> Yeah, I don't know either, really just throwing it out there. I can
> imagine a scenario where decoupled services need to transact with one
> another, but they don't realize they are deployed together on the same
> physical node or perhaps even in the same process space. Perhaps
> this is an application/deployment problem to solve, not a kernel
> problem since the basic protocols are already there.
My feeling is that this issue should be solved by the location service
rather than by the administrator. Not sure how, though.
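One way such a location service might do it, sketched below (assumption: the registry, its `(host, pid, port)` records, and the `pick_transport` helper are all hypothetical -- nothing like this exists in 0MQ). If the service lives in the same process, use inproc; same host, use ipc; otherwise fall back to tcp:

```python
import os
import socket

def pick_transport(service, registry):
    """Return a 0MQ-style endpoint string for `service`.

    registry maps service name -> (host, pid, port). inproc requires
    the same process, ipc the same host; otherwise use tcp.
    """
    host, pid, port = registry[service]
    if host == socket.gethostname() and pid == os.getpid():
        return "inproc://%s" % service
    if host == socket.gethostname():
        return "ipc:///tmp/%s" % service
    return "tcp://%s:%d" % (host, port)

registry = {
    "local-svc":  (socket.gethostname(), os.getpid(), 5555),
    "remote-svc": ("10.0.0.7",           1234,        5555),
}
print(pick_transport("local-svc", registry))   # inproc://local-svc
print(pick_transport("remote-svc", registry))  # tcp://10.0.0.7:5555
```

The hard part the sketch glosses over is keeping the registry consistent as processes move, which is exactly where the administrator-vs-location-service question comes back.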