[zeromq-dev] RE : Proposal for new features in ZMQ.
fabien.ninoles at ubisoft.com
Sun May 15 15:27:08 CEST 2011
> From: zeromq-dev-bounces at lists.zeromq.org
> [zeromq-dev-bounces at lists.zeromq.org] on behalf of Martin Sustrik
> [sustrik at 250bpm.com] Sent: 15 May 2011 04:32 To: ZeroMQ
> development list Subject: Re: [zeromq-dev] Proposal for new features in ZMQ
> On 05/15/2011 06:35 AM, Pieter Hintjens wrote:
> > On Sun, May 15, 2011 at 1:25 AM, Fabien Ninoles
> > <fabien.ninoles at ubisoft.com> wrote:
> >> Here is the feature set: Feature 1 - Add a new msg LABEL flag
> >> marking a frame as "labeled".
> > Part of redesigning the whole message envelope concept, I think. A
> > good idea.
> >> Feature 2 - Embedded flags inside the msg_t structure.
> > Definitely a good idea, it's hard to justify MORE being a socket
> > property. In czmq I experimented with this being a message property,
> > it works fine.
> >> Feature 3 - Add a timeout sockopt.
> > Nice idea.
Nice. Apart from everything said in the contributing section on the
site, do you have any guidelines on how to approach these first three?
Which repository should I clone?
> >> Feature 4 - Add a ready sockopt.
> > It'd be impossible IMO to cleanly implement the ready signalling we
> > currently do in the LRU pattern because it's tied into application
> > processing of data. You could do some tighter coordination between
> > the queues at both sides, with low HWM, and this gives you some kind
> > of 'ready'. But it won't be on a message-per-message basis afaics.
> This is a request for explicit ACKs. Pieter is right that if the ack
> is to be issued only after the message was processed by the app, you
> are going to run into a lot of complexity. This way lie distributed
> transactions et al.
I hesitated a lot over this feature because I cannot map it to anything
I know, neither an ACK nor anything like it. In fact, it is just the
same kind of "READY" request as in the LRU queue, but distributed to
all peers.
It also seems to me that it is quite simple to implement:
The writer_t pipe is marked active at attachment time. Once a message
is sent to it, the pipe is removed from the active pipes. Once a
message arrives on the associated reader_t pipe with the READY flag,
the associated writer_t pipe is marked active again. Any flaw in this?
> >> Feature 5 - The COLLECTOR socket.
> > Assumes READY can work...
> >> Most of the existing patterns would be greatly simplified using
> >> them, avoiding weird usage like a DEALER socket as the worker end
> >> point in LRU, and interesting new patterns could probably be
> >> built using them.
> > Note that DEALER allows asynchronous worker 'end points', necessary
> > if your 'end point' actually deals work out to single threaded
> > worker threads. In synchronous workers, we don't use DEALER sockets.
My bad here; I had to use a DEALER because the COLLECTOR cannot
predict the number of replies coming from one socket, so the endpoint
needs to send a second message to say that it is READY to accept a new
request. The READY flag fixes that, and would allow the use of a more
standard (IMHO) REP socket, both for the COLLECTOR pattern and the LRU
one. Wouldn't it be nice if the worker didn't need any specific change
to work with either a QUEUE, LRU or COLLECTOR device?
> No opinion on this. In any case, if you want to experiment with
> messaging patterns:
> 1. Define your own messaging pattern. That would give you your own
> vertical segment of the stack which you can experiment with without
> stepping on others' toes.
That's for sure.
> 2. Precisely define the use case you are trying to solve.
Send a request to multiple nodes and collect all replies within a
specific time. That's it.
> 3. Identify different roles the endpoints can play within the pattern.
> Create a socket type for each role.
COLLECTOR on the sender side, REP on the reply side. REP doesn't need
any special change, just the ability to transport the READY flag.
> 4. Specify exact behaviour of each socket type in terms of routing,
> applying backpressure, behaviour in case of failure etc.
- Outgoing: fan-out to READY sockets.
- Incoming: fair-queuing on READY sockets; drop all other messages.
- Send/Recv: one send, then multiple recvs until timeout or until all
  peers are active again.
> 5. Think about scaling. Is interjection of device into the middle of
> the topology possible?
The main reason for the introduction of the READY flag is a case of
"application scalability". The current COLLECTOR can only work with a
single entry point on all nodes. Although this can be seen as a good
thing (since each request should theoretically affect all nodes), often
I want my request to be sent only to a subset of nodes, which requires
me to add a "recipients" identifier to the upper protocol, which in
turn requires the nodes to be less anonymous than I would like (they
shouldn't care about the topology at all).
As a remark, the same feature could allow a worker to be part of
multiple LRU queues, treating them in a fair-queuing manner. This is
not always what you want, but it can lead to some interesting new
patterns if you can deal with this behavior. Also, since the READY
exchange is now part of the socket implementation, it can deal with
disconnection of transient sockets, and the LRU could be implemented
as a simple device.
If you want, I could try to implement an LRU socket, which would
simply be a DEALER socket with awareness of the READY flag.
Thanks for your attention,