[zeromq-dev] Preserving order of replies in a multi threaded server

Doron Somech somdoron at gmail.com
Sat Jan 30 19:46:24 CET 2016

IMO it is completely up to the client. Your server-side solution amounts
to saying “don’t send multiple requests”, and would give the same
performance. If you prefer client simplicity over performance, just send
requests synchronously; if you prefer performance, the client needs to
know how to handle out-of-order replies.

In HTTP the server doesn’t handle this either; it’s up to the browser.
In fact, only HTTP/2 introduced multiplexing multiple requests on a
single connection.
On Jan 30, 2016 18:52, "Tom Quarendon" <tom.quarendon at teamwpc.co.uk> wrote:

> I’m having difficulty understanding how I would preserve the relative
> order of replies to clients when implementing a multithreaded server
> where clients are allowed to pipeline requests (i.e. send multiple
> requests without waiting for a response each time), as for example in
> the XRAP protocol.
> The obvious way of implementing the server seems to be to have a ROUTER
> socket that you read the requests from, then pass the requests to a pool
> of threads for processing. On completion the threads pass back the
> responses, which are then forwarded to the original requester.
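The ROUTER pattern described above can be sketched with pyzmq. The key detail is that a ROUTER socket prepends an identity frame to every incoming request; keeping that frame alongside the request is what lets the reply be routed back to the right client. This is a minimal single-threaded sketch (no worker pool yet), and the `inproc://frontend` endpoint name is an assumption made for the example:

```python
import threading

import zmq  # pyzmq

ctx = zmq.Context.instance()
ADDR = "inproc://frontend"  # hypothetical endpoint for this sketch
ready = threading.Event()

def server():
    # ROUTER prepends the client's identity frame to each request;
    # sending that frame back first routes the reply to that client.
    router = ctx.socket(zmq.ROUTER)
    router.bind(ADDR)
    ready.set()
    for _ in range(2):
        identity, payload = router.recv_multipart()
        router.send_multipart([identity, payload.upper()])
    router.close()

t = threading.Thread(target=server)
t.start()
ready.wait()  # inproc requires bind before connect

client = ctx.socket(zmq.DEALER)
client.connect(ADDR)
client.send(b"hello")  # pipelined: both requests sent up front
client.send(b"world")
replies = [client.recv() for _ in range(2)]
client.close()
t.join()
ctx.term()
print(replies)
```

With a single server thread the replies come back in request order; the ordering problem described next appears only once the requests are handed to concurrent workers.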
> However, if you implement that and clients pipeline requests (send
> multiple requests without waiting for responses), there’s no guarantee
> that the responses get sent back in the same order. So the client sends
> request 1, then request 2. The server receives request 1 and gives it to
> thread 1, then receives request 2 and gives it to thread 2. Processing
> for request 1 takes longer than processing for request 2, so the
> response for request 2 gets sent to the client before the response for
> request 1. The client gets confused.
> It’s very unclear how you solve this. I think if I were doing this with
> normal TCP sockets, my main loop would do a “select” on all the
> currently connected client sockets. Having received a request from a
> client, I’d probably remove that socket from the poll list until the
> response is sent. That would fairly simply give rise to the right
> behaviour. I haven’t actually done this, and I don’t know how HTTP
> servers that allow request pipelining actually do it; maybe there are
> better ways.
> Anyway, with zeromq sockets I don’t have that control, I don’t think. It’s
> not at all obvious what I would do.
> Can I somehow tell a zeromq socket to ignore one of the internal queues
> for a bit? I don’t think I can, but I may just not know what I’m looking
> for.
> I did wonder about having some kind of internal loopback queue and a
> list of “active” clients: if I get a new message from one of those
> clients, put it on that “loopback” queue. However I would appear to need
> one queue per client, which seems excessive and also duplicates the
> internal zeromq buffers. If you only have one, then you go into a tight
> loop reading a message from it and putting it back, because you aren’t
> free to process another message for that client.
> If there were a sequence number in the messages, I could push the
> problem to the client, leaving it up to the client to sort out messages
> coming back in the wrong order. However, that complicates the client
> code, means you can’t use a “simple” zeromq API, and that logic then has
> to be reimplemented for each client type. So I’m not sure that really
> helps.
> Any ideas?
> Thanks.
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev