[zeromq-dev] Implementing efficient recv timeouts [was Re: zmq::poll timeout returns immediately on MacOS/X 10.5]
matt_weinstein at yahoo.com
Fri Jun 11 16:13:07 CEST 2010
I should have added that I'm going to create a third "ticker" socket
which sends a packet whenever ZMQ_TIMEOUT needs to wake up and do its
work:

items [2].socket = timers_;
items [2].fd = 0;
items [2].events = ZMQ_POLLIN;
items [2].revents = 0;
I'm initially going to just run a straight usleep(), but will later
build a full sow&piglet timer (next week :-) )
I'm *assuming* ZMQ is using poll() way down in the zmq_poll() layers
for efficiency. What are you using for inproc: devices -- is it as
efficient? I'm going to be harnessing these together, and if it's
going to cost some efficiency I'm going to have to find another way...
On Jun 11, 2010, at 9:14 AM, Matt Weinstein wrote:
> I'm going to recycle the old question, which could be entitled
> "implementing efficient recv() timeouts".
> My basic topology is the client-server setup shown in the sample
> [client] [ZMQ_REQ] -[inproc]-- [ZMQ_QUEUE] --- [ZMQ_XREQ] ~network~
> [ZMQ_XREP] -- [ZMQ_QUEUE] --- [inproc] -- [ZMQ_REP] -- [server thread]
> (don't fault me if I mislabel a few things, I'm on a train ;) )
> Anyway, after thinking about this overnight, one possible approach
> would be to create a ZMQ_TIMEOUT device, which would be inserted in
> place of the ZMQ_QUEUE:
> [client] [ZMQ_REQ] -[inproc]-- [ZMQ_TIMEOUT_QUEUE] --
> (It can subsume the function of ZMQ_QUEUE, otherwise I would push it
> onto the stack).
> This device would have some intelligence about the flow of REQs and
> REPs through it, and every N ms would synthesize an empty (or error)
> response back to the waiting REQ by constructing a synthetic
> X packet. Later, when the real REP came through, the device would
> drop it.
> One problem that results is that the receiving socket would be in the
> "ready" state after receiving the response, but the rest of the
> network would not reflect that. If another request were sent with the
> same identity it could trounce state elsewhere.
> One way to handle this might be to append out-of-band status to the
> identity packet in the XREQ/XREP protocol, which would tell the
> receiving socket that e.g. this is an error message, rather than a
> message that causes a state change.
> In general, it would be nice if you could add such trailing bytes to
> the identity to allow for general out-of-band data as part of an XREQ/
> XREP exchange, which could be useful for QoS etc. It might require
> inserting a delimiter at the end of the identity if the length is not
> globally fixed.
> Another approach might be to have the ZMQ_QUEUE remap the requests to
> a new identity, but that's just painful.
> Otherwise, I think you would have to build the device into zmq where
> it could directly mangle the pipes and sockets :-)
> Also, I'm presuming identities are not recycled, meaning I can mark an
> identity as "down permanently" once a socket closes. That way I can
> track all of the pending replies without worrying about the
> application "walking around me" to another socket and reusing the
> identity.
> I've just started looking at the zmq kernel in depth; does anyone have
> any thoughts about how practical this approach is?
> On Jun 11, 2010, at 8:15 AM, Martin Sustrik wrote:
>> Marcin Gozdalik wrote:
>>> Avg. us/call Elapsed Name
>>> 0.064833 0.648334 clock_gettime(2/CLOCK_REALTIME)
>>> 0.009635 0.096351 clock_gettime(2/CLOCK_REALTIME_COARSE)
>> That's a pretty good performance! I wouldn't hesitate to use it in
>> zmq_poll. The impact on non-Linux systems is still unknown though.
>> zeromq-dev mailing list
>> zeromq-dev at lists.zeromq.org