[zeromq-dev] Queue full indication when queue is not full

Pieter Hintjens ph at imatix.com
Fri Feb 14 10:19:19 CET 2014


Hi Dave,

Feel free to test the current libzmq master, with and without that
patch, to see if it helps. We don't backport improvements to 3.2.x
stable, only bug fixes. So any new behavior would come in a 4.1.x
release.

-Pieter

On Fri, Feb 7, 2014 at 7:27 PM,  <davewalter at comcast.net> wrote:
> Hi folks,
>
> I am currently using 0MQ 3.2.3 on Red Hat Linux 6.4. I have a ZRE application
> in which a server reads requests from a ROUTER socket and sends replies back
> to the client's DEALER socket, all in non-blocking mode. I have set the
> RCVHWM and SNDHWM options to 100 on all sockets.
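>
> To make that concrete, the server side looks roughly like this (a simplified
> sketch, not my real code; the endpoint is made up and error handling is
> omitted):
>
>     #include <zmq.h>
>
>     int main (void)
>     {
>         void *ctx = zmq_ctx_new ();
>         void *router = zmq_socket (ctx, ZMQ_ROUTER);
>
>         /* HWM options are set before bind so they take effect. */
>         int hwm = 100;
>         zmq_setsockopt (router, ZMQ_RCVHWM, &hwm, sizeof (hwm));
>         zmq_setsockopt (router, ZMQ_SNDHWM, &hwm, sizeof (hwm));
>         zmq_bind (router, "tcp://*:5555");
>
>         /* Requests are read non-blocking. */
>         zmq_msg_t request;
>         zmq_msg_init (&request);
>         int rc = zmq_msg_recv (&request, router, ZMQ_DONTWAIT);
>         (void) rc;
>
>         zmq_msg_close (&request);
>         zmq_close (router);
>         zmq_ctx_destroy (ctx);
>         return 0;
>     }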
>
> There is a use case where a client sends many small requests to the server,
> but the server may be busy and unable to service them for a period, so the
> requests queue up. The sending client never hits its HWM because the TCP
> buffers are large relative to these small messages, so many hundreds of
> messages can be queued in flight.
>
> When the server is finally able to service the requests, it does so very
> quickly, draining the requests from its receive queue and sending replies
> back to the client. This "whiplash" effect can cause the send to fail with
> an EAGAIN error after only 70 or so replies have been sent, indicating that
> the send queue HWM has been reached. I found that if I insert a 1 ms wait
> followed by a retry after an HWM error, the retry almost always succeeds
> (sketch below).
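>
> The workaround looks roughly like this (a sketch; send_with_retry is just an
> illustrative helper name, and the real code retries the whole multipart
> reply):
>
>     #include <zmq.h>
>     #include <errno.h>
>     #include <unistd.h>
>
>     /* Try a non-blocking send; if EAGAIN is reported (HWM supposedly hit),
>        wait ~1 ms and try once more. */
>     static int send_with_retry (void *socket, zmq_msg_t *msg, int flags)
>     {
>         int rc = zmq_msg_send (msg, socket, flags | ZMQ_DONTWAIT);
>         if (rc == -1 && zmq_errno () == EAGAIN) {
>             usleep (1000);
>             rc = zmq_msg_send (msg, socket, flags | ZMQ_DONTWAIT);
>         }
>         return rc;
>     }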
>
> On studying the 0MQ code, I found there is a throttling mechanism in
> socket_base.cpp that can report an HWM error even when the send queue is
> not actually full. Apparently others have encountered this situation as
> well; it is discussed at these links:
>
> https://zeromq.jira.com/browse/LIBZMQ-286
> http://groups.crossroads.io/groups/crossroads-dev/messages/topic/jJ5x65jr3FTaiZ8Fjwqe5#post-iZXqUrsBSwluZpzjCYA6l
>
> In the second link (May 11, 2012) Martin Sustrik acknowledged the issue
> and offered a patch for it. It doesn't appear that this patch has actually
> been introduced to the 3.2.x stable release. (I tested with V3.2.4 and saw
> the same behavior.)
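>
> For reference, my reading of the throttled command processing in
> socket_base.cpp is roughly the following. This is a paraphrased,
> self-contained sketch, not the actual code: the function name is
> illustrative, and the 3,000,000-tick constant is the max_command_delay
> value I see in config.hpp (about 1 ms on a 3 GHz CPU).
>
>     #include <stdint.h>
>
>     /* The sending thread only processes pending commands (e.g. "the pipe
>        was drained, HWM cleared") if enough TSC ticks have passed since the
>        last time it did so. Until then, zmq_msg_send() works from stale pipe
>        state and can return EAGAIN even though the queue has room again. */
>     enum { max_command_delay = 3000000 };   /* TSC ticks, per config.hpp */
>
>     static uint64_t last_tsc;   /* TSC at the last command processing */
>     static int ticks;           /* sends since the last command processing */
>
>     static int should_process_commands (uint64_t tsc_now)
>     {
>         if (tsc_now && ticks != 0) {
>             if (tsc_now >= last_tsc &&
>                 tsc_now - last_tsc <= max_command_delay)
>                 return 0;               /* throttled: skip this time */
>             last_tsc = tsc_now;
>         }
>         ticks = 0;
>         return 1;                       /* go ahead and process commands */
>     }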
>
> Has this behavior been fixed in some other way? I understand that the
> throttling can be modified by reducing the value of the max_command_delay
> configuration variable. What effect does lowering that value have on
> 0MQ performance?
>
> Thanks for any help,
> Dave Walter
>
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>


