[zeromq-dev] How many messages are in queue now?

Thomas Rodgers rodgert at twrodgers.com
Fri Apr 17 17:12:38 CEST 2015


Patches welcome.

On Friday, April 17, 2015, Ilja Golshtein <ilejncs at narod.ru> wrote:

> Yes, this is true.
>
> If a sysadmin detects that a disk is 90% full, she notifies users and
> does other things to obtain more room, even though it is quite possible
> that the disk is only 89% full at that moment.
> So the ability to calculate disk usage is "a likely source of confusion
> and gnashing of teeth".
> Right.
>
>
> 17.04.2015, 17:06, "Thomas Rodgers" <rodgert at twrodgers.com>:
>
> I suppose then it is a question of general utility. Even with ZMQ_PAIR, it
> is still at best an estimate, likely to have changed (perhaps
> considerably) between the time you query it and the time you do something
> based on that information. Even under the best of circumstances it seems of
> such limited utility (and a likely source of confusion and gnashing of
> teeth) that I don't see the value in adding the feature.
>
> On Fri, Apr 17, 2015 at 9:00 AM, Ilja Golshtein <ilejncs at narod.ru> wrote:
>
> Thomas,
>
> Agreed that it makes perfect sense for multiple producers and consumers,
> but in my case it is ZMQ_PAIR.
>
> Thanks.
>
> 17.04.2015, 16:49, "Thomas Rodgers" <rodgert at twrodgers.com>:
>
> At the point a message is placed on the queue, the queue is either at HWM
> or has space for at least one message. This can be evaluated atomically,
> claiming the 'slot' under the HWM for the message being placed on the
> queue. Any other threads attempting to send synchronize with this
> operation, so the HWM is respected with no overshoot.
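>
> A toy sketch of that claim-a-slot step (hypothetical, not libzmq's actual
> internals), with the queue depth held in a std::atomic:
>
>     #include <atomic>
>
>     std::atomic<int> depth{0};  // current queue depth
>     const int HWM = 1000;       // high-water mark
>
>     // Claim a slot below HWM, or report the queue full. The
>     // compare-exchange makes "check and claim" a single atomic step,
>     // so concurrent senders can never overshoot the HWM.
>     bool try_claim_slot() {
>         int cur = depth.load(std::memory_order_relaxed);
>         while (cur < HWM) {
>             if (depth.compare_exchange_weak(cur, cur + 1))
>                 return true;    // slot is ours; enqueue the message
>             // CAS failed: cur was refreshed with the current value; retry.
>         }
>         return false;           // at HWM; block or fail the send
>     }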
>
> Any observation of an atomic count, as it is being changed, reflects the
> state of the count at some point in the past. So you could have a HWM of
> 1000, read a count of 850, make some decision, yah, cool to send, <90%, and
> still have the subsequent send fail, because in the time it took you to do
> all of that, another thread may have enqueued 30 messages, and yet another
> thread may have tried to enqueue 50 more.
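>
> In code, that broken check-then-send pattern is (reusing the hypothetical
> depth counter above; send_message() is illustrative):
>
>     // RACY: depth can change between the load and the send.
>     if (depth.load() < HWM * 9 / 10) {  // "cool to send, <90%"
>         // ...other threads may enqueue 80 messages right here...
>         send_message();                 // may now run into the HWM
>     }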
>
> Any such observations of queue depth with multiple producers and consumers
> are, at best, estimates of queue state at some point in the past, and it is
> very difficult IME to build reliable logic around that information.
>
> On Friday, April 17, 2015, Ilja Golshtein <ilejncs at narod.ru> wrote:
>
> Hello Pieter,
>
> thank you for your answer.
>
> I am in the process of reading http://zguide.zeromq.org/hx:chapter7 , but
> an atomic counter seems the more natural choice for inproc so far.
>
> Could you please explain why it is possible to detect that we are at 100%
> of HWM, but not possible to detect that we are at, e.g., 90%?
>
> 17.04.2015, 11:54, "Pieter Hintjens" <ph at imatix.com>:
> > It was never possible, because pipes are written and read
> > asynchronously. You cannot measure the free space except by
> > stopping everything.
> >
> > The workaround is to use credit-based flow control. I've a section in
> > the Guide that explains how this works. You can, in effect, know what %
> > of the pipe from sender to receiver (including all ZeroMQ and network
> > buffers) is filled.
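> >
> > In sketch form (helper names are illustrative, not the Guide's exact
> > code): the receiver grants the sender N credits up front, the sender
> > spends one credit per message, and the receiver hands credit back as
> > it consumes:
> >
> >     // sender side
> >     int credit = CREDIT;              // granted up front by receiver
> >     while (running) {
> >         while (credit == 0)
> >             credit += recv_credit();  // wait for credit to come back
> >         send_data();
> >         --credit;
> >     }
> >
> >     // receiver side
> >     while (running) {
> >         process(recv_data());
> >         send_credit(1);               // return one credit per message
> >     }
> >
> > At any moment the sender knows at most CREDIT - credit messages are in
> > flight, which is the "% filled" figure above.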
> >
> > -Pieter
> >
> > On Wed, Apr 15, 2015 at 4:30 PM, Ilja Golshtein <ilejncs at narod.ru>
> > wrote:
> >>  Hello.
> >>
> >>  I used to think that it is possible to retrieve the number of queued
> >>  messages per socket in recent versions of 0mq, but I failed to find a
> >>  corresponding getsockopt in 4.0.5.
> >>
> >>  Use case: I need to alter the behavior of my application (drop certain
> >>  types of messages) if a queue is at, say, 90% of HWM.
> >>
> >>  Please advise whether this facility is implemented or planned, or
> >>  suggest a workaround more elegant than an atomic variable (it is
> >>  inproc) maintained by the reader and writer threads, which is what I
> >>  plan to do, roughly as sketched below.
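> >>
> >>  Rough sketch of the plan (inproc PAIR; writer counts up, reader counts
> >>  down; names are illustrative):
> >>
> >>      std::atomic<int> queued{0};
> >>
> >>      // writer thread: count each message actually handed to 0mq
> >>      if (zmq_send(sock, buf, len, 0) >= 0)
> >>          ++queued;
> >>
> >>      // reader thread: count each message taken off the queue
> >>      if (zmq_recv(sock, buf, sizeof buf, 0) >= 0)
> >>          --queued;
> >>
> >>      // writer's policy: drop droppable traffic near 90% of HWM
> >>      bool near_full = queued.load() >= HWM * 9 / 10;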
> >>
> >>  Thanks.
> >>
> >>  --
> >>  Best regards
> >>  Ilja Golshtein
>
> --
> Best regards
> Ilja Golshtein
>
> --
> Best regards
> Ilja Golshtein
>
>
>
> --
> Best regards
> Ilja Golshtein