[zeromq-dev] High water mark notification for publisher

Edwin Amsler edwinamsler at thinkboxsoftware.com
Sat Sep 22 01:28:33 CEST 2012

As far as I know, it doesn't block. I also think that we should make 
sure that slow subscribers don't cause trouble for the others. If we 
notify only when all queues are full, then the application code would be 
bound by the fastest subscriber, not the slowest. In my example code I 
put a sleep(), but it could just as easily be a spin lock.

My application pulls data from a number of other sources, so it's costly 
to generate traffic, and it's a shame that >90% of it gets thrown out.
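To make the proposal concrete, here is a plain-Python sketch of the behavior I'm after (this stands in for ZeroMQ's internal per-subscriber queues; the Publisher class and HWM constant are illustrative names of mine, not anything in the ZeroMQ API): send() still delivers to whichever queues have room, but tells the caller when *every* queue dropped the message, so the caller can back off instead of generating work that is silently thrown away.

```python
# Pure-Python sketch (NOT ZeroMQ) of the proposed "notify when all
# queues are full" behavior. Each subscriber has its own bounded queue,
# mirroring the per-peer outgoing queues described in this thread.
from collections import deque

HWM = 3  # illustrative per-subscriber high water mark


class Publisher:
    def __init__(self, n_subscribers):
        # One independent bounded queue per subscriber.
        self.queues = [deque() for _ in range(n_subscribers)]

    def send(self, msg):
        """Deliver msg to every queue with room.

        Returns True if at least one queue accepted the message,
        False if every queue was at its high water mark (i.e. the
        message was dropped everywhere and the caller should wait).
        """
        delivered = False
        for q in self.queues:
            if len(q) < HWM:
                q.append(msg)
                delivered = True
        return delivered


pub = Publisher(2)
for i in range(HWM):
    assert pub.send(i)        # room in every queue, delivered
assert not pub.send("late")   # all queues at HWM: caller backs off
pub.queues[0].popleft()       # the fastest subscriber drains one
assert pub.send("retry")      # accepted again: bound by the fastest
```

Note that send() only reports failure when the fastest subscriber's queue is also full, which is exactly the "bound by the fastest, not the slowest" property above: one slow subscriber still just misses messages, without stalling anyone else.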

On 21/09/2012 6:22 PM, Justin Karneges wrote:
> Another behavior that may be acceptable is blocking the sender if all outgoing
> queues are full. I believe the only reason it doesn't block today is because
> you wouldn't want one slow subscriber causing all other subscribers to stop
> receiving messages.
> But there may be apps that rely on writes to pub never blocking (it is true
> that it never blocks today, correct?)
> On Friday, September 21, 2012 05:57:29 PM Edwin Amsler wrote:
>> Hey folks,
>> I brought up this problem a few months back, but had to move onto more
>> pressing things at the time.
>> It was mentioned that under the hood, the PUB-SUB system had individual
>> outgoing queues, each with their own water mark counters. What happens
>> to a message when all queues are full? It would be ideal to know if the
>> message I pass in will be dropped by all queues due to the high water
>> mark. I don't think it was implemented then, but it would certainly
>> solve my particular problem of over-tasking the publishing side.
>> I have example code here:
>> http://pastebin.com/gjfuNdNF
>> The idea would be that if that ZeroMQ's publisher has no room in the
>> internal buffers, that I would be notified the message wasn't sent. In
>> the test application, I would then be able to wait before trying again.
>> This way the application code which publishes messages isn't creating
>> needless work which is secretly thrown away.
>> Thoughts?
>> _______________________________________________
>> zeromq-dev mailing list
>> zeromq-dev at lists.zeromq.org
>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
