[zeromq-dev] Distributed Q with **lots** of consumers

Shane Spencer shane at bogomip.com
Wed Nov 14 19:18:06 CET 2012

Yup... it's a ZeroMQ list but I'm gonna deviate a bit:

Not sure how helpful this will be... but I do a lot of this sort of
work using MongoDB and Redis.  I have workers nibble away at a queue
(in variable batch sizes).

Here's an article I wrote about it a while back:


And of course check out Redis's blocking pop operations (BLPOP/BRPOP).
You can put expirable data in a giant list, have producers LPUSH onto
it, and let the workers nibble it away and modify it.

- Shane

On Mon, Nov 12, 2012 at 1:58 PM, Sean Donovan <sdonovan_uk at yahoo.com> wrote:
> Any suggestions for implementing the following in ZMQ?
> Imagine a single Q containing millions of entries, which is constantly being
> added to.  This Q would be fully persistent, probably not managed by ZMQ,
> and run in its own process.
> We would like N workers.  Those workers need to start/stop ad-hoc, and
> reconnect to the Q host process.  Each worker would take a single item from
> the Q, process, acknowledge completion, then repeat (to request another
> item).  Processing time for each task is 3ms+ (occasionally minutes).
> Because of the variance in compute time it is important that the workers
> don't pre-fetch/cache tasks.  As an optimization, we'll add a heuristic so
> we can batch short-running tasks together (but, we'd like the control -- a
> load-balancing algorithm wouldn't know how to route efficiently, unless it
> could take hints).
> Need a pattern that would allow us to scale to 100s of workers.
> Sean Donovan
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
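The no-prefetch requirement Sean describes maps onto ZeroMQ's load-balancing pattern: each worker holds a REQ socket, explicitly signals readiness, and receives exactly one task (or one small batch) from a ROUTER broker. A rough sketch under assumed details — the tcp://localhost:5555 endpoint, the b"READY" handshake, and the batch() heuristic are all hypothetical, not anything from Sean's system:

```python
# Pull-based worker: ask for work, get one batch, process, repeat.
import json

def batch(tasks, budget_ms=50):
    """Illustrative heuristic: pack short tasks into a batch until a
    time budget is spent; long tasks always ship alone."""
    out, cur, spent = [], [], 0
    for est_ms, task in tasks:
        if est_ms >= budget_ms:
            out.append([task])          # long task: its own batch
            continue
        if spent + est_ms > budget_ms and cur:
            out.append(cur)             # budget exhausted: close batch
            cur, spent = [], 0
        cur.append(task)
        spent += est_ms
    if cur:
        out.append(cur)
    return out

def worker_loop(sock):
    while True:
        sock.send(b"READY")             # explicit request: no prefetch
        tasks = json.loads(sock.recv()) # broker replies with one batch
        for task in tasks:
            pass                        # ... do the real work here ...

if __name__ == "__main__":
    import zmq  # pyzmq; only needed when actually running a worker
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://localhost:5555")
    worker_loop(sock)
```

Because a REQ socket cannot send a second request before receiving a reply, a worker can never hold more than one outstanding batch, which is exactly the no-prefetch property the original question asks for; the broker can apply batch() (or any hint-driven variant) when composing each reply.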
