[zeromq-dev] Distributed Q with **lots** of consumers

Andrew Hume andrew at research.att.com
Tue Nov 13 07:08:37 CET 2012

Is it possible to estimate the runtime for an item?

And what is the metric you are trying to optimise?
Is it average latency? Or total throughput? Or minimal idle time?

On Nov 12, 2012, at 3:58 PM, Sean Donovan wrote:

> Any suggestions for implementing the following in ZMQ?
> Imagine a single Q containing millions of entries, which is constantly being added to.  This Q would be fully persistent, probably not managed by ZMQ, and would run in its own process.
> We would like N workers.  Those workers need to start/stop ad-hoc, and reconnect to the Q host process.  Each worker would take a single item from the Q, process, acknowledge completion, then repeat (to request another item).  Processing time for each task is 3ms+ (occasionally minutes).
> Because of the variance in compute time, it is important that the workers don't pre-fetch/cache tasks.  As an optimization, we'll add a heuristic so we can batch short-running tasks together (but we'd like the control -- a load-balancing algorithm wouldn't know how to route efficiently unless it could take hints).
> We need a pattern that would allow us to scale to 100s of workers.
> Sean Donovan
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
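
The pattern Sean describes (each worker takes exactly one task, processes it, acknowledges, then asks for the next) maps naturally onto a ROUTER broker with REQ workers: a REQ socket's strict send/recv lockstep means a worker can never have a second task prefetched for it. Below is a minimal sketch of that idea in pyzmq. All names here (TASKS, broker, worker, the inproc address, the READY/DONE framing) are illustrative assumptions, not anything agreed on in this thread, and the persistent Q is stood in for by a plain Python list.

```python
import threading
import zmq

ctx = zmq.Context.instance()
TASKS = [b"task-%d" % i for i in range(10)]  # stand-in for the persistent Q
DONE = b"DONE"

def broker(router, n_workers):
    """Hand out exactly one task per worker request; collect acknowledgements."""
    pending = list(TASKS)
    acks, finished = [], 0
    while finished < n_workers:
        # REQ peers arrive as [identity, empty delimiter, payload]
        ident, _, msg = router.recv_multipart()
        if msg != b"READY":
            acks.append(msg)  # a completion acknowledgement
        if pending:
            router.send_multipart([ident, b"", pending.pop(0)])
        else:
            router.send_multipart([ident, b"", DONE])
            finished += 1
    return acks

def worker():
    req = ctx.socket(zmq.REQ)  # REQ = strict lockstep, so no prefetching
    req.connect("inproc://queue")
    req.send(b"READY")  # ask for the first task
    while True:
        task = req.recv()
        if task == DONE:
            break
        # ... process the task here (3ms to minutes) ...
        req.send(b"done:" + task)  # ack completion, implicitly request the next task
    req.close()

router = ctx.socket(zmq.ROUTER)
router.bind("inproc://queue")  # bind before workers connect
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
acks = broker(router, 3)
for t in threads:
    t.join()
router.close()
```

Workers can start, stop, and reconnect ad hoc, since the broker only ever addresses whichever identity just asked for work; batching hints could ride in the READY/ack messages so the broker can hand several short tasks to one worker at once.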

Andrew Hume
623-551-2845 (VO and best)
973-236-2014 (NJ)
andrew at research.att.com
