[zeromq-dev] Is there already native "high level" connection pool?

Pieter Hintjens ph at imatix.com
Tue Nov 27 15:04:00 CET 2012

OK, you're still not explaining the use case, and still focussing on
the 0MQ aspects instead of the actual problem.

I assume (since you don't say clearly) that you want to distribute
work, rather than duplicate data. So the pattern from web server to
client is round-robin distribution?
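(If that's what you want, a PUSH socket gives you round-robin distribution to connected peers for free. A minimal sketch, assuming pyzmq and using inproc endpoints purely for illustration:)

```python
# Minimal sketch: PUSH deals messages out round-robin to connected PULL
# peers. Assumes pyzmq is installed; inproc keeps it in one process.
import zmq

ctx = zmq.Context.instance()

push = ctx.socket(zmq.PUSH)
push.bind("inproc://work")

pulls = []
for _ in range(2):
    p = ctx.socket(zmq.PULL)
    p.setsockopt(zmq.RCVTIMEO, 1000)  # fail fast rather than block forever
    p.connect("inproc://work")
    pulls.append(p)

# Four jobs, two workers: round-robin means two jobs each.
for i in range(4):
    push.send_string("job-%d" % i)

received = [[p.recv_string() for _ in range(2)] for p in pulls]
print(received)
```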

Now it also seems you have a long stream of data, but you don't give
figures. How many HTTP clients will you have? How large is one HTTP
POST? What is the response to the HTTP client? Any? None?

You speak of latency and reliability requirements. What are they? 1
second? 1 millisecond? 1 nanosecond? What kind of reliability are
you talking about? The words don't mean anything unless you attach
them to real (sane) figures.

What I assume (but you don't say, which makes this inefficient) is that
you want to send one HTTP POST to one client, but you don't want to
handle it all in memory at once.

Further I assume you're dealing with Internet clients, so your real
bottleneck is going to be the external network.

So, design:

- shove every HTTP POST request to disk, unzip as you go.
- when you have a complete POST, send the file information (path) as
one message to a client to process
- finished
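Something like this (a pyzmq sketch of the design above; the endpoint, helper and file names are made up, and the "incoming" chunks are simulated):

```python
# Sketch: spool the gzip'd POST to disk, unzipping as it arrives, then
# hand a worker just the file path as one message. Assumes pyzmq.
import gzip
import os
import tempfile
import zlib
import zmq

ctx = zmq.Context.instance()
push = ctx.socket(zmq.PUSH)        # web-server side: distributes work
push.bind("inproc://posts")
pull = ctx.socket(zmq.PULL)        # worker side
pull.setsockopt(zmq.RCVTIMEO, 1000)
pull.connect("inproc://posts")

def spool_post(chunks):
    """Write incoming gzip'd chunks to a temp file, unzipping as we go."""
    decomp = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)  # gzip framing
    fd, path = tempfile.mkstemp(suffix=".post")
    with os.fdopen(fd, "wb") as f:
        for chunk in chunks:
            f.write(decomp.decompress(chunk))
        f.write(decomp.flush())
    return path

# Simulate one POST body arriving gzip'd, in two pieces.
body = b"path=/feed&payload=hello"
compressed = gzip.compress(body)
chunks = [compressed[:10], compressed[10:]]

path = spool_post(chunks)
push.send_string(path)             # one message: just the path

# The worker receives the path and reads the complete POST from disk.
got = pull.recv_string()
with open(got, "rb") as f:
    data = f.read()
os.remove(got)
```

Only the path crosses the socket, so there's no envelope/data interleaving problem and nothing large held in memory.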

And then measure the performance with realistic clients, and see where
it's too slow.

Again, without some more details about what you are actually trying to
do (and not *how*), real figures, etc. there's really no way to give a
more accurate answer.


On Tue, Nov 27, 2012 at 10:33 PM, Stefan de Konink <stefan at konink.de> wrote:
> On 11/27/12 13:49, Pieter Hintjens wrote:
>> Please explain what you are actually trying to achieve, not in terms
>> of 0MQ but in terms of the actual overall problem, and we'll be able
>> to help more.
> I would like to send an incoming HTTP POST to an undefined number of
> clients on the internet, with an envelope (the HTTP path) and the HTTP
> POST as data. The two constraints of this process are minimizing latency
> and guaranteed delivery.
> In order to handle large HTTP POSTs, each incoming HTTP request is
> compressed using gzip. When the request comes in, an envelope is sent
> with SND_MORE. The HTTP POST is compressed on the fly: each iteration
> the webserver receives data on the socket for that request, gzip is
> called, and the partial result is sent over a ZeroMQ socket using
> SND_MORE. When the HTTP POST is completely processed, no SND_MORE flag
> is appended.
> In order to handle concurrent requests, the above process creates a new
> socket for each HTTP POST; done any other way, concurrency could
> interleave the envelope and the first data packet after it, which are
> already two calls. Creating a new socket per request doesn't scale
> - it does solve my problem, but it is not elegant.
> A potential solution is buffering the compressed request, then sending
> the envelope and data one after the other. Still, the ZeroMQ socket
> would have to be locked for that, or a per-thread socket created. The
> solution I would prefer is reusing sockets from a pool. It seems that
> this functionality is not available yet as an abstract solution.
> Stefan
