[zeromq-dev] max_app_threads = 512
matt_weinstein at yahoo.com
Sun Jun 13 14:26:03 CEST 2010
512 is actually pretty generous.
In a closed source solution you'd probably just have to call
zmq_set_max_threads(N) prior to creating any contexts.
My problem is that I'm being called downward by an opaque system, so I
don't have a design-time limit on the maximum number of threads.
Meanwhile, OS X started complaining about running out of fds (ulimit
and/or sysctl should fix that :-) ), and I started wondering whether
inproc fds are best in this scenario. If I have 5,000 REQ threads
waiting on an inproc: socket with an XREP in a queue device, will this
end up being a hog on CentOS?
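For reference, a minimal sketch of checking and raising the per-process fd limit mentioned above (the 4096 value is illustrative; on OS X the hard ceiling itself may also need a sysctl/launchctl bump):

```shell
# Inspect the current limits on open file descriptors.
ulimit -Sn    # soft limit (what the process actually gets)
ulimit -Hn    # hard ceiling the soft limit may be raised to

# Try to raise the soft limit for this shell; fails if it exceeds the hard limit.
ulimit -n 4096 2>/dev/null || echo "hard limit too low; raise it via sysctl/launchctl first"
```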
So... I've begun thinking it *might* be cleaner and more efficient to
send requests directly into a raw XREQ socket (thank you for having
that), and use per-thread mutex/condition variables, so that the
timeout is handled locally. This is also neater than wedging in a
zmq_socket. I built the raw XREQ pipeline code last night, and it runs
faster than inproc, though I'm not sure why yet. I haven't built the
GUID matching and distribution code yet, which may have a performance
impact because it needs thread-safe containers, a queue, and an extra
service thread.
Your thoughts are welcome... :-)
On Jun 12, 2010, at 11:59 PM, Martin Sustrik wrote:
> Matt Weinstein wrote:
>> I guess this is less dynamic than it used to be ... :-)
>> I'm being called inward from an application with an unknown number of
>> threads, definitely > 1000.
>> Any harm in punching this up to 4096?
> No. The overhead per thread added is 8 bytes of memory allocated.
> I've used 512 just because "nobody uses more than 512 threads, except
> for those using supercomputers and those are presumably going to build
> in a custom way anyway".
> Looks like I was wrong. Would it be useful to increase the constant in
> future releases? And if so, what should the new value be?