[zeromq-dev] max_app_threads = 512

Martin Sustrik sustrik at 250bpm.com
Mon Jun 14 08:43:24 CEST 2010


Matt Weinstein wrote:
> Martin -
> 
> 512 is actually pretty generous.
> 
> In a closed source solution you'd probably just have to call  
> zmq_set_max_threads(N) prior to creating any contexts.

It can be a context creation parameter (as it used to be in 2.0.6). 
What changed is that back then adding more application threads was 
rather expensive (memory usage growing quadratically) even if the 
threads were not used; now the resources consumed by a single unused 
thread amount to 8 bytes. That makes it possible to allocate a large 
number of them in advance and stop bothering users with it.
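For comparison, the old way looked roughly like this -- the limit passed 
as the first argument of the 2.0.6-style zmq_init (the numbers are 
placeholders; check zmq.h for the exact signature in your version):

    #include <zmq.h>

    int main (void)
    {
        /*  2.0.6-style context creation: app_threads, io_threads, flags.
            Here the application promises to use the context from at most
            512 threads and asks for a single I/O thread.  */
        void *ctx = zmq_init (512, 1, 0);
        if (!ctx)
            return 1;

        /*  ... create sockets, do the actual work ...  */

        zmq_term (ctx);
        return 0;
    }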

> My problem is that I'm being called downward by an opaque system, so I  
> don't have a design limit of max# of threads.

You just have to choose one, however large.

> Meanwhile, OS X started complaining about running out of fds (ulimit  
> and/or sysctl should fix that :-) ), and I started wondering whether  
> inproc fds are best in this scenario.  If I have 5,000 REQ threads  
> waiting on an inproc: socket with an XREP in a queue device, will this  
> end up being a hog in CentOS?

Well, there's a socketpair associated with each application thread. I 
have no idea how badly 5,000 socket pairs would hog the system.
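
Just to make sure we're talking about the same topology -- each 
application thread owning a REQ socket connected over inproc to an XREP 
frontend, with a queue device shoveling messages to an XREQ backend -- 
here's a rough sketch (I'm assuming zmq_device() is available in your 
build; the standalone zmq_queue binary does the same job, and the tcp 
endpoint is made up):

    #include <zmq.h>
    #include <string.h>
    #include <pthread.h>

    /*  Each application thread owns one REQ socket connected over inproc.
        Internally that costs one socketpair per thread.  */
    static void *worker (void *ctx)
    {
        void *req = zmq_socket (ctx, ZMQ_REQ);
        zmq_connect (req, "inproc://requests");

        zmq_msg_t msg;
        zmq_msg_init_size (&msg, 5);
        memcpy (zmq_msg_data (&msg), "hello", 5);
        zmq_send (req, &msg, 0);
        zmq_msg_close (&msg);

        zmq_msg_init (&msg);
        zmq_recv (req, &msg, 0);        /*  blocks until the reply arrives  */
        zmq_msg_close (&msg);

        zmq_close (req);
        return NULL;
    }

    int main (void)
    {
        void *ctx = zmq_init (512, 1, 0);

        /*  XREP frontend for the worker threads, XREQ backend towards the
            actual service.  Bind the inproc endpoint before workers connect.  */
        void *frontend = zmq_socket (ctx, ZMQ_XREP);
        zmq_bind (frontend, "inproc://requests");
        void *backend = zmq_socket (ctx, ZMQ_XREQ);
        zmq_connect (backend, "tcp://127.0.0.1:5555");

        /*  Spawn the worker threads (5,000 in the scenario above).  */
        pthread_t t;
        pthread_create (&t, NULL, worker, ctx);

        /*  Queue device: shovels requests and replies, never returns.  */
        zmq_device (ZMQ_QUEUE, frontend, backend);
        return 0;
    }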

Anyway, I know it's not your fault, but having 5000 threads in the 
application seems like a pretty dumb idea -- unless you have 5000 CPU 
cores to allocate them to.

> So... I've begun thinking it *might* be cleaner and more efficient to  
> send requests directly into a raw XREQ socket (thank you for having  
> that), and use per-thread mutex/condition variables; that way the  
> timeout would be handled locally.

Migrating a 0MQ socket between threads is not allowed in 2.0.7, even if 
you synchronise access to it. It may work in most cases but can break in 
subtle ways.

I'm working on the infrastructure for migrating sockets at the moment, 
in case you are interested.

> This is neater than wedging a  
> zmq_socket as well.  I built the raw XREQ pipeline code last night,  
> and it runs faster than inproc; I'm not sure why yet.

tcp being faster than inproc? Is it reproducible? Can you provide the 
benchmarking program?

> I haven't built  
> the GUID matching and distribution code yet, which may have a  
> performance impact because it needs thread safe containers, a queue,  
> and an extra service thread.

Mhm. Does it?

My idea was to generate a GUID when sending a request (see src/uuid.hpp) 
and append it to the message. Then chop the GUID off the reply and, if 
it doesn't match the expected GUID, drop the message.
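
Roughly like this (2.0-style zmq_msg_t send/recv; the GUID generator is 
just a random-byte stand-in for src/uuid.hpp, and any routing envelope 
parts are left out):

    #include <zmq.h>
    #include <string.h>
    #include <stdlib.h>

    #define GUID_LEN 16

    /*  Stand-in for src/uuid.hpp: fill buf with GUID_LEN random bytes.  */
    static void generate_guid (unsigned char *buf)
    {
        int i;
        for (i = 0; i != GUID_LEN; i++)
            buf [i] = (unsigned char) (rand () & 0xff);
    }

    /*  Send the request body with a GUID appended, then wait for a reply
        whose trailing GUID matches; drop anything else (e.g. a stale
        reply to an earlier request that already timed out).  */
    static void request_with_guid (void *xreq, const void *body, size_t len)
    {
        unsigned char guid [GUID_LEN];
        generate_guid (guid);

        zmq_msg_t req;
        zmq_msg_init_size (&req, len + GUID_LEN);
        memcpy (zmq_msg_data (&req), body, len);
        memcpy ((unsigned char*) zmq_msg_data (&req) + len, guid, GUID_LEN);
        zmq_send (xreq, &req, 0);
        zmq_msg_close (&req);

        while (1) {
            zmq_msg_t reply;
            zmq_msg_init (&reply);
            zmq_recv (xreq, &reply, 0);

            size_t sz = zmq_msg_size (&reply);
            int match = sz >= GUID_LEN && memcmp (
                (unsigned char*) zmq_msg_data (&reply) + sz - GUID_LEN,
                guid, GUID_LEN) == 0;

            /*  Matching GUID: chop it off, the first sz - GUID_LEN bytes
                are the reply body.  Non-matching GUID: drop the message
                and keep waiting.  */
            zmq_msg_close (&reply);
            if (match)
                return;
        }
    }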

Martin
