[zeromq-dev] max_app_threads = 512

Matt Weinstein matt_weinstein at yahoo.com
Mon Jun 14 15:59:22 CEST 2010


On Jun 14, 2010, at 2:43 AM, Martin Sustrik wrote:

> Matt Weinstein wrote:
>> Martin -
>>
>> 512 is actually pretty generous.
>>
>> In a closed source solution you'd probably just have to call
>> zmq_set_max_threads(N) prior to creating any contexts.
>
> It can be a context creation parameter (as it used to be in 2.0.6).
> What changed is that back then adding more application threads was
> rather expensive (memory usage growing quadratically) even though the
> threads were not used; now the resource consumed by a single unused
> thread is 8 bytes. That makes it possible to allocate a large number
> of them in advance and stop bothering users with it.
>
>> My problem is that I'm being called downward by an opaque system, so
>> I don't have a design limit on the max number of threads.
>
> You just have to choose one, however large.
>
>> Meanwhile, OSX started complaining about running out of fds (ulimit
>> and/or sysctl should fix that :-) ), and I started wondering whether
>> inproc fds are best in this scenario.  If I have 5,000 REQ threads
>> waiting on an inproc: socket with an XREP in a queue device, will
>> this end up being a hog on CentOS?
>
> Well, there's a socketpair associated with each application thread. No
> idea how badly 5,000 socket pairs would hog the system.
>
Yes, it just sounds like a bad idea, though :)
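
For reference, the shape I'm describing is roughly this (2.0-style C
API, endpoint names made up, error checks omitted; if zmq_device()
isn't available in your build, the same shuttling can be done by hand
with zmq_poll):

#include <zmq.h>
#include <pthread.h>
#include <string.h>

#define NTHREADS 8   /* stand-in for the real (much larger) count */

/* One REQ socket per application thread, all talking over inproc
   to the XREP side of a queue device. */
static void *worker (void *ctx)
{
    void *req = zmq_socket (ctx, ZMQ_REQ);
    zmq_connect (req, "inproc://requests");

    zmq_msg_t msg;
    zmq_msg_init_size (&msg, 5);
    memcpy (zmq_msg_data (&msg), "hello", 5);
    zmq_send (req, &msg, 0);
    zmq_msg_close (&msg);

    zmq_msg_init (&msg);
    zmq_recv (req, &msg, 0);     /* REQ blocks here until the reply */
    zmq_msg_close (&msg);

    zmq_close (req);
    return NULL;
}

int main (void)
{
    /* 2.0-style init: app_threads, io_threads, flags.  app_threads
       has to cover the workers plus this thread. */
    void *ctx = zmq_init (NTHREADS + 1, 1, 0);

    void *frontend = zmq_socket (ctx, ZMQ_XREP);
    zmq_bind (frontend, "inproc://requests");   /* bind before connect */

    void *backend = zmq_socket (ctx, ZMQ_XREQ);
    zmq_connect (backend, "tcp://localhost:5555");  /* outbound hop */

    pthread_t threads [NTHREADS];
    for (int i = 0; i != NTHREADS; i++)
        pthread_create (&threads [i], NULL, worker, ctx);

    /* Shuttles requests out and replies back; never returns. */
    zmq_device (ZMQ_QUEUE, frontend, backend);
    return 0;
}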

> Anyway, I know it's not your fault, but having 5000 threads in the
> application seems like a pretty dumb idea -- unless you have 5000 CPU
> cores to allocate them to.

I'm brokering and sequencing requests; it's not my fault, really. :)

>
>> So... I've begun thinking it *might* be cleaner and more efficient to
>> send requests directly into a raw XREQ socket (thank you for having
>> that), and use per-thread mutex/condition variables; that way the
>> timeout would be handled locally.
>
> Migrating a 0MQ socket between threads is not allowed in 2.0.7, even
> if you synchronise access to it. It may work in most cases but can
> break in subtle ways.
>
> I'm working on infrastructure for migrating sockets at the moment in
> case you are interested in it.
>
I suppose it would eliminate the service thread and queue, in exchange  
for locking running contexts against a mutex and a full memory barrier  
to switch owners.

== many threads == queue -- one thread -- outbound socket
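
In code, the waiting side is something along these lines (a minimal
sketch; the request queue itself, the service thread's 0MQ work, and
the slot initialisation are all omitted, and every name is invented):

#include <pthread.h>
#include <sys/time.h>
#include <errno.h>

/* Per-request slot: the caller parks on its own condition variable
   while the single service thread owns the outbound XREQ socket.
   lock/done need pthread_mutex_init / pthread_cond_init elsewhere. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  done;
    int             ready;    /* set by the service thread */
    void           *reply;    /* filled in by the service thread */
} request_t;

/* Caller side: after enqueueing the request, wait with a local
   timeout instead of blocking inside a 0MQ recv. */
static int wait_for_reply (request_t *r, int timeout_ms)
{
    struct timeval now;
    gettimeofday (&now, NULL);

    struct timespec ts;
    ts.tv_sec  = now.tv_sec + timeout_ms / 1000;
    ts.tv_nsec = (now.tv_usec + (timeout_ms % 1000) * 1000) * 1000L;
    if (ts.tv_nsec >= 1000000000L) {
        ts.tv_sec++;
        ts.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock (&r->lock);
    int rc = 0;
    while (!r->ready && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait (&r->done, &r->lock, &ts);
    pthread_mutex_unlock (&r->lock);

    return r->ready ? 0 : -1;    /* -1: timed out locally */
}

/* Service thread side, once it has matched a reply to this request. */
static void complete_request (request_t *r, void *reply)
{
    pthread_mutex_lock (&r->lock);
    r->reply = reply;
    r->ready = 1;
    pthread_cond_signal (&r->done);
    pthread_mutex_unlock (&r->lock);
}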


>> This is neater than wedging in a zmq_socket as well.  I built the raw
>> XREQ pipeline code last night, and it runs faster than inproc; I'm
>> not sure why yet.
>
> tcp being faster than inproc? Is it reproducible? Can you provide the
> benchmarking program?
>
>> I haven't built
>> the GUID matching and distribution code yet, which may have a
>> performance impact because it needs thread safe containers, a queue,
>> and an extra service thread.

> Mhm. Does it?
>
Good question.
>
> My idea was to generate a GUID when sending a request (see
> src/uuid.hpp) and append it to the message. Then to chop off the GUID
> from the reply and, if it doesn't match the expected GUID, drop the
> message.
>
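
If I follow, you mean roughly this (a sketch only: a plain counter
stands in for a real uuid from src/uuid.hpp, the tag rides on the end
of the message body, and it's the 2.0-style send/recv API):

#include <zmq.h>
#include <string.h>
#include <stdint.h>

/* Sender side: append a per-request tag to the end of the body. */
static void send_tagged (void *sock, const void *body, size_t len,
    uint64_t tag)
{
    zmq_msg_t msg;
    zmq_msg_init_size (&msg, len + sizeof tag);
    memcpy (zmq_msg_data (&msg), body, len);
    memcpy ((char*) zmq_msg_data (&msg) + len, &tag, sizeof tag);
    zmq_send (sock, &msg, 0);
    zmq_msg_close (&msg);
}

/* Receiver side: chop the tag off the reply and drop the message if
   it isn't the one we are waiting for.  Returns 0 on a match, -1 for
   a stale reply (already closed), assuming the service echoes the tag
   back untouched. */
static int recv_matching (void *sock, zmq_msg_t *reply, uint64_t expected)
{
    uint64_t tag;

    zmq_msg_init (reply);
    zmq_recv (sock, reply, 0);

    size_t size = zmq_msg_size (reply);
    if (size < sizeof tag) {
        zmq_msg_close (reply);
        return -1;
    }
    memcpy (&tag, (char*) zmq_msg_data (reply) + size - sizeof tag,
        sizeof tag);
    if (tag != expected) {
        zmq_msg_close (reply);
        return -1;
    }
    return 0;    /* caller reads size - sizeof tag bytes of payload */
}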

The service thread and queue provide a solution to the many-to-one  
problem for sockets.  I may sleep the contexts against a condition  
variable; we'll see...

I do hitchhike on 0MQ's uuid XREQ protocol.


> Martin
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev

Best,
Matt


