[zeromq-dev] max_app_threads = 512

Matt Weinstein matt_weinstein at yahoo.com
Wed Jun 16 13:29:04 CEST 2010


On Jun 16, 2010, at 1:36 AM, Martin Sustrik wrote:

> Matt Weinstein wrote:
>
>>>> == many threads == queue -- one thread -- outbound socket
>>> Yes. In theory you can use it that way. Obviously the performance  
>>> will
>>> be poor and no fair-queueing will be done.
>>>
>>
>> Yes, you're probably right.
>>
>> How about lockless queues (pipes) and round-robin scheduling, similar
>> to what you're doing.  I can just wake up my service thread with a
>> single condition_variable and do the scan and forward.
>>
>> Are there other performance implications / does that sound workable?
>> Or am I missing something?
>
> I would expect the performance impact to be associated with migrating
> the pipes among OS threads. Cache synchronisation is expensive.
>

First off, let me apologize for asking newbie questions. SMP has  
really only been a serious subject for me for the last four weeks.  My  
graduate work predated the RAM model (1980s), although I'm an EE so I  
understand the pipelining tradeoffs and store-ordering "non-guarantees".

As near as I can tell, it should be straightforward to build a 1R/1W  
lockless pipe using barriers (certainly you can do a circular buffer  
with a count, so this should extend to a queue, although for  
efficiency it may require an unrolled list, i.e. one allocation per  
N blocks).  I haven't had the time to dig into your ypipes yet,  
which may already be doing that.

Am I correct here?

Intuitively I think it should be possible to build a zero-barrier  
1R/1W lockless queue leveraging CPU affinity and cache-alignment  
tricks to preserve store ordering (essentially using memory as an  
inter-processor messaging platform), but it seems pretty exotic, and  
I've got to study the cache-update guarantees for modern architectures.

I've been reading the web pubs, but probably have to buy an actual  
book (!) to understand the basics of the RAM model (tall RAM  
assumption, etc., etc.)  Recommendations would be appreciated.

Thanks,

Best,

Matt

> Martin
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev



