[zeromq-dev] LMAX/Disruptor code project

Steven McCoy steven.mccoy at miru.hk
Fri Jul 29 04:12:53 CEST 2011

On 28 July 2011 19:43, Ian Barber <ian.barber at gmail.com> wrote:

>> Memory management and NUMA-awareness are very difficult; ptmalloc3 is
>> good enough that you don't need your own slab allocator the way GLib
>> has one, and the Windows equivalent is the Concurrency Runtime.  The
>> fight is between pre-allocating a large arena and optimising it for one
>> fast consumer or producer, and making a multi-thread-friendly
>> architecture that allows multiple threads to publish or consume.  With
>> PGM there is a big design issue in how to handle new publishers:
>> whether you have a bus topology, or limit to one sender at a time and
>> recycle the receive window.  It is very expensive to allocate large
>> chunks of memory for each new receiver.
> From what I understand, the disruptor basically acts as a bus with one
> large pre-allocated block of memory, but once a writing process has a
> slot it can happily write into that part of the buffer without clashing
> with other publishers, as they attempt to pad their slots to be as
> cache-independent as possible.  I believe that most of the actual entry
> structures are pre-allocated as well, in part to avoid GC by making them
> very long-lived; again, I think that reflects the nature of the events
> they're processing.
The significant design point I missed on first reading is that a single
disruptor ring can support multiple publishers.  This is an architecture
I need to move the PGM RX window towards in order to keep memory overhead
sensible and support embedded environments.  It looks like a good idea
for OpenPGM 6.

> The disruptors I've described are used in a style with one producer and
> multiple consumers, but this isn't a limitation of the design of the
> disruptor.  The disruptor can work with multiple producers too; in this
> case it still doesn't need locks. [13]
> <http://martinfowler.com/articles/lmax.html#footnote-multi-producer>

