[zeromq-dev] LMAX/Disruptor code project

Ian Barber ian.barber at gmail.com
Thu Jul 28 13:43:45 CEST 2011


On Thu, Jul 28, 2011 at 12:08 PM, Steven McCoy <steven.mccoy at miru.hk> wrote:

> The multiple consumers in the LMAX ring are the same as the multiple
> windows I manage within the TX and RX windows to keep data for FEC
> construction and reconstruction.  A simpler architecture can be used when
> not requiring FEC, but this was the best design I arrived at for parity
> packets.
Yeah, that sounds pretty similar (from my PoV of usually having to read
your messages with Google open :)). From what I understand, one of the
nice properties of the LMAX architecture is the ability to have different
stages of parallel consumption, which is done by storing the current slot
position of each consumer centrally along with the buffer, so later-stage
processes can wait until the earlier stages have done their processing.
That situation doesn't really exist in the PGM world as far as I
understand it, as they are solving somewhat different problems. The LMAX
guys do point out in their QCon presentation that their structure is
similar to some network card internals etc., so they are aware other
people have worked on similar solutions - what is interesting is that
they have taken a fairly low-level process and brought a version of it to
application-level problems, and that it seems to be a particularly good
way of dealing with the problem of sequencing a lot of events to known
consumers.
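
To make that concrete, here is a very rough Java sketch of how I read the
gating idea - the names are mine, not the actual Disruptor API: each stage
publishes the last slot it has finished in its own counter, and a later
stage only touches a slot once every stage it depends on has gone past it.

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch only, not the real Disruptor classes: one counter per stage,
    // kept alongside the ring buffer and read by the stages behind it.
    class Sequence {
        private final AtomicLong value = new AtomicLong(-1);
        long get()        { return value.get(); }
        void set(long v)  { value.lazySet(v); }
    }

    class StageConsumer {
        final Sequence done = new Sequence();  // last slot this stage finished
        private final Sequence[] dependsOn;    // upstream stages to wait on

        StageConsumer(Sequence... dependsOn) { this.dependsOn = dependsOn; }

        // Wait until every upstream stage has passed `slot`, then do our
        // own work on it and record that we are done with it.
        void consume(long slot, Runnable work) {
            while (minOf(dependsOn) < slot)
                Thread.yield();                // real code would wait smarter
            work.run();
            done.set(slot);
        }

        private static long minOf(Sequence[] seqs) {
            long min = Long.MAX_VALUE;
            for (Sequence s : seqs) min = Math.min(min, s.get());
            return min;
        }
    }

So, for example, a journalling stage and a replication stage could both
depend only on the producer's counter and run in parallel, while the
business-logic stage depends on both of them.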

> Memory management and NUMA-awareness are very difficult; ptmalloc3 is
> so good that you don't need your own SLAB manager like GLib has, and the
> Windows equivalent is the Concurrency Runtime.  The fight is between
> pre-allocating a large arena and optimising it for one fast consumer or
> producer, and making a multi-thread-friendly architecture that allows
> multiple threads to publish or consume.  With PGM there is a big design
> issue with how to handle new publishers: whether you have a bus topology or
> limit to one sender at a time and recycle the receive window.  It is very
> expensive to allocate large chunks of memory for each new receiver.
>

From what I understand the disruptor basically acts as a bus with one
large pre-allocated block of memory, but once a writing process has
claimed a slot it can happily write into that part of the buffer without
clashing with other publishers, as the entries are padded so that
neighbouring slots don't share cache lines. I believe that most of the
actual entry structures are pre-allocated as well, partly to avoid GC by
making them very long lived; again I think that reflects the nature of
the events they're processing.
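
Roughly what I mean, again with made-up names rather than their actual
classes: the ring is filled with long-lived entry objects up front, each
publisher claims a distinct slot with an atomic increment, and the
entries carry padding so two slots being written at the same time don't
end up on the same cache line.

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch only - a fixed ring of pre-allocated, padded entries;
    // publishers claim distinct slots via an atomic counter and then
    // write without touching each other's memory.
    final class Entry {
        long sequence;
        byte[] payload = new byte[256];   // reused, never reallocated
        long p1, p2, p3, p4, p5, p6, p7;  // padding against false sharing
    }

    final class RingBuffer {
        private final Entry[] entries;
        private final int mask;                     // size is a power of two
        private final AtomicLong claimed = new AtomicLong(-1);

        RingBuffer(int size) {
            entries = new Entry[size];
            mask = size - 1;
            for (int i = 0; i < size; i++)
                entries[i] = new Entry();           // allocate once, up front
        }

        // Each publisher gets its own slot number, so writes to different
        // slots never clash. (A real implementation also has to wait for
        // the slowest consumer before wrapping - omitted here.)
        Entry claimNext() {
            long seq = claimed.incrementAndGet();
            Entry e = entries[(int) (seq & mask)];
            e.sequence = seq;
            return e;
        }
    }

The important bit is that allocation only happens at start-up, so the
entries live long enough that the GC largely leaves them alone.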

Ian