[zeromq-dev] LMAX/Disruptor code project
Steven McCoy
steven.mccoy at miru.hk
Thu Jul 28 13:08:00 CEST 2011
On 28 July 2011 17:57, Ian Barber <ian.barber at gmail.com> wrote:
> On Thu, Jul 28, 2011 at 9:53 AM, Steven McCoy <steven.mccoy at miru.hk> wrote:
>
>> Just saw this article and had a brief read through the linked blogs and
>> documentation. I'm not really picking up on how this is so different from
>> pretty much any C/C++ messaging middleware implementation, it's certainly
>> the same as the TX window in OpenPGM.
>>
>>
>> http://www.low-latency.com/blog/breaking-new-ground-ultra-low-latency-messaging
>
>
> I think the really interesting stuff was actually how they got the single
> thread processing so damn much with the various data structure and code
> optimisations, but I like the idea of using the ring buffer disruptor thing
> to avoid any two processes writing to the same cache line to maximise
> performance in the queue. Is that the same sort of idea that the TX window in
> PGM uses then?
>
>
I use an array-based ring buffer to reduce the time it takes to read entries
for rebroadcasting and for constructing FEC parity packets. During initial
development one option was a full array including the data, but I moved to a
pointer array with separately allocated, Linux SKB-like buffers to reduce
start-up time and to improve cache locality for the RX window.
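Roughly, the shape is something like this -- an illustrative sketch only, with
made-up names (tx_window_t, skb_t), not the actual OpenPGM code:

/* Transmit window as a fixed-size array of pointers to separately
 * allocated, SKB-like buffers.  The pointer array itself is small and
 * cheap to allocate up front; payload buffers are only created as
 * packets are published.
 */
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

typedef struct {
    uint32_t sequence;        /* sequence number carried by this buffer */
    size_t   len;             /* payload length */
    uint8_t  data[1500];      /* payload, MTU-sized here for simplicity */
} skb_t;

typedef struct {
    skb_t  **slots;           /* pointer array: compact to scan */
    uint32_t size;            /* number of slots */
    uint32_t lead;            /* next sequence to publish */
    uint32_t trail;           /* oldest sequence still retained */
} tx_window_t;

static tx_window_t *tx_window_create(uint32_t size)
{
    tx_window_t *w = calloc(1, sizeof *w);
    w->slots = calloc(size, sizeof *w->slots);  /* no payload memory yet */
    w->size  = size;
    return w;
}

/* Publish a packet, recycling whichever slot it overwrites. */
static void tx_window_push(tx_window_t *w, const void *data, size_t len)
{
    uint32_t idx = w->lead % w->size;
    if (w->slots[idx]) {
        free(w->slots[idx]);
        w->trail++;
    }
    skb_t *skb = malloc(sizeof *skb);
    skb->sequence = w->lead++;
    skb->len = len < sizeof skb->data ? len : sizeof skb->data;
    memcpy(skb->data, data, skb->len);
    w->slots[idx] = skb;
}

/* Look up a retained packet by sequence, e.g. for NAK-driven repair.
 * (Wrap-around handling is omitted to keep the sketch short.) */
static const skb_t *tx_window_peek(const tx_window_t *w, uint32_t sequence)
{
    if (sequence < w->trail || sequence >= w->lead)
        return NULL;
    return w->slots[sequence % w->size];
}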
The multiple consumers in the LMAX ring are much like the multiple windows I
manage within the TX and RX windows to keep data around for FEC construction
and reconstruction. A simpler architecture can be used when FEC is not
required, but this was the best design I arrived at for parity packets.
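In the same illustrative spirit (again made-up names, not the real code),
several independent read cursors can share one ring, just as LMAX's consumers
do -- one tracking retransmission state, one tracking how far parity
construction has progressed -- and the trailing edge can only advance past
the slowest of them:

#include <stdint.h>

typedef struct {
    uint32_t lead;        /* newest published sequence + 1 */
    uint32_t retransmit;  /* cursor: packets still eligible for repair */
    uint32_t fec;         /* cursor: next packet to fold into a parity block */
} window_cursors_t;

/* Slots may only be released once every consumer has finished with them. */
static uint32_t window_min_cursor(const window_cursors_t *c)
{
    return (c->retransmit < c->fec) ? c->retransmit : c->fec;
}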
Memory management and NUMA awareness are very difficult. ptmalloc3 is good
enough that you don't need your own SLAB manager like GLib has; the Windows
equivalent is the Concurrency Runtime. The fight is between pre-allocating a
large arena and optimising it for one fast consumer or producer, versus
making a multi-thread-friendly architecture that allows multiple threads to
publish or consume. With PGM there is a big design issue around how to handle
new publishers: whether you have a bus topology, or limit to one sender at a
time and recycle the receive window. It is very expensive to allocate large
chunks of memory for each new receiver.
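For what it's worth, the "pre-allocate a large arena" option looks roughly
like the following sketch (illustrative names and sizes, not OpenPGM's
allocator): the whole receive window's payload is reserved in one allocation
per receiver and carved into fixed-size buffers, trading a large up-front cost
for cheap, cache-friendly reuse afterwards.

#include <stdlib.h>
#include <stdint.h>

enum { RXW_ENTRIES = 4096, RXW_BUFSZ = 2048 };

typedef struct {
    uint8_t *arena;     /* RXW_ENTRIES * RXW_BUFSZ bytes, allocated once */
} rx_arena_t;

static int rx_arena_init(rx_arena_t *a)
{
    a->arena = malloc((size_t)RXW_ENTRIES * RXW_BUFSZ);
    return a->arena ? 0 : -1;
}

/* Index straight into the arena instead of calling malloc per packet. */
static uint8_t *rx_arena_slot(rx_arena_t *a, uint32_t sequence)
{
    return a->arena + (size_t)(sequence % RXW_ENTRIES) * RXW_BUFSZ;
}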
In 0MQ, PGM is single-threaded and so the design can be tuned further; there
are some branches in the repository that have done such work and achieved
performance improvements. The counter-argument is that throwing better
hardware at the problem is usually more effective, as the protocol is
currently still limited by the network hardware rather than by memory or
processor time.
--
Steve-o