[zeromq-dev] Windows multicast

Steven McCoy steven.mccoy at miru.hk
Tue Apr 26 13:07:10 CEST 2011


On 24 April 2011 10:31, Steven McCoy <steven.mccoy at miru.hk> wrote:

> On 24 April 2011 03:17, Robert Falcone <falcone.rob at gmail.com> wrote:
>
>>   I am developing with MSVC++ 2010. I have read that it is tricky to get
>> multicast (with openPGM) working with ZeroMQ on windows. I will be sending
>> out market data at high rates and would love to test it out.
>>
>> Does anyone know if it is possible to use ZeroMQ to multicast?
>>     If so can you please provide information on how to get it to work?
>>
>>
> Grab an OpenPGM package from here: http://見.香港/openpgm/
>
>
> Then build ØMQ with ZMQ_HAVE_OPENPGM and include & lib paths set
> appropriately.
>
> I'm sure I wrote the OS configuration on the wiki somewhere, but I'm just
> copying from a previous mail.  Be prepared for poor performance if you don't
> have TX & RX coalescing enabled and actually working; check with PerfMon
> that the TX counter increments.  I've noted a problem with Broadcom server
> NICs, whereas Linux and FreeBSD don't suffer these problems, go figure.
>
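
For reference, once ØMQ is built that way, a bare-bones publisher over PGM
might look like the sketch below.  This assumes the ØMQ 2.x C API; the
interface address, multicast group and port are placeholders, and error
checking is omitted.

/* Minimal sketch: ØMQ PUB socket over PGM multicast (epgm = PGM over UDP).
 * Requires libzmq built with ZMQ_HAVE_OPENPGM. */
#include <zmq.h>
#include <stdint.h>
#include <string.h>

int main (void)
{
    void *ctx = zmq_init (1);
    void *pub = zmq_socket (ctx, ZMQ_PUB);
    int64_t rate = 100000;      /* kbits/s; the 2.x default is only 100 */
    zmq_msg_t msg;

    /* high-rate market data will want ZMQ_RATE raised from its default */
    zmq_setsockopt (pub, ZMQ_RATE, &rate, sizeof rate);

    /* "interface;group:port" below are placeholder values */
    zmq_connect (pub, "epgm://192.168.0.10;239.192.1.1:7500");

    zmq_msg_init_size (&msg, 5);
    memcpy (zmq_msg_data (&msg), "hello", 5);
    zmq_send (pub, &msg, 0);
    zmq_msg_close (&msg);

    zmq_close (pub);
    zmq_term (ctx);
    return 0;
}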

I ran these tests in November and published them on the list; today I finally
moved them over to the blog.  They are still reasonably accurate as a latency
comparison of Linux and Windows.

http://openpgmdev.blogspot.com/2011/04/wherefore-art-thou-ip-packet-make-haste.html

The effects are compounded because Windows only has millisecond timers: the
performance tool rounds times up to the nearest millisecond on Windows,
trading latency against busy-wait time.  Busy-waiting on timers is quite
adverse to performance on systems without a significant number of unbound
cores.  A lot of engineering work has gone into ensuring busy-waiting does
not occur on single-core machines or in processes with only one available
core, whether due to process affinity or deactivated cores.
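
As a rough illustration of that latency versus busy-wait trade-off (and not
OpenPGM's actual implementation), a sub-millisecond wait on Windows tends to
end up looking something like this:

/* Sketch only: Sleep() has millisecond-ish granularity, so a sub-millisecond
 * wait either rounds up (adding latency) or spins on the high-resolution
 * counter (burning a core). */
#include <windows.h>

static void wait_microseconds (LONGLONG usec)
{
    LARGE_INTEGER freq, start, now;
    LONGLONG target;

    QueryPerformanceFrequency (&freq);
    QueryPerformanceCounter (&start);
    target = usec * freq.QuadPart / 1000000;

    if (usec > 2000)                        /* coarse, scheduler-driven part */
        Sleep ((DWORD)(usec / 1000) - 1);

    do {                                    /* fine part: busy-wait remainder */
        QueryPerformanceCounter (&now);
    } while (now.QuadPart - start.QuadPart < target);
}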

On that note, all the multi-threading and synchronisation complexity that
Martin & Pieter say ØMQ avoids for performance reasons can probably be found
in OpenPGM.

:-)

For OpenPGM inside ØMQ you could theoretically disable a lot of the threading
support; the idea is to make a libpgm_se (non-re-entrant) edition.  But the
savings, when measured in PGM, are untraceable: the overhead of atomic
operations is dwarfed by the demands of the memory allocator and the IP
stack.  Consider that most Ethernet adapters need buffer coalescing to reach
even gigabit rates; there are still many technology limitations and buffer
bloat, as per the LKML discussion, in the way, short of moving to different
fabrics.
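
A hypothetical sketch of what such a build switch might look like; this is
not OpenPGM's actual source, just an illustration of the atomic operations
that would be compiled away in a non-re-entrant edition:

/* Hypothetical: compile reference counting down to plain increments when
 * re-entrancy is not needed (PGM_SINGLE_THREADED is an invented macro). */
#include <stdint.h>
#ifdef _WIN32
#  include <windows.h>
#endif

static inline uint32_t ref_inc (volatile uint32_t *p)
{
#if defined(PGM_SINGLE_THREADED)
    return ++*p;                                   /* no contention possible */
#elif defined(_WIN32)
    return InterlockedIncrement ((volatile LONG *) p);
#else
    return __sync_add_and_fetch (p, 1);            /* GCC builtin */
#endif
}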

-- 
Steve-o