[zeromq-dev] [Openpgm-dev] latency increasing

Steven McCoy steven.mccoy at miru.hk
Fri Feb 6 09:13:24 CET 2009


2009/2/6 Steven McCoy <steven.mccoy at miru.hk>:
> That is presumably a context switch on a call to g_static_mutex_unlock().

This all happens when the SPM heartbeat intervals are too short.  The
pgmsend examples use very aggressive low-latency settings that cause
serious problems with expensive context switches, especially on my
hardware with only one core.  Increasing the intervals to Rendezvous
style values immediately gives a drastic improvement in call cost:

pgmsend example settings:

pgm_transport_set_ambient_spm (g_transport, 8192*1000);
guint spm_heartbeat[] = { 1*1000, 1*1000, 2*1000, 4*1000, 8*1000,
16*1000, 32*1000, 64*1000, 128*1000, 256*1000, 512*1000, 1024*1000,
2048*1000, 4096*1000, 8192*1000 };

4b:  2028us
256b:  2025us
1400b:  2028us

Rendezvous settings:

pgm_transport_set_ambient_spm (g_transport, pgm_secs(30));
guint spm_heartbeat[] = { pgm_msecs(100), pgm_msecs(100),
pgm_msecs(100), pgm_msecs(100), pgm_msecs(1300), pgm_secs(7),
pgm_secs(16), pgm_secs(25), pgm_secs(30) };

4b:  41us
256b:  45us
1400b:  47us

Low-latency settings tuned to the context-switch cost:

pgm_transport_set_ambient_spm (g_transport, 8192*1000);
guint spm_heartbeat[] = { 4*1000, 4*1000, 8*1000, 16*1000, 32*1000,
64*1000, 128*1000, 256*1000, 512*1000, 1024*1000, 2048*1000,
4096*1000, 8192*1000 };

4b:  40us
256b:  41us
1400b:  43us
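
For completeness, a minimal sketch of applying the tuned intervals to
the transport, along the lines of the pgmsend example (the header path
and the wrapper function are illustrative only; the interval values are
taken to be in microseconds, so 8192*1000 is roughly 8.2 seconds):

#include <glib.h>
#include <pgm/transport.h>

/* Apply the tuned low-latency SPM settings to an existing transport.
 * The transport is assumed to be already created and bound, as in the
 * pgmsend example; interval values are in microseconds. */
static void
set_tuned_spm_intervals (pgm_transport_t* transport)
{
	static guint spm_heartbeat[] = { 4*1000, 4*1000, 8*1000, 16*1000,
		32*1000, 64*1000, 128*1000, 256*1000, 512*1000, 1024*1000,
		2048*1000, 4096*1000, 8192*1000 };

	pgm_transport_set_ambient_spm (transport, 8192*1000);
	pgm_transport_set_heartbeat_spm (transport, spm_heartbeat,
		G_N_ELEMENTS (spm_heartbeat));
}

The Rendezvous-style block would be applied the same way, just with the
pgm_secs()/pgm_msecs() values instead.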

-- 
Steve-o


