[zeromq-dev] [Openpgm-dev] latency increasing
Steven McCoy
steven.mccoy at miru.hk
Fri Feb 6 08:18:06 CET 2009
2009/2/6 Martin Sustrik <sustrik at fastmq.com>:
> Maybe the slowdown is not related to checksums or sending. I've noticed
> there's a mutex synchronising access to the tx window. Maybe the sender
> thread has to wait for extended periods to get access to the critical
> section?
It seems the SPM spinlock can be at fault after sending a message
(pgm_reset_heartbeat_spm); with a broken-down time trace it's taking
2ms here:
2009-02-06 14:52:15 ayaka Pgm: 0# 0us
2009-02-06 14:52:15 ayaka Pgm: 1# 132us - after parameter check
2009-02-06 14:52:15 ayaka Pgm: 2# 182us - after header setup (50us)
2009-02-06 14:52:15 ayaka Pgm: 3# 228us - after writer lock (46us)
2009-02-06 14:52:15 ayaka Pgm: 4# 305us - after sn#s (77us)
2009-02-06 14:52:15 ayaka Pgm: 5# 360us - after cksum (55us)
2009-02-06 14:52:15 ayaka Pgm: 6# 413us - after pgm_txw_push (53us)
2009-02-06 14:52:15 ayaka Pgm: 7# 472us - after pgm_sendto (59us)
2009-02-06 14:52:15 ayaka Pgm: 8# 527us - after writer unlock (55us)
2009-02-06 15:13:03 ayaka Pgm: lock in 0us
2009-02-06 15:13:03 ayaka Pgm: state update in 53us
2009-02-06 15:13:03 ayaka Pgm: notify in 130us
2009-02-06 15:13:03 ayaka Pgm: unlock in 2067us
2009-02-06 14:52:15 ayaka Pgm: 9# 2594us - after pgm_reset_heartbeat_spm (2ms)
2009-02-06 14:52:15 ayaka Pgm: 10# 2643us - end of pgm_transport_send_one
2009-02-06 14:52:15 ayaka: message sent in 2690us
That is presumably a context switch on a call to g_static_mutex_unlock().
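For what it's worth, unlocking a contended mutex can wake a blocked
thread and deschedule the unlocker, which would account for a stall of
that size. Here's a minimal standalone sketch, not OpenPGM code, that
times g_static_mutex_unlock() while a second thread is blocked on the
same mutex (the waiter scaffolding is made up for illustration):

/* gcc unlock-demo.c $(pkg-config --cflags --libs gthread-2.0) */
#include <glib.h>
#include <stdio.h>

static GStaticMutex mutex = G_STATIC_MUTEX_INIT;

/* blocks on the mutex until the main thread releases it */
static gpointer
waiter (gpointer data)
{
    g_static_mutex_lock (&mutex);
    g_static_mutex_unlock (&mutex);
    return NULL;
}

int
main (void)
{
    GTimeVal start, end;
    GThread* thread;

    g_thread_init (NULL);
    g_static_mutex_lock (&mutex);
    thread = g_thread_create (waiter, NULL, TRUE, NULL);
    g_usleep (10000);                  /* give the waiter time to block */

    g_get_current_time (&start);
    g_static_mutex_unlock (&mutex);    /* may wake the waiter and context switch */
    g_get_current_time (&end);

    printf ("unlock in %ldus\n",
            (end.tv_sec - start.tv_sec) * 1000000L
            + (end.tv_usec - start.tv_usec));
    g_thread_join (thread);
    return 0;
}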
I'm finding it overly easy to get gigantic timing errors during
profiling, which isn't helping.
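One way to reduce those errors would be to take the checkpoints from
the monotonic clock, which isn't stepped by NTP the way
g_get_current_time()/gettimeofday() is. A rough sketch, with made-up
helper names and the pgm_* stages elided to comments:

/* gcc checkpoints.c -lrt  (clock_gettime lives in librt on older glibc) */
#include <stdio.h>
#include <time.h>

#define MAX_CHECKPOINTS 16

static struct timespec cp[MAX_CHECKPOINTS];
static int n_cp = 0;

/* record one checkpoint from the monotonic clock */
static void
checkpoint (void)
{
    clock_gettime (CLOCK_MONOTONIC, &cp[n_cp++]);
}

/* microseconds from checkpoint 0 to checkpoint i */
static long
elapsed_us (int i)
{
    return (cp[i].tv_sec - cp[0].tv_sec) * 1000000L
         + (cp[i].tv_nsec - cp[0].tv_nsec) / 1000L;
}

int
main (void)
{
    int i;
    checkpoint ();    /* 0# */
    /* ... parameter check ... */
    checkpoint ();    /* 1# */
    /* ... header setup, sn#s, cksum, pgm_txw_push, pgm_sendto ... */
    checkpoint ();    /* 2# */
    for (i = 0; i < n_cp; i++)
        printf ("%d# %ldus\n", i, elapsed_us (i));
    return 0;
}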
--
Steve-o