[zeromq-dev] PGM at high throughput

Robin Weisberg robin.weisberg at gmail.com
Wed Jul 22 20:12:21 CEST 2009


I've been testing sending PGM from Linux to Windows and from Linux to Linux using
pgm_remote_thr/pgm_local_thr, and I have a few questions and possibly some issues.

1) pgm_remote_thr reports message gaps even for tests that last less than 3
seconds. I thought recovery was being done at the PGM level for 10
seconds? (pgm_secs is set to 10 in config.hpp.) This happens on Windows and
sometimes on Linux. FYI, I previously changed pgm_max_rte to 125000000 in
config.hpp.

C:\ework\zmqsvnv1.0.0\windows\Release>pgm_remote_thr.exe wimp
"192.168.0.195;226.0.0.1:7500" 100 100000
iface to connect to local_exchange: 192.168.0.195;226.0.0.1:7500
message size: 100 [B]
message count: 100000

msg# 16742 GAP
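
For reference, here is roughly what I'm assuming those two constants look like
in config.hpp after my edit (paraphrased from memory, so the exact declarations
around them may not match the real 1.0.0 source):

namespace zmq
{
    //  Maximum PGM transmit rate, in bytes per second.
    //  I raised this from the default to 125 MB/s (125000000).
    const int pgm_max_rte = 125000000;

    //  Length of the PGM transmit window, in seconds. Data older than
    //  this can no longer be repaired via NAK/retransmission, so I'd
    //  expect gaps only for data more than 10 seconds old.
    const int pgm_secs = 10;
}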


2) On the Linux side, while publishing for extended periods of time, I see the
warning shown in the output below. I'm assuming it means the I/O thread is stuck
sending because the OS buffers are full, and that it doesn't indicate a real
problem that could cause message loss (see the sketch after the output for what
I'm picturing). Can someone confirm that?

root at dev01:/local/home/data/robin/zmq-1.0.0/perf/tests/zmq# ./pgm_local_thr
wimp "eth0;226.0.0.1:7500" 100 100000
local_exchange network: zmq.pgm://eth0;226.0.0.1:7500
message size: 100 [B]
message count: 1000000
Start pgm_remote_thr on remote host and pres enter to continue.


** (process:28101): WARNING **: sendto 226.0.0.1 failed: 105/No buffer space
available

** (process:28101): WARNING **: sendto 226.0.0.1 failed: 105/No buffer space
available

.... (lots of these)

** (process:28101): WARNING **: sendto 226.0.0.1 failed: 105/No buffer space
available

** (process:28101): WARNING **: sendto 226.0.0.1 failed: 105/No buffer space
available
Pres enter when pgm_remote_thr exits.
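
To make my assumption for question 2 concrete, here is a rough sketch of the
pattern I'm picturing on the sending side (my own illustration only, not the
actual zmq/OpenPGM send path):

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>
#include <cstddef>

//  Assumed behaviour: when the kernel send buffer is full, sendto()
//  fails with errno 105 (ENOBUFS), the sender backs off and retries,
//  and the datagram is never dropped by the sending application.
ssize_t send_with_retry (int fd, const void *buf, size_t len,
    const sockaddr *addr, socklen_t addrlen)
{
    while (true) {
        ssize_t rc = sendto (fd, buf, len, 0, addr, addrlen);
        if (rc >= 0)
            return rc;          //  Datagram handed to the kernel.
        if (errno == ENOBUFS || errno == EAGAIN)
            usleep (1000);      //  Buffers full: wait a bit and retry.
        else
            return -1;          //  Any other error is a real failure.
    }
}

If that's roughly what the I/O thread does internally, the warnings would just
be back-pressure from full OS buffers rather than actual message loss, which is
what I'd like confirmed.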

