[zeromq-dev] inproc PUB/SUB problem, how to profile

Jan Müller 217534 at gmail.com
Mon Mar 30 12:45:14 CEST 2015


Hi,

I converted a set of TCP PUB/SUB-connected processes into a single
application using inproc sockets. Now it seems to me this application is
somewhat slower than the multi-process implementation.

The application (which runs on Linux) looks roughly like this; a code
sketch of the wiring follows the list:

1. get the data from the network through UDP (around 10 kbit at 20 kHz)
2. "unpack" the data and publish (PUB) it over inproc
3. multiple threads subscribe (SUB) and do various operations (I/O,
filtering, etc.)
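
Very roughly, the wiring looks like this (a minimal sketch with placeholder
names, error handling omitted; the real code is of course more involved):

#include <pthread.h>
#include <zmq.h>

static void *ctx;   /* one shared context; inproc only works within a single context */

static void *subscriber_thread(void *arg)
{
    void *sub = zmq_socket(ctx, ZMQ_SUB);
    zmq_connect(sub, "inproc://samples");
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);      /* subscribe to everything */
    for (;;) {
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        zmq_msg_recv(&msg, sub, 0);
        /* ... I/O, filtering, writing to disk, ... */
        zmq_msg_close(&msg);
    }
    return NULL;
}

int main(void)
{
    ctx = zmq_ctx_new();

    void *pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "inproc://samples");              /* bind before the SUBs connect */

    pthread_t workers[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&workers[i], NULL, subscriber_thread, NULL);

    for (;;) {
        char buf[1400];
        size_t len = sizeof buf;
        /* 1. recvfrom() on the UDP socket (~10 kbit at 20 kHz) */
        /* 2. unpack the datagram into buf / len                */
        /* 3. publish to all subscriber threads                 */
        zmq_send(pub, buf, len, 0);
    }
}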

Now it seems that when the system is under load (other applications are
running), everything gets a bit unresponsive, and the kernel drops some of
the incoming UDP packets (even though I set the kernel receive buffer quite
high, to 40 MB). I'm pretty sure nothing in the zmq part gets dropped
(HWM == 0).
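
For reference, the buffers are set up roughly like this (a sketch; udp_fd,
pub and sub stand in for my real sockets):

#include <sys/socket.h>
#include <zmq.h>

/* Sketch of the buffer setup; udp_fd, pub and sub are placeholders. */
static void configure_buffers(int udp_fd, void *pub, void *sub)
{
    /* Kernel-side UDP receive buffer, ~40 MB (only effective up to net.core.rmem_max). */
    int rcvbuf = 40 * 1024 * 1024;
    setsockopt(udp_fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf);

    /* zmq side: an HWM of 0 means "no limit" per zmq_setsockopt(3), so nothing
       is dropped, but the in-memory queues are allowed to grow. */
    int hwm = 0;
    zmq_setsockopt(pub, ZMQ_SNDHWM, &hwm, sizeof hwm);
    zmq_setsockopt(sub, ZMQ_RCVHWM, &hwm, sizeof hwm);
}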

It also looks like more and more memory gets used over time, although the
zmq application itself shows only a few tens of MB when I check it with
e.g. "top".

I'm not sure where the bottleneck is (e.g. a slow subscriber).

What is the behavior of inproc PUB/SUB when there is a slow subscriber?
Will one slow subscriber delay everyone else, or will the publisher happily
keep filling up the shared memory? Is more and more memory allocated once a
subscriber does not keep up with reading?

In short: can the "suicidal snail" pattern be applied to _inproc_ PUB/SUB
as well?

Can I have a finite HWM on the PUB side and HWM == 0 on the SUB side (where
I need to write the data to disk, sometimes with long response times)?
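
I.e. something like this (just a sketch of what I mean; the socket names
and the value are placeholders):

#include <zmq.h>

/* Asymmetric setup I have in mind: cap the queue on the publisher side, but
   keep the disk-writing subscriber unlimited (names and values are examples). */
static void set_asymmetric_hwm(void *pub, void *disk_sub)
{
    int snd_hwm = 1000;   /* finite send HWM per subscriber on the PUB socket */
    zmq_setsockopt(pub, ZMQ_SNDHWM, &snd_hwm, sizeof snd_hwm);

    int rcv_hwm = 0;      /* 0 = no limit on the SUB that writes to disk */
    zmq_setsockopt(disk_sub, ZMQ_RCVHWM, &rcv_hwm, sizeof rcv_hwm);
}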

Any ideas how I could performance-test this application? AFAIK I cannot
inspect the "fill level" of the sockets. How can I find out which
subscriber is the slow one?
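
One idea I had (not sure whether this is the right approach): tag every
message with a sequence number on the PUB side, so each subscriber can
report how far it lags behind, roughly like this:

#include <inttypes.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <zmq.h>

static _Atomic uint64_t seq;   /* last sequence number published */

/* Publisher side: prefix each payload with a sequence-number frame. */
static void publish(void *pub, const void *data, size_t len)
{
    uint64_t s = atomic_fetch_add(&seq, 1) + 1;
    zmq_send(pub, &s, sizeof s, ZMQ_SNDMORE);
    zmq_send(pub, data, len, 0);
}

/* Subscriber side: compare the sequence just read against the latest one
   published to estimate how many messages this subscriber is behind. */
static void on_receive(void *sub, const char *name)
{
    uint64_t s;
    char payload[2048];
    zmq_recv(sub, &s, sizeof s, 0);              /* frame 1: sequence number */
    zmq_recv(sub, payload, sizeof payload, 0);   /* frame 2: payload (may truncate) */

    uint64_t lag = atomic_load(&seq) - s;
    if (lag > 1000)
        fprintf(stderr, "%s lags by %" PRIu64 " messages\n", name, lag);
}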

Thanks a lot!

Best,
Jan