[zeromq-dev] Pub/Sub with crashing subscribers - possible memory leak?
Samuel Lucas Vaz de Mello
samuelmello at gmail.com
Mon Jul 1 20:40:24 CEST 2013
Hi,
I'm using ZeroMQ 3.2.2.
I have a simple PUB server that binds to an address and several SUB clients
that connect to it. Memory on the server appears to leak as clients
connect and disconnect.
Attached is a pair of programs that reproduce the behaviour (sketched below):
- The server just binds the socket and sleeps.
- The client connects, subscribes, and then immediately closes the socket and exits.
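In case the attachments get stripped, here is roughly what the two programs do
(the tcp://*:5556 endpoint and port are just placeholders, not necessarily what
the attached files use):

/* pub.c - sketch */
#include <zmq.h>
#include <unistd.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();
    void *pub = zmq_socket (ctx, ZMQ_PUB);
    zmq_bind (pub, "tcp://*:5556");   /* placeholder endpoint */
    sleep (3600);                     /* just sit here while clients come and go */
    zmq_close (pub);
    zmq_ctx_destroy (ctx);
    return 0;
}

/* sub.c - sketch */
#include <zmq.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();
    void *sub = zmq_socket (ctx, ZMQ_SUB);
    zmq_connect (sub, "tcp://localhost:5556");
    zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);  /* subscribe to everything */
    zmq_close (sub);                             /* close right away and exit */
    zmq_ctx_destroy (ctx);
    return 0;
}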
I run the client in a loop, as fast as possible:
$ while true ; do ./sub ; done
And watch the memory usage in the server:
$ ps -eF
UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
1001      7335 14795  0  4965  1340   1 15:07 pts/3    00:00:00 ./pub
...(1000 clients later)...
$ ps -eF
UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
1001      7335 14795  1  9180 10564   1 15:07 pts/3    00:00:00 ./pub
Resident memory went from 1336 KB to 10564 KB and does not go back down, even
after all connections have left the TIME_WAIT state.
I also ran the publisher under valgrind:
- When I close the socket and the context, all memory is freed as expected.
- If I interrupt the server while no clients are connected (and none are in
TIME_WAIT), I get:
==10397== LEAK SUMMARY:
==10397== definitely lost: 0 bytes in 0 blocks
==10397== indirectly lost: 0 bytes in 0 blocks
==10397== possibly lost: 633,095 bytes in 1,043 blocks
==10397== still reachable: 16,624,316 bytes in 6,110 blocks
==10397== suppressed: 0 bytes in 0 blocks
The number of possibly-lost blocks and still-reachable bytes roughly doubles
when I double the number of clients. The most frequent stack trace among the
possibly-lost blocks is:
==19506== 57,400 bytes in 7 blocks are possibly lost in loss record 33 of 42
==19506== at 0x4024F20: malloc (vg_replace_malloc.c:236)
==19506== by 0x405E8BE: zmq::pipepair(zmq::object_t**, zmq::pipe_t**, int*, bool*) (yqueue.hpp:54)
==19506== by 0x4063355: zmq::session_base_t::process_attach(zmq::i_engine*) (session_base.cpp:309)
==19506== by 0x4059B76: zmq::object_t::process_command(zmq::command_t&) (object.cpp:86)
==19506== by 0x40524FE: zmq::io_thread_t::in_event() (io_thread.cpp:75)
==19506== by 0x4050A08: zmq::epoll_t::loop() (epoll.cpp:162)
==19506== by 0x4071DC5: thread_routine (thread.cpp:83)
==19506== by 0x41FA96D: start_thread (pthread_create.c:300)
==19506== by 0x41603FD: clone (clone.S:130)
It seems that the server keeps some sort of state for clients that have
connected and disconnected. Is there any way to disable this behaviour and
avoid running out of memory on long-running servers?
Thanks,
- Samuel
Attachments:
- pub.c (461 bytes): <https://lists.zeromq.org/pipermail/zeromq-dev/attachments/20130701/df5f26e4/attachment.c>
- sub.c (700 bytes): <https://lists.zeromq.org/pipermail/zeromq-dev/attachments/20130701/df5f26e4/attachment-0001.c>