[zeromq-dev] FD resource leak when opening and closing an inproc socket
Itay Chamiel
itay.chamiel at orcam.com
Tue Dec 29 13:12:08 CET 2020
Hi, we have a client thread that is supposed to receive data from a parent
thread, then disconnect when done. We've noticed that when the socket is
closed, an eventfd (file descriptor) is leaked, so one FD leaks every time
such a client is created and destroyed - even if no data is transferred.
Here is a quick C++ program to reproduce it. I'm running on an Ubuntu 18
desktop with ZMQ 4.1.6 or 4.3.3. This loop is expected to run forever, but it
crashes a little after 1000 iterations due to too many open files.
#include "zmq.hpp"

int main() {
    zmq::context_t context;
    while (1) {
        zmq::socket_t* socket = new zmq::socket_t(context, ZMQ_SUB);
        socket->connect("inproc://some_name");
        delete socket;
    }
}
The problem does not occur for other transports (e.g. replace inproc with
ipc and the leak disappears).
In case you want it without the C++ bindings, here is a slightly more
elaborate C example which also sets LINGER to zero (with no effect) and
displays the number of FDs in use by the process each iteration.
#include <zmq.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    void* ctx = zmq_ctx_new();
    for (int i = 0; ; i++) {
        void* zmq_sock = zmq_socket(ctx, ZMQ_SUB);
        if (!zmq_sock) {
            printf("fail after %d iterations: %s\n", i, zmq_strerror(errno));
            exit(-1);
        }
        int linger = 0;
        int rc = zmq_setsockopt(zmq_sock, ZMQ_LINGER, &linger, sizeof(linger));
        // this doesn't actually help
        if (rc != 0) exit(-1);
        rc = zmq_connect(zmq_sock, "inproc://some_name");
        if (rc != 0) exit(-1);
        rc = zmq_close(zmq_sock);
        if (rc != 0) exit(-1);
        // show the number of used FDs
        char cmd[100];
        sprintf(cmd, "ls -l -v /proc/%d/fd | wc -l", (int)getpid());
        system(cmd);
        // test is hard to abort without a sleep
        usleep(100 * 1000);
    }
}
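As an aside, the FD count can also be read in-process instead of shelling out
with system(). A minimal Linux-only sketch (my own helper, not from the repro
above) that counts the entries in /proc/self/fd:

    #include <dirent.h>
    #include <stdio.h>

    /* Count this process's open file descriptors by listing /proc/self/fd.
       Linux-specific; the returned count includes the fd used by the
       directory stream itself while it is open. */
    static int count_open_fds(void) {
        DIR *dir = opendir("/proc/self/fd");
        if (!dir)
            return -1;
        int count = 0;
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            if (entry->d_name[0] != '.')  /* skip "." and ".." */
                count++;
        }
        closedir(dir);
        return count;
    }

    int main(void) {
        printf("open fds: %d\n", count_open_fds());
        return 0;
    }

Calling count_open_fds() once per loop iteration makes the leak visible as a
steadily increasing number without forking a shell each time.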
Thank you,
Itay Chamiel, OrCam