[zeromq-dev] fifo idea
sustrik at 250bpm.com
Tue Dec 22 08:38:20 CET 2009
>>> On another note, I have been considering adding a fifo/pipe
>>> transport. This would be so that a zmq socket could, say, open
>>> a named fifo and receive shutdown, statistics, admin etc. requests on it.
>> How would that work? Can you give a scenario?
> At the moment I just ctrl-c my servers to stop them.
> Elsewhere I have worked, we implemented a simple
> named-fifo reader (or pipe) and the server would open it.
> There were then some administrative tools which would
> simply echo commands into the pipe, to switch up logging etc.
> For this to work with zmq I would have to write a small utility, say
> zmq_echoer, which took the path and the command.
> Then on the server:
> zmq::context_t ctx (1, 1);
> zmq::socket_t rep (ctx, ZMQ_REP);
> // ... bind rep to the (hypothetical) fifo endpoint here
> while (running) {
>     zmq::message_t msg;
>     rep.recv (&msg);
>     // let's say the first byte of the message indicates the type:
>     // app = 0 or control = 255
>     unsigned char type = *(unsigned char *) msg.data ();
>     switch (type) {
>     case 0:
>         // handle app message
>         break;
>     case 255: {
>         const char *cmd = ((char *) msg.data ()) + 1;
>         if (strncmp (cmd, "SHUTDOWN", 8) == 0) {
>             running = false;
>             zmq::message_t response;
>             rep.send (response);
>             sleep (2); // yuck
>             std::cout << "shutting down" << std::endl;
>             abort (); // very friendly
>         }
>         break;
>     }
>     }
> }
> does that explain it?
Some kind of IPC mechanism has already been proposed a couple of times.
Given that the same functionality can be achieved using the TCP transport
and the loopback interface, the main goal seems to be performance. In your
example performance isn't that interesting; however, one can imagine
scenarios where large amounts of data are passed between processes on
the same box.
I have no idea what the performance difference between a pipe and a
loopback TCP connection is; still, my feeling is that the biggest
performance gain for an IPC transport would be the possibility of
transferring the data via shared memory, i.e. no copying is involved,
at least for large messages.