[zeromq-dev] Shared memory transport (IPC)
ismael.luceno at gmail.com
Mon Nov 16 23:11:08 CET 2009
Martin Sustrik wrote:
> Hi Ismael,
> As discussed off-list, here's my idea of how a shared memory
> transport can be implemented:
> - first of all the two processes have to have a notification mechanism
> between them; let's say we'll use a TCP connection for this (in the
> future we may experiment with alternatives such as pipe or eventfd); to
> implement a transport that speaks TCP to another process, simply copy
> the existing 0mq TCP "engine" (bp_tcp_engine_t) and name it something
> different (shmem_engine_t or so); you'll also have to modify
> engine_factory.cpp so that when a specific prefix is used ("shmem")
> shmem_engine_t is created; thus, the address parameter to bind/connect
> functions would look something like this: "shmem://127.0.0.1:5555"
> - as shared memory won't work across machines anyway, we can assume
> the interface to use is always loopback (127.0.0.1) and simplify the
> address to "shmem://5555"; additionally, we can think about how to
> transfer the port number via shared memory and thus get rid of the TCP settings
> - add the name of the file associated with the shared memory to the
> address, say: "shmem://home/sustrik/myfile"
Thanks for the tips. Before, I was thinking about doing a wrapper that
resembles what you mention, but...
> - on the "bind" side of the transport, the shared memory identified by
> the filename should be created; on the "connect" side you should attach to
> it; have a look at your shmem_engine_t definition - there's "readbuf"
> and "writebuf" there - with original TCP transport the blocks are
> allocated using malloc - with shmem they can be the actual shared memory
Yes, we would simply have a shared buffer and a malloc-like function.
> - existing implementation will store the data into writebuf when sending
> messages, and extract messages from the readbuf when receiving
> - all you have to do is to add a notification mechanism so that when
> writebuf is full, the application sends a hint to the peer to start
> extracting data from the buffer; the other way round, when the receiving
> side is done reading the buffer it should notify the sender that it can
> refill the buffer with new data; to send the notification, use the TCP
> connection established at the beginning
> - to accomplish the previous point, note that shmem_engine_t::in_event
> will be called when there are data to receive on the TCP connection and
> shmem_engine_t::out_event when data can be written to the connection.
> That's about it. However, feel free to discuss the matter on the mailing list.
> Btw, I recall there were other people asking for an IPC mechanism so you
> may get some help with the implementation.
What I'm concerned about is ZeroMQ's API. It does not resemble anything
known by humans :P, nor does it do things better, IMHO. It complicates
the development of backends. In particular, I'm interested in a UNIX
message queue backend, which should be far more efficient than TCP sockets.
In the wrapper I sorted out the problem by implementing a simple
translation layer that imitates the more familiar UNIX MQ API.
So if Ben and Ritesh agree that ZeroMQ is the way to go, we may need to
extend the API to mitigate some of the constraints of the
aforementioned transport backends.
OTOH, ZeroMQ provides too much flexibility and visibility. Using shared
memory will require some extra constraints on the programmer's side or we
could open a can of worms. Performance would degrade steadily as the
programmer makes assumptions about the memory and its layout, copying
would defeat the purpose of shm, and sharing a queue would not work well
at all. Thus, it doesn't fit the current ZeroMQ model, I think. But
ZeroMQ could be improved to make it fit, of course :).