[zeromq-dev] Shared memory transport (IPC)

Martin Sustrik sustrik at 250bpm.com
Tue Nov 17 10:03:35 CET 2009

Hi Ismael,

> What I'm concerned about is ZeroMQ's API. It does not resemble anything
> known by humans :P, nor does it do it better IMHO. It complicates the
> development of backends. Particularly, I'm interested in a UNIX message
> queue backend, which should be far more efficient than TCP sockets.
> In the wrapper I sorted out the problem by implementing a simple
> translation layer, which imitates the more familiar UNIX MQ API. But
> that's suboptimal.

The 0MQ/1.0 API evolved from the AMQP specification (www.amqp.org), but 
the problem of an unfamiliar API became evident over time. Because of 
that we've switched to a socket-like API in 0MQ/2.0. That one should 
take almost no time to master for anyone familiar with POSIX sockets. 
Have a look at the new API here:


> So if Ben and Ritesh agree that ZeroMQ is the way to go, we may need to
> extend the API to mitigate some of the constraints of those
> aforementioned transport backends.
> OTOH, ZeroMQ provides too much flexibility and visibility. Using shared
> memory will require some extra constraints from the programmer side or we
> could open a can of worms. Performance would degrade steadily as the
> programmer makes assumptions about the memory and its layout, copying
> would defeat the shm purpose, and sharing a queue would not work well at
> all.

It depends on the size of the messages, IMO. If what you need to 
transfer is a message 1 GB long, it would obviously make sense to 
allocate a shmem block of the appropriate size beforehand, fill it in 
and pass it to the peer process.

With small messages like market data, the above would be suboptimal, as 
pinning down the memory would consume much more time than the message 
transfer itself. As 0MQ is designed primarily for small messages, it's 
designed in such a way that it needs no memory pinning while the 
application is doing its work. Instead, it pins a single buffer at the 
beginning and copies the messages into that buffer. Note that the same 
trick is used in high-performance networking stacks (Open-MX and the 
like) when doing zero-copy: zero-copy is deliberately avoided for small 
batches of data to get rid of the memory-pinning overhead.

> Thus, it doesn't fit the current ZeroMQ model, I think. But ZeroMQ
> could be improved to make it fit, of course :).

