[zeromq-dev] Shared memory transport (IPC)
sustrik at 250bpm.com
Fri Nov 20 14:36:47 CET 2009
Ismael Luceno wrote:
> Martin Sustrik wrote:
>> It depends on the size of the messages IMO. If what you need to transfer
>> is a message 1GB long, it would obviously make sense to allocate a shmem
>> block of the appropriate size beforehand, fill it in and pass it to the
>> peer process.
>> With small messages like market data the above would be suboptimal, as
>> pinning down the memory would consume much more time than the message
>> transfer itself. As 0MQ is designed primarily for small messages, it's
>> designed in such a way that it needs no memory pinning while the
>> application is doing its work. Instead it pins a single buffer at the
>> beginning and copies the messages into that buffer. Note that the same
>> trick is used in high-performance networking stacks (Open-MX and the
>> like) when doing zero-copy. Zero-copy is deliberately avoided for small
>> batches of data to get rid of the memory-pinning overhead.
> I didn't explain it well. What we have is exactly that: big data
> blocks transferred between processes. So I want to just allocate a big
> shared buffer and use it to pass the bulk of the data.
> The actual messages would take just a few words, so they could be, for
> example, passed very efficiently by Solaris doors (especially on SPARC).
> So, separating the data from the trigger message is a must.
> What Ritesh and Ben want is to integrate this model into ZMQ, so that it
> works transparently and can be used without any major effort on the
> programmer's side.
Ok, I see.
The problem with the existing API is that the message is created first and
sent afterwards:

message_t msg (1000);

At the time the message is created there's no way to find out whether it'll
be sent via the shared memory transport later on, and thus it's not obvious
whether to allocate shared or local memory for the message.
Thus I believe that wrapping 0MQ, as you've suggested originally, is the
right way to go.