[zeromq-dev] Re: IPC (again)

Martin Sustrik sustrik at 250bpm.com
Tue Jan 5 15:10:03 CET 2010

Erik Rigtorp wrote:

> Is it correct that VSMs are messages that get copied within 0mq and
> other messages get passed by reference?

Yes. That's right.
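For readers following along, a simplified sketch of the distinction (illustrative only; the struct layout, names, and the 30-byte threshold are assumptions, not the real zmq_msg_t): a very small message (VSM) stores its payload inline and is copied on transfer, while a large message carries a reference-counted pointer and is passed by reference.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Illustrative sketch, not the actual 0MQ message layout.
enum { max_vsm_size = 30 };  // threshold is an assumption for this sketch

struct shared_content {
    int refcnt;          // would be an atomic counter in a real implementation
    size_t size;
    unsigned char data[1];  // allocated with extra room for the payload
};

struct msg_t {
    bool is_vsm;
    size_t vsm_size;
    unsigned char vsm_data[max_vsm_size];  // inline storage, copied around
    shared_content *content;               // used when !is_vsm
};

void msg_init (msg_t &m, const void *buf, size_t size) {
    if (size <= max_vsm_size) {
        m.is_vsm = true;
        m.vsm_size = size;
        memcpy (m.vsm_data, buf, size);    // VSM: payload copied inline
        m.content = NULL;
    } else {
        m.is_vsm = false;
        m.vsm_size = 0;
        m.content = (shared_content*)
            malloc (sizeof (shared_content) + size);
        m.content->refcnt = 1;
        m.content->size = size;
        memcpy (m.content->data, buf, size);
    }
}

// "Passing" a VSM copies the inline bytes; a large message just
// bumps the reference count and shares the same buffer.
msg_t msg_pass (msg_t &m) {
    msg_t copy = m;                // bitwise copy covers the inline bytes
    if (!m.is_vsm)
        copy.content->refcnt++;    // shared, passed by reference
    return copy;
}
```

The copy-on-write and zero-copy ideas discussed below would build on exactly this split: the inline path stays cheap to copy, and only the pointer path needs shared-memory allocation and reference counting.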

> Then we would implement copy-on-write for VSMs and zero-copy reading
> in the client. For large messages we would need a shared memory area
> which is mapped to all processes, a lock-free allocator and a
> reference counting garbage collector. Doable but complex and it has
> its own performance bottlenecks. To utilize this we also need to
> change/complement the API (i think).

Right. Now let's break it into small and gradual steps and find out 
which is the first one to implement...

> I think we should forget about implementing zero-copy to begin with;
> for small messages it's not necessarily better anyway.
>>> I'll try to find/write a good c++ lock-free ringbuffer template.
>> I would start with yqueue_t and ypipe_t. We've spent a lot of time making
>> them as efficient as possible. The only thing needed is to split each of
>> them into read & write part. This shouldn't be that complex. Both classes
>> have variables accessed exclusively by reader and variables accessed
>> exclusively by the writer. Then there are shared variables manipulated by
>> atomic operations that should reside in the shared memory.
> They look efficient, but assume you can allocate new memory. For shm
> it's simpler to assume a fixed-length buffer. Also I think the
> code is not correct. On PPC you are not guaranteed that the memcpy()
> is committed before you update the atomic_ptr. You need to add a memory
> barrier.

Note that on PPC atomic_ptr is implemented using a mutex. Locking and 
unlocking the mutex should act as the barrier.

In general, the code works now due to various implementation-specific 
assumptions like the one above. In the future it should be rewritten to 
use explicit memory barriers.
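To make the ordering problem concrete, here is a minimal sketch of the publish pattern in question, written with C++11 std::atomic (which postdates this thread; the names and the single-slot layout are illustrative, not 0MQ code). The point is that the payload copy must be ordered before the pointer update that publishes it:

```cpp
#include <atomic>
#include <cassert>
#include <cstring>

// Illustrative single-producer/single-consumer handoff.
static unsigned char slot[64];
static std::atomic<unsigned char*> ready (nullptr);

void writer (const void *msg, size_t size) {
    memcpy (slot, msg, size);            // 1: copy the payload into place
    // 2: publish the pointer. The release store is the explicit barrier:
    // it forbids the store from being reordered before the memcpy. On PPC
    // this emits a barrier instruction (e.g. lwsync); on x86 it is a plain
    // store, since x86 already orders stores.
    ready.store (slot, std::memory_order_release);
}

const unsigned char *reader () {
    // The matching acquire load guarantees that once the reader sees the
    // non-null pointer, it also sees the bytes written before the release.
    return ready.load (std::memory_order_acquire);
}
```

Without the release/acquire pair (i.e. with a plain store on a weakly ordered machine like PPC), the reader could observe the updated pointer before the copied bytes, which is exactly the bug Erik describes. A mutex-based atomic_ptr avoids it only because the mutex operations happen to imply the same barriers.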
