[zeromq-dev] zmq_recvmsg jni

Artem Vysochyn artem.vysochyn at gmail.com
Thu Oct 3 08:10:09 CEST 2013


hi Trevor,

Thanks for posting this. Very nice.

Still curious: does that mean using a DirectByteBuffer only makes
sense if one uses ZMQ together with the Disruptor?

Second question:
>>...
>> ByteBuffer bb = ByteBuffer.allocateDirect(4096);
>> socket.recvByteBuffer(bb, 0);
>>...
I had also been thinking about such a construction, but my question is
-- what do we win here? That is, one still has to make a call to jzmq:
byte[] buf = socket.recv(flag). So we make a JNI call to .recv() and get
a copy of the byte[] from native code, and then we copy that byte[] into
the DBB. I'm not judging, just trying to figure out for myself what one
gains by using a DBB for the receive operation.
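
To make the two paths concrete, here is how I picture them (socket is a
jzmq-style socket; recvByteBuffer is from your example, the rest is just
a sketch). Or is the point that the copy in path 2 happens natively,
with no byte[] at all?

import java.nio.ByteBuffer;

// Path 1: classic receive -- two copies. JNI copies the native message
// into a fresh byte[] on the Java heap, and we then copy that byte[]
// into the direct buffer ourselves.
byte[] buf = socket.recv(0);
ByteBuffer bb = ByteBuffer.allocateDirect(4096);
bb.put(buf);
bb.flip();

// Path 2: direct receive -- one copy. The native side writes the
// message payload straight into the memory behind the DirectByteBuffer
// (via GetDirectBufferAddress), so no byte[] is ever created on the
// Java heap.
ByteBuffer direct = ByteBuffer.allocateDirect(4096);
socket.recvByteBuffer(direct, 0);
direct.flip();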


Thanks in advance.
-artemv

2013/10/2 Trevor Bernard <trevor.bernard at gmail.com>:
> It's fairly straightforward.
>
> // Send side: serialize into a preallocated direct buffer, then hand
> // it to the socket.
> ByteBuffer bb = ByteBuffer.allocateDirect(4096);
> serialize(bb, someObj);
> bb.flip();
> socket.sendByteBuffer(bb, 0);
> ...
> // Receive side: the native code fills the direct buffer, then we
> // flip and read it back.
> ByteBuffer bb = ByteBuffer.allocateDirect(4096);
> socket.recvByteBuffer(bb, 0);
> bb.flip();
> Object deserialized = deserialize(bb);
>
> In my use case, I use the LMAX Disruptor in conjunction with zeromq. I
> preallocate the RingBuffer with off-heap ByteBuffers to reduce GC
> pressure and to create a pool. This is also more efficient, since it
> involves one less copy and the ByteBuffers are reused.
>
> Something like this:
>
> public static final EventFactory<ByteBuffer> EVENT_FACTORY =
>     new EventFactory<ByteBuffer>() {
>         // Each ring-buffer slot gets its own preallocated off-heap buffer.
>         public ByteBuffer newInstance() {
>             return ByteBuffer.allocateDirect(4096);
>         }
>     };
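>
> The ring buffer wiring then looks roughly like this (assuming
> Disruptor 3.x; the size and names are illustrative):
>
> import com.lmax.disruptor.RingBuffer;
>
> RingBuffer<ByteBuffer> ring =
>     RingBuffer.createSingleProducer(EVENT_FACTORY, 1024);
>
> // Producer: claim a slot and receive straight into its preallocated
> // off-heap buffer -- the same ByteBuffer is reused on every lap.
> long seq = ring.next();
> try {
>     ByteBuffer bb = ring.get(seq);
>     bb.clear();
>     socket.recvByteBuffer(bb, 0);
>     bb.flip();
> } finally {
>     ring.publish(seq);
> }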
>
> On Wed, Oct 2, 2013 at 12:30 PM, Artem Vysochyn
> <artem.vysochyn at gmail.com> wrote:
>> hi Trevor,
>>
>> Can you past example(s) of how you use DirectByteBuffer in jzmq? pls.
>>
>> 2013/10/2 Trevor Bernard <trevor.bernard at gmail.com>:
>>> I do that very thing with a different method signature.
>>>
>>> https://github.com/trevorbernard/zmq-jni/blob/master/src/main/c%2B%2B/zmq.cpp#L169
>>>
>>> I've had some success with preallocating a bunch of DirectByteBuffers
>>> off heap. It definitely helps with performance and GC. If I use
>>> byte[], it generates far too much garbage and creates too much GC
>>> pressure.
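>>>
>>> The pool itself can be as simple as this sketch (class name and
>>> sizing are illustrative, not from my actual code):
>>>
>>> import java.nio.ByteBuffer;
>>> import java.util.ArrayDeque;
>>>
>>> final class DirectBufferPool {
>>>     private final ArrayDeque<ByteBuffer> free = new ArrayDeque<ByteBuffer>();
>>>
>>>     DirectBufferPool(int count, int capacity) {
>>>         for (int i = 0; i < count; i++) {
>>>             free.push(ByteBuffer.allocateDirect(capacity));
>>>         }
>>>     }
>>>
>>>     ByteBuffer acquire() {
>>>         return free.pop(); // assumes the pool is sized so it never empties
>>>     }
>>>
>>>     void release(ByteBuffer bb) {
>>>         bb.clear();
>>>         free.push(bb);
>>>     }
>>> }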
>>>
>>> On Tue, Oct 1, 2013 at 9:33 PM, Radu Braniste <rbraniste at gmail.com> wrote:
>>>> Personally, I'm using the direct buffer as a memory arena (I preallocate a
>>>> pool) and avoid one allocation, like so:
>>>>
>>>>     zmq_msg_t msg;
>>>>     zmq_msg_init (&msg);
>>>>     zmq_recvmsg ((void *) socket, &msg, flags);
>>>>     int size = zmq_msg_size (&msg);
>>>>     // might check for out of bounds here and return false
>>>>     memcpy ((char *) env->GetDirectBufferAddress (arena) + used,
>>>>             zmq_msg_data (&msg), size);
>>>>     used += size;
>>>>     zmq_msg_close (&msg);
>>>>     return used;
>>>>
>>>> The best, of course, would be to use this technique with a possible
>>>> "zmq_msg_init_data", as Peter suggested, and completely avoid the extra memcpy.
>>>>
>>>>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev



More information about the zeromq-dev mailing list