[zeromq-dev] Potential Memory Leak in JZMQ 2.x.x running on top of ZeroMQ 3.2.3 for Multicast Publishing?

Chinmay Nerurkar chinmay.nerurkar at fusionts.com
Wed Jun 19 00:23:54 CEST 2013


Parag Patel <Parag.Patel <at> fusionts.com> writes:

> This fixed our issue.  Thanks
>
> From: zeromq-dev-bounces <at> lists.zeromq.org [mailto:zeromq-dev-bounces <at> lists.zeromq.org]
> On Behalf Of Steven McCoy
> Sent: Tuesday, June 18, 2013 9:36 AM
> To: ZeroMQ development list
> Subject: Re: [zeromq-dev] Potential Memory Leak in JZMQ 2.x.x running on top of ZeroMQ 3.2.3 for Multicast Publishing?
>
> On 18 June 2013 07:43, Parag Patel <Parag.Patel <at> fusionts.com> wrote:
>
> Does the window size default to 10 seconds only for Java?
>
> The default is in options.cpp:
>
> https://github.com/zeromq/libzmq/blob/master/src/options.cpp#L31
>
> ZMQ_RECOVERY_IVL: Set multicast recovery interval
> The ZMQ_RECOVERY_IVL option shall set the recovery interval for multicast
> transports using the specified socket. The recovery interval determines the
> maximum time in milliseconds that a receiver can be absent from a multicast
> group before unrecoverable data loss will occur.
> Exercise care when setting large recovery intervals as the data needed for
> recovery will be held in memory. For example, a 1 minute recovery interval
> at a data rate of 1Gbps requires a 7GB in-memory buffer.
>
> Option value type:        int
> Option value unit:        milliseconds
> Default value:            10000
> Applicable socket types:  all, when using multicast transports
>
>
> The pure Java implementation does not yet interface with the pure Java PGM
> implementation to do anything different.
>
> --
> Steve-o
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev <at> lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
> 

Hi, 

What effect do the ZMQ_RECOVERY_IVL and ZMQ_RATE options have on a ZMQ SUB
socket connected over PGM? I am using JZMQ 2.2.0 with ZeroMQ 3.2.4 and
OpenPGM 5.1.118.

I have a ZMQ PUB socket publisher multicasting 100-byte messages on a
channel, and a receiver with a ZMQ SUB socket consuming data on that channel.
I have set ZMQ_RECOVERY_IVL = 1000 ms on both the publisher and the
subscriber. If I instead set ZMQ_RECOVERY_IVL = 0 or 1 ms on the subscriber,
I barely receive any messages.

If I set ZMQ_RECOVERY_IVL = 1000 ms and ZMQ_RATE = 200000 kb/s on both
publisher and subscriber, and publish these 100-byte messages at ~350,000
messages per second, the subscriber simply hangs/pauses after receiving a
few messages. After around 30-40 seconds it recovers from the pause and
starts receiving messages again. A thread dump does not show any blocked
threads.

If I set ZMQ_RATE = 160000 kb/s and run the same scenario, limiting the
message rate to ~180,000 messages per second, I get no pauses and receive
all published messages reliably.
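For reference, applying the buffer-size arithmetic from the man-page excerpt
quoted above (the sender holds roughly ZMQ_RATE x ZMQ_RECOVERY_IVL worth of
data for repair) to the numbers in this setup looks like the sketch below.
The RecoveryBuffer class and its method names are purely illustrative and not
part of the JZMQ or libzmq API:

```java
// Illustrative arithmetic only -- "RecoveryBuffer" is not a JZMQ/libzmq class.
public class RecoveryBuffer {

    // Bytes the PGM sender holds for repair: ZMQ_RATE (kb/s) x ZMQ_RECOVERY_IVL (ms).
    static long bufferBytes(long rateKbps, long ivlMs) {
        return rateKbps * 1000L / 8L * ivlMs / 1000L;
    }

    // Payload bandwidth of a message stream, ignoring PGM/ZMQ framing overhead.
    static long payloadBitsPerSec(long msgsPerSec, long msgBytes) {
        return msgsPerSec * msgBytes * 8L;
    }

    public static void main(String[] args) {
        // Man-page example: 1 Gbps for 1 minute -> 7.5e9 bytes (~7 GiB).
        System.out.println(bufferBytes(1_000_000L, 60_000L));   // 7500000000

        // ZMQ_RATE = 200000 kb/s, ZMQ_RECOVERY_IVL = 1000 ms -> 25 MB of repair data.
        System.out.println(bufferBytes(200_000L, 1_000L));      // 25000000

        // 350,000 msg/s x 100 bytes = 280 Mbit/s payload, above a 200000 kb/s cap;
        // 180,000 msg/s x 100 bytes = 144 Mbit/s, below a 160000 kb/s cap.
        System.out.println(payloadBitsPerSec(350_000L, 100L));  // 280000000
        System.out.println(payloadBitsPerSec(180_000L, 100L));  // 144000000
    }
}
```

If this arithmetic is right, the ~350,000 msg/s case exceeds the configured
ZMQ_RATE cap (payload alone), so the rate limiter could plausibly be throttling
the stream during the observed pauses.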

Does anyone have any ideas about this?

Thanks.

Chinmay

