[zeromq-dev] Timing issues

Apostolis Xekoukoulotakis xekoukou at gmail.com
Thu Jan 16 16:16:38 CET 2014


From what Martin Sustrik writes, batching happens only when a backlog has
already formed, and it should in fact help the queue drain faster and thus
reduce latency. But batching itself adds some latency too, so you may want
to disable it.
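
A rough sketch of the shape of that scheme (illustrative only, not the
actual libzmq code): the writer thread drains everything queued so far
into a single write, so batches only form when the producer is outpacing
the socket:

    /* Illustrative sketch only, not libzmq source: a writer thread that
       drains the whole backlog into one write().  Batches form only when
       messages arrive faster than the socket can take them, which is why
       this style of batching shrinks the queue instead of growing it. */
    #include <pthread.h>
    #include <string.h>
    #include <unistd.h>

    #define QSIZE 1024
    #define MSGSZ 64

    static char   q [QSIZE][MSGSZ];    /* queued messages (fixed size here) */
    static size_t head, tail;          /* ring buffer indices */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    static void *writer (void *arg)
    {
        int fd = *(int *) arg;
        static char buf [QSIZE * MSGSZ];
        for (;;) {
            pthread_mutex_lock (&lock);
            while (head == tail)
                pthread_cond_wait (&nonempty, &lock);
            size_t n = 0;
            while (head != tail) {     /* drain the whole backlog */
                memcpy (buf + n * MSGSZ, q [head], MSGSZ);
                head = (head + 1) % QSIZE;
                n++;
            }
            pthread_mutex_unlock (&lock);
            write (fd, buf, n * MSGSZ);   /* one syscall for n messages */
        }
        return NULL;
    }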

In my opinion, the only other option is to drop messages.
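
If dropping is acceptable, I believe recent libzmq versions (4.0-ish,
check your version's zmq_setsockopt man page) expose ZMQ_CONFLATE, which
keeps only the most recent message in the queue:

    /* Keep only the latest message instead of queueing a backlog
       (available on recent libzmq versions, I believe 4.0 onwards). */
    #include <zmq.h>

    static int keep_only_latest (void *sock)
    {
        int conflate = 1;
        return zmq_setsockopt (sock, ZMQ_CONFLATE,
                               &conflate, sizeof conflate);
    }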


2014/1/16 Lindley French <lindleyf at gmail.com>

> Okay, fair enough. Not every use-case calls for throughput to trump
> latency, though, so it would be good to make that an optional feature.
>
>
> On Thu, Jan 16, 2014 at 9:57 AM, Apostolis Xekoukoulotakis <
> xekoukou at gmail.com> wrote:
>
>> http://www.aosabook.org/en/zeromq.html
>> On Jan 16, 2014 4:55 PM, "Apostolis Xekoukoulotakis" <xekoukou at gmail.com>
>> wrote:
>>
>>> To reduce calls to the other layers and improve performance.
>>> On Jan 16, 2014 4:53 PM, "Lindley French" <lindleyf at gmail.com> wrote:
>>>
>>>> Maybe I'm missing something, but what purpose is there in disabling
>>>> Nagle's algorithm, only to then re-implement the same concept one layer
>>>> higher?
>>>>
>>>>
>>>> On Thu, Jan 16, 2014 at 9:15 AM, Charles Remes <lists at chuckremes.com> wrote:
>>>>
>>>>> Nagle’s algo is already disabled in the codebase (you can confirm that
>>>>> with a quick grep). I think what Bruno is referring to is that zeromq
>>>>> batches small messages into larger ones before sending. This improves
>>>>> throughput at the cost of latency as expected.
>>>>>
>>>>> Check out the “performance” section of the FAQ for an explanation:
>>>>> http://zeromq.org/area:faq
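>>>>>
>>>>> For the record, turning Nagle off is the standard one-liner below;
>>>>> grepping the tree for TCP_NODELAY should land you on the spot where
>>>>> libzmq does the equivalent for every TCP connection it opens:
>>>>>
>>>>>     /* The standard way to disable Nagle on a connected TCP socket;
>>>>>        a grep for TCP_NODELAY in the source should turn up a call
>>>>>        like this one. */
>>>>>     #include <netinet/in.h>
>>>>>     #include <netinet/tcp.h>
>>>>>     #include <sys/socket.h>
>>>>>
>>>>>     static int disable_nagle (int fd)
>>>>>     {
>>>>>         int one = 1;
>>>>>         return setsockopt (fd, IPPROTO_TCP, TCP_NODELAY,
>>>>>                            &one, sizeof one);
>>>>>     }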
>>>>>
>>>>>
>>>>> On Jan 16, 2014, at 7:04 AM, Lindley French <lindleyf at gmail.com>
>>>>> wrote:
>>>>>
>>>>> Ah, that would explain it, yes. It would be great to have a way of
>>>>> disabling Nagle's algorithm (TCP_NODELAY sockopt).
>>>>>
>>>>>
>>>>> On Thu, Jan 16, 2014 at 4:24 AM, Bruno D. Rodrigues <
>>>>> bruno.rodrigues at litux.org> wrote:
>>>>>
>>>>>> Without looking at the code, I assume ØMQ is not trying to send each
>>>>>> individual message as its own TCP segment but instead, as the name
>>>>>> implies, queues messages so it can batch them together for throughput.
>>>>>>
>>>>>> This would mean data hits the wire either when some internal buffer
>>>>>> fills or after a timeout, which from your numbers looks like ~100ms.
>>>>>>
>>>>>> On the other hand, I can’t see any setsockopt to configure this
>>>>>> possible timeout value.
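>>>>>>
>>>>>> To be concrete about the kind of scheme I'm guessing at (purely
>>>>>> hypothetical, I haven't checked the code): accumulate outgoing
>>>>>> messages in a buffer and flush either when it fills or when a
>>>>>> timeout expires:
>>>>>>
>>>>>>     /* Purely hypothetical sketch, not libzmq code: flush on a full
>>>>>>        buffer or an expired timeout.  Assumes len <= BUFSIZE; a real
>>>>>>        implementation would also need a timer to flush an idle
>>>>>>        buffer rather than waiting for the next send. */
>>>>>>     #include <string.h>
>>>>>>     #include <time.h>
>>>>>>     #include <unistd.h>
>>>>>>
>>>>>>     #define BUFSIZE  8192
>>>>>>     #define FLUSH_MS 100
>>>>>>
>>>>>>     static char   outbuf [BUFSIZE];
>>>>>>     static size_t used;
>>>>>>     static double last_flush;
>>>>>>
>>>>>>     static double now_ms (void)
>>>>>>     {
>>>>>>         struct timespec ts;
>>>>>>         clock_gettime (CLOCK_MONOTONIC, &ts);
>>>>>>         return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
>>>>>>     }
>>>>>>
>>>>>>     static void buffered_send (int fd, const char *msg, size_t len)
>>>>>>     {
>>>>>>         /* flush if the next message would overflow the buffer or
>>>>>>            the timeout has expired since the last flush */
>>>>>>         if (used + len > BUFSIZE ||
>>>>>>             now_ms () - last_flush > FLUSH_MS) {
>>>>>>             write (fd, outbuf, used);
>>>>>>             used = 0;
>>>>>>             last_flush = now_ms ();
>>>>>>         }
>>>>>>         memcpy (outbuf + used, msg, len);
>>>>>>         used += len;
>>>>>>     }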
>>>>>>
>>>>>> Any feedback from someone else before I have time to look at the
>>>>>> code?
>>>>>>
>>>>>> On Jan 15, 2014, at 16:20, Lindley French <lindleyf at gmail.com> wrote:
>>>>>>
>>>>>> > I have a test case in which I'm communicating between two threads
>>>>>> using zmq sockets. The fact that the sockets are in the same process is an
>>>>>> artifact of the test, not the real use-case, so I have a TCP connection
>>>>>> between them.
>>>>>> >
>>>>>> > What I'm observing is that a lot of the time, it takes ~100
>>>>>> milliseconds between delivery of a message to the sending socket and
>>>>>> arrival of that message on the receiving socket. Other times (less
>>>>>> frequently) it is a matter of microseconds. I imagine this must be due to
>>>>>> some kernel or thread scheduling weirdness, but I can't rule out that it
>>>>>> might be due to something in 0MQ.
>>>>>> >
>>>>>> > If I follow the TCP socket write with one or more UDP writes using
>>>>>> Boost.Asio, the 100 millisecond delay invariably occurs for the ZMQ TCP
>>>>>> message but the UDP messages arrive almost instantly (before the TCP
>>>>>> message).
>>>>>> >
>>>>>> > My design requires that the TCP message arrive before *most* of the
>>>>>> UDP messages. It's fine if some come through first (UDP is faster, after
>>>>>> all; that's why I'm using it), but a delay this big is more than I
>>>>>> counted on, and it's concerning. I don't know if it would apply across a
>>>>>> real network or if it's an artifact of testing in a single process.
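>>>>>> >
>>>>>> > In case it helps, here is a minimal sketch along the lines of my
>>>>>> > test (trimmed down; ZMQ_PAIR and the endpoint are just placeholders):
>>>>>> > two threads in one process, one TCP hop, timestamp carried in the
>>>>>> > message:
>>>>>> >
>>>>>> >     /* Minimal sketch of the test case: build with
>>>>>> >        cc test.c -lzmq -lpthread */
>>>>>> >     #include <zmq.h>
>>>>>> >     #include <pthread.h>
>>>>>> >     #include <stdio.h>
>>>>>> >     #include <time.h>
>>>>>> >
>>>>>> >     static double now_ms (void)
>>>>>> >     {
>>>>>> >         struct timespec ts;
>>>>>> >         clock_gettime (CLOCK_MONOTONIC, &ts);
>>>>>> >         return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
>>>>>> >     }
>>>>>> >
>>>>>> >     static void *receiver (void *ctx)
>>>>>> >     {
>>>>>> >         void *s = zmq_socket (ctx, ZMQ_PAIR);
>>>>>> >         zmq_connect (s, "tcp://127.0.0.1:5555");
>>>>>> >         for (int i = 0; i < 100; i++) {
>>>>>> >             double sent;
>>>>>> >             zmq_recv (s, &sent, sizeof sent, 0);
>>>>>> >             printf ("latency: %.3f ms\n", now_ms () - sent);
>>>>>> >         }
>>>>>> >         zmq_close (s);
>>>>>> >         return NULL;
>>>>>> >     }
>>>>>> >
>>>>>> >     int main (void)
>>>>>> >     {
>>>>>> >         void *ctx = zmq_ctx_new ();
>>>>>> >         void *s = zmq_socket (ctx, ZMQ_PAIR);
>>>>>> >         zmq_bind (s, "tcp://127.0.0.1:5555");
>>>>>> >         pthread_t t;
>>>>>> >         pthread_create (&t, NULL, receiver, ctx);
>>>>>> >         for (int i = 0; i < 100; i++) {
>>>>>> >             double sent = now_ms ();
>>>>>> >             zmq_send (s, &sent, sizeof sent, 0);
>>>>>> >             /* pause so each message travels on its own */
>>>>>> >             struct timespec nap = { 0, 10 * 1000 * 1000 };
>>>>>> >             nanosleep (&nap, NULL);
>>>>>> >         }
>>>>>> >         pthread_join (t, NULL);
>>>>>> >         zmq_close (s);
>>>>>> >         zmq_ctx_destroy (ctx);
>>>>>> >         return 0;
>>>>>> >     }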
>>>>>> >
>>>>>> > Any insights?


-- 


Sincerely yours,

     Apostolis Xekoukoulotakis