[zeromq-dev] Timing issues

Bruno D. Rodrigues bruno.rodrigues at litux.org
Fri Jan 17 13:24:48 CET 2014


Had a look at the code and, of course, Pieter is correct. :)

Under which circumstances is the 100ms measured? Are the TCP sockets already connected, or is it just the first message after starting the app?
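Something along these lines (a quick, untested sketch; it assumes a PAIR socket over tcp://127.0.0.1 and the plain libzmq C API, none of which is taken from Lindley's actual test) would show whether only the first message pays the ~100ms, or whether it keeps happening once the connection is up:

// Untested sketch: measure per-message latency over loopback TCP,
// reporting the first message separately from the steady state.
#include <zmq.h>
#include <chrono>
#include <cstdio>

int main() {
    void *ctx = zmq_ctx_new();
    void *rx = zmq_socket(ctx, ZMQ_PAIR);
    void *tx = zmq_socket(ctx, ZMQ_PAIR);
    zmq_bind(rx, "tcp://127.0.0.1:5555");
    zmq_connect(tx, "tcp://127.0.0.1:5555");   // asynchronous; connects in the background

    char buf[64];
    for (int i = 0; i < 10; ++i) {
        auto start = std::chrono::steady_clock::now();
        zmq_send(tx, "ping", 4, 0);            // only queues; the I/O thread does the TCP write
        zmq_recv(rx, buf, sizeof(buf), 0);     // blocks until the message arrives over loopback
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("msg %d: %lld us%s\n", i, (long long)us,
                    i == 0 ? " (includes connect/handshake)" : "");
    }

    zmq_close(tx);
    zmq_close(rx);
    zmq_ctx_term(ctx);
    return 0;
}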


On Jan 17, 2014, at 8:39, Pieter Hintjens <ph at imatix.com> wrote:

> ZeroMQ only does opportunistic batching, afaik. This means it does
> _not_ wait for small messages to arrive before sending them as a
> batch. Rather, when it starts sending a message it will pull as many
> messages as it can (up to some limit, around 500) off the queue at
> once. The effect should be to reduce latency.
> 
> If there are delays they come from some other source.
> 
> -Pieter
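To make "opportunistic" concrete: when the transport is ready, the writer drains whatever messages are already queued, up to the limit, into one write; it never starts a timer and waits for more to show up. A rough conceptual sketch, not libzmq source:

// Conceptual sketch only, not libzmq code: drain what is already queued,
// up to a cap, the moment the transport is ready to send.
#include <cstddef>
#include <deque>
#include <string>

std::string next_batch(std::deque<std::string> &queue, std::size_t max_msgs) {
    std::string batch;
    std::size_t n = 0;
    while (!queue.empty() && n < max_msgs) {   // take only what is there; no waiting
        batch += queue.front();                // concatenate into a single write
        queue.pop_front();
        ++n;
    }
    return batch;                              // may well contain just one message
}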
> 
> On Thu, Jan 16, 2014 at 3:56 PM, Charles Remes <chuck at chuckremes.com> wrote:
>> To save the cost of additional stack traversals; that's where things get
>> slow. Plus, the batching algorithm could (potentially) be tuned for
>> different workloads and exposed as a setsockopt for additional flexibility
>> (though no one has done this yet).
>> 
>> 
>> On Jan 16, 2014, at 8:53 AM, Lindley French <lindleyf at gmail.com> wrote:
>> 
>> Maybe I'm missing something, but what purpose is there in disabling Nagle's
>> algorithm, only to then re-implement the same concept one layer higher?
>> 
>> 
>> On Thu, Jan 16, 2014 at 9:15 AM, Charles Remes <lists at chuckremes.com> wrote:
>>> 
>>> Nagle’s algorithm is already disabled in the codebase (you can confirm
>>> that with a quick grep). I think what Bruno is referring to is that
>>> zeromq batches small messages together into larger writes before sending
>>> them. This improves throughput at the cost of some latency, as expected.
>>> 
>>> Check out the “performance” section of the FAQ for an explanation:
>>> http://zeromq.org/area:faq
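For reference, that grep should turn up the standard TCP_NODELAY socket option; disabling Nagle on a connected TCP socket looks like this, and libzmq does the equivalent internally on the TCP connections it creates:

// Standard way to turn off Nagle's algorithm on a TCP socket;
// libzmq applies the equivalent setting on its own TCP connections.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int fd) {
    int flag = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}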
>>> 
>>> 
>>> On Jan 16, 2014, at 7:04 AM, Lindley French <lindleyf at gmail.com> wrote:
>>> 
>>> Ah, that would explain it, yes. It would be great to have a way of
>>> disabling Nagle's algorithm (TCP_NODELAY sockopt).
>>> 
>>> 
>>> On Thu, Jan 16, 2014 at 4:24 AM, Bruno D. Rodrigues
>>> <bruno.rodrigues at litux.org> wrote:
>>>> 
>>>> Without looking at the code, I assume ØMQ does not try to send each
>>>> individual message as a separate TCP segment but instead, as the name
>>>> implies, queues messages so it can batch them together for better
>>>> performance.
>>>> 
>>>> That would mean data only hits the wire when some internal buffer
>>>> fills, or after a timeout, which looks like it is around 100ms.
>>>> 
>>>> On the other hand, I can’t see any setsockopt to configure this
>>>> possible timeout value.
>>>> 
>>>> Any feedback from someone else before I have time to look at the code?
>>>> 
>>>> On Jan 15, 2014, at 16:20, Lindley French <lindleyf at gmail.com> wrote:
>>>> 
>>>>> I have a test case in which I'm communicating between two threads using
>>>>> zmq sockets. The fact that the sockets are in the same process is an
>>>>> artifact of the test, not the real use-case, so I have a TCP connection
>>>>> between them.
>>>>> 
>>>>> What I'm observing is that a lot of the time, it takes ~100
>>>>> milliseconds between delivery of a message to the sending socket and arrival
>>>>> of that message on the receiving socket. Other times (less frequently) it is
>>>>> a matter of microseconds. I imagine this must be due to some kernel or
>>>>> thread scheduling weirdness, but I can't rule out that it might be due to
>>>>> something in 0MQ.
>>>>> 
>>>>> If I follow the TCP socket write with one or more UDP writes using
>>>>> Boost.Asio, the 100 millisecond delay invariably occurs for the ZMQ TCP
>>>>> message but the UDP messages arrive almost instantly (before the TCP
>>>>> message).
>>>>> 
>>>>> My design requires that the TCP message arrive before *most* of the UDP
>>>>> messages. It's fine if some come through first (UDP is faster, after all;
>>>>> that's why I'm using it), but a delay this big is more than I counted
>>>>> on, and it's concerning. I don't know if it would apply across a real
>>>>> network or if it's an artifact of testing in a single process.
>>>>> 
>>>>> Any insights?
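Reading that description, the sending side might look roughly like the sketch below; the ports, endpoints and socket types are guesses rather than details from the actual test. One thing it highlights: zmq_send() only hands the message to zeromq's background I/O thread, while a blocking Asio send_to() pushes the datagram into the kernel right away, so some UDP packets beating the TCP message is expected even without any 100ms stall.

// Guessed sketch of the sending side: the zmq message is queued for the
// background I/O thread, while the UDP datagram is written immediately
// from the calling thread.
#include <zmq.h>
#include <boost/asio.hpp>

int main() {
    using boost::asio::ip::udp;

    void *ctx = zmq_ctx_new();
    void *tcp_out = zmq_socket(ctx, ZMQ_PAIR);
    zmq_connect(tcp_out, "tcp://127.0.0.1:5555");

    boost::asio::io_service io;
    udp::socket udp_out(io, udp::endpoint(udp::v4(), 0));
    udp::endpoint dest(boost::asio::ip::address::from_string("127.0.0.1"), 5556);

    zmq_send(tcp_out, "tcp-msg", 7, 0);                        // queued for the I/O thread
    udp_out.send_to(boost::asio::buffer("udp-msg", 7), dest);  // written to the kernel now

    zmq_close(tcp_out);
    zmq_ctx_term(ctx);
    return 0;
}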
