[zeromq-dev] inproc message delivery
Thomas Rodgers
rodgert at twrodgers.com
Thu Apr 6 15:32:07 CEST 2017
That seems about right. As noted in other replies, inproc is still waiting
on an eventfd to signal, as is the case with the ipc transport. Also, as noted
by others, there are some newer socket types (caveat: I haven't used them) that
use different synchronization primitives (a mutex), which *might* yield better
in-process throughput. The mutex implementation will spin a bit in a CAS loop
(with back-off) before resorting to a system call to wait on a contended lock.
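Purely for illustration (untested on my end), here is a minimal sketch of what
those newer client/server sockets look like over inproc with the plain C API.
It assumes a libzmq built with the draft APIs enabled, and the endpoint name
is made up:

    #define ZMQ_BUILD_DRAFT_API   /* ZMQ_CLIENT/ZMQ_SERVER are draft in 4.2.x */
    #include <zmq.h>
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();
        void *server = zmq_socket (ctx, ZMQ_SERVER);
        void *client = zmq_socket (ctx, ZMQ_CLIENT);
        int rc = zmq_bind (server, "inproc://cs-demo");
        assert (rc == 0);
        rc = zmq_connect (client, "inproc://cs-demo");
        assert (rc == 0);

        /* client -> server; thread-safe sockets carry single-part messages */
        zmq_send (client, "ping", 4, 0);

        /* server receives and notes which peer the message came from */
        zmq_msg_t request;
        zmq_msg_init (&request);
        zmq_msg_recv (&request, server, 0);
        uint32_t routing_id = zmq_msg_routing_id (&request);
        zmq_msg_close (&request);

        /* the reply is routed back to that peer via its routing id */
        zmq_msg_t reply;
        zmq_msg_init_size (&reply, 4);
        memcpy (zmq_msg_data (&reply), "pong", 4);
        zmq_msg_set_routing_id (&reply, routing_id);
        zmq_msg_send (&reply, server, 0);

        char buf [4];
        zmq_recv (client, buf, sizeof buf, 0);

        zmq_close (client);
        zmq_close (server);
        zmq_ctx_term (ctx);
        return 0;
    }

Whether that actually beats REQ/REP over inproc for your message sizes is
something you would have to measure.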
On Thu, Apr 6, 2017 at 7:49 AM Marlborough, Rick <RMarlborough at aaccorp.com>
wrote:
> Designation: Non-Export Controlled Content
>
> Tom;
>
> Thanx for your response. We have done most of our testing with 1000-byte
> messages. The two endpoint types are ZMQ_REQ and ZMQ_REP. Our loop is a
> simple tight loop, no sleeps. And yes, you are correct, I meant to say
> “inproc should be blazing fast compared to ipc”.
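>
> To give a concrete picture, the measurement loop is essentially the
> following (a simplified, single-threaded sketch rather than our actual
> harness, which runs each endpoint on its own thread; the endpoint name and
> iteration count are illustrative):
>
>     #include <zmq.h>
>     #include <assert.h>
>     #include <stdio.h>
>     #include <time.h>
>
>     int main (void)
>     {
>         const int iters = 100000;     /* illustrative iteration count */
>         char buf [1000] = {0};        /* 1000-byte payload            */
>
>         void *ctx = zmq_ctx_new ();
>         void *rep = zmq_socket (ctx, ZMQ_REP);
>         void *req = zmq_socket (ctx, ZMQ_REQ);
>         int rc = zmq_bind (rep, "inproc://lat-test");
>         assert (rc == 0);
>         rc = zmq_connect (req, "inproc://lat-test");
>         assert (rc == 0);
>
>         struct timespec t0, t1;
>         clock_gettime (CLOCK_MONOTONIC, &t0);
>         for (int i = 0; i < iters; i++) {
>             zmq_send (req, buf, sizeof buf, 0);   /* request           */
>             zmq_recv (rep, buf, sizeof buf, 0);   /* REP side: receive */
>             zmq_send (rep, buf, sizeof buf, 0);   /* REP side: reply   */
>             zmq_recv (req, buf, sizeof buf, 0);   /* reply received    */
>         }
>         clock_gettime (CLOCK_MONOTONIC, &t1);
>
>         double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
>         printf ("%.2f us per round trip\n", secs * 1e6 / iters);
>
>         zmq_close (req);
>         zmq_close (rep);
>         zmq_ctx_term (ctx);
>         return 0;
>     }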
>
>
>
> Rick
>
>
>
> *From:* zeromq-dev [mailto:zeromq-dev-bounces at lists.zeromq.org] *On
> Behalf Of *Thomas Rodgers
> *Sent:* Wednesday, April 05, 2017 5:44 PM
> *To:* ZeroMQ development list
> *Subject:* Re: [zeromq-dev] inproc message delivery
>
>
>
> I assume you meant to say 'inproc' would be blazing fast compared to 'ipc'?
>
>
>
> What message size(s) have you tried?
>
>
>
> I'm not convinced this is a reasonable expectation, particularly with
> smallish messages. The ipc transport is going to involve a few more kernel
> calls, but at the end of the day it's still memory-to-memory, and the
> inproc socket type still has most of the zmq socket machinery to traverse.
>
>
>
> Tom.
>
>
>
> On Wed, Apr 5, 2017 at 4:34 PM Marlborough, Rick <RMarlborough at aaccorp.com>
> wrote:
>
> Designation: Non-Export Controlled Content
>
> Folks;
>
> We are testing message delivery between two zmq sockets. We have done
> testing over the network between two nodes, on a single node, and within a
> single process. For the single-process case we use the inproc transport.
> When we examine the delivery times, we find that single-node ipc transport
> is better than the network. Surprisingly, inproc transport performance is
> virtually indistinguishable from ipc transport. I would expect ipc to be
> blazing fast in comparison. For the record, we are using ZeroMQ 4.2.2 on
> Red Hat 7, 64-bit. What should I expect using ipc transport?
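>
> For reference, switching between these cases only requires changing the
> endpoint string handed to zmq_bind/zmq_connect, along these lines (the
> address and paths here are illustrative):
>
>     #include <string.h>
>
>     /* The REQ/REP test code is identical in all three cases; the transport
>        is selected purely by the endpoint string. */
>     static const char *endpoint_for (const char *transport)
>     {
>         if (strcmp (transport, "tcp") == 0)
>             return "tcp://192.168.1.10:5555";  /* two nodes over the network */
>         if (strcmp (transport, "ipc") == 0)
>             return "ipc:///tmp/lat-test";      /* one node, two processes    */
>         return "inproc://lat-test";            /* one process, two threads   */
>     }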
>
>
>
> Thanx
>
> Rick
>
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev