[zeromq-dev] openpgm

Marlborough, Rick RMarlborough at aaccorp.com
Mon Apr 17 16:44:37 CEST 2017


Designation:  Non-Export Controlled Content  

Jim;
	I set the environment variables and reran my test, and I'm seeing something very curious. My publisher is running on 10.0.0.5; my subscriber is on 10.0.0.4. On the publisher side I see the following output from PGM...

Minor: OpenPGM 5.2.122 (1487) 2017-03-27 13:04:29 Linux x86_64
Minor: Detected 8 available 8 online 8 configured CPUs.
Minor: Using gettimeofday() timer.
Trace: Opening UDP encapsulated sockets.
Trace: Set socket sharing.
Trace: Request socket packet-info.
IP: 224.15.15.5, PORT: 5008                                                                    <-- my test log statement. This is the IP and port we are using
Trace: Assuming IP header size of 20 bytes
Trace: Assuming UDP header size of 8 bytes
Trace: Create transmit window.
Trace: Binding receive socket to INADDR_ANY
Trace: Binding send socket to interface index 0
Trace: Setting ODATA rate regulation to 125000000 bytes per second.
Trace: Join multicast group 224.15.15.5 on interface index 0
Trace: Multicast send interface set to 127.0.0.1 index 0                <-- sending multicast on loopback ??

When I flip this around, the send succeeds and the PGM output looks like this...

Trace: Join multicast group 224.15.15.4 on interface index 0
Trace: Multicast send interface set to 10.0.0.4 index 0             

ifconfig on these two machines yields the following (shown here for the 10.0.0.5 host)...

enp1s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.5  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::225:90ff:fe1b:fcaa  prefixlen 64  scopeid 0x20<link>
        ether 00:25:90:1b:fc:aa  txqueuelen 1000  (Ethernet)
        RX packets 8482413  bytes 10344786458 (9.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3821673  bytes 659374138 (628.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfbd60000-fbd7ffff  

enp1s0f1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 00:25:90:1b:fc:ab  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfbde0000-fbdfffff  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2212  bytes 179936 (175.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2212  bytes 179936 (175.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0 

Is there a way to control this behavior? 

Thanks

Rick


-----Original Message-----
From: zeromq-dev [mailto:zeromq-dev-bounces at lists.zeromq.org] On Behalf Of Jim Hague
Sent: Thursday, April 13, 2017 5:35 AM
To: ZeroMQ development list
Subject: Re: [zeromq-dev] openpgm

On 12/04/2017 19:08, Marlborough, Rick wrote:
>>Flipping the direction and failing suggests that one of the hosts may have not correctly determined the local network interfaces.
>>You can always explicitly set the local interfaces to bind to when creating the socket.
>>Later versions in GitHub have corrected some known interface issues with newer RHEL releases and special configurations.
>>However because of lack of testing no new official release has been pushed out yet.
>
>                 Thanks for responding. My follow on question is, would 
> this still be the case in light of the following additional information?
> 
> 1.       The 2 nodes in question are diskless clients that are spawned
> from the same image from the same server. The hardware they are 
> running on is identical.
> 
> 2.       Testing with basic multicast works in both directions.

Yes, it's still worth trying. And set the environment variables PGM_MIN_LOG_LEVEL=TRACE and PGM_LOG_MASK=0xffff.

Under some configurations the exact output interface that gets selected by OpenPGM in the release bundled with 0MQ is a bit unexpected. I had a fair bit of trouble with this myself.

It might be worth compiling up some of the OpenPGM example programs (in the OpenPGM source tree) and concentrating on sorting out OpenPGM configuration without 0MQ around.
-- 
Jim Hague - jim.hague at acm.org          Never trust a computer you can't
lift.
_______________________________________________
zeromq-dev mailing list
zeromq-dev at lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


