[zeromq-dev] EPGM and different subnets

Yamian Quintero yamian.quintero at route1.com
Fri Sep 29 22:26:18 CEST 2017


Hi folks, and thanks for accepting me on your list.

I'm trying to get 0mq to send messages over EPGM using PUB/SUB sockets, on the latest stable release, 4.2.2.
If both hosts are on the same subnet, the messages flow properly. If the hosts are on different subnets, no messages reach the second subnet (tcpdump shows no traffic at all on that multicast address there).
If I use the pgmsend/pgmrecv examples built with OpenPGM, the messages do reach the second host, using the same multicast address and port.
My code is just a slightly modified version of the weather server sample.
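
(For reference, this is roughly how I'm watching for that traffic on each host; eth0 stands in for the actual interface name:)

       tcpdump -n -i eth0 host 239.192.1.1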

This is my PUB server:

       void *pub = zmq_socket (ctx, ZMQ_PUB);
       const char *url = "epgm://192.168.215.99;239.192.1.1:5556";

       rc = zmq_bind (pub, url);
       if (rc != 0){
                cout << "Error: " << zmq_strerror (errno) << " while binding to: " << url << endl;
                exit(1);
       }
       msleep (SETTLE_TIME);

       srand(time(0));
       int zip;
       int temp;
       char *message = new char[255];
       while (loop){
                zip = 9999 + (rand()%5);
                temp = (rand()%215) - 80;
                memset(message, 0, 255);    // clear the buffer before formatting
                sprintf(message, "%d %d", zip, temp);
                send_str(pub, message);
                msleep(1000);
       }

       delete [] message;
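
(What's not shown around each snippet is just the usual context setup and teardown:)

       void *ctx = zmq_ctx_new ();
       // ... create and use the socket as above ...
       zmq_close (pub);
       zmq_ctx_term (ctx);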


This is my code for the SUB client:

       void *sub = zmq_socket (ctx, ZMQ_SUB);
       const char *url = "epgm://192.168.216.100;239.192.1.1:5556";

       rc = zmq_connect (sub, url);
       if (rc != 0){
                cout << "Error: " << zmq_strerror (errno) << " while connecting to: " << url << endl;
                exit(1);
       }
       rc = zmq_setsockopt (sub, ZMQ_SUBSCRIBE, TOPIC, strlen(TOPIC));
       if (rc != 0){
                cout << "Error: " << zmq_strerror (errno) << " while subscribing to: " << TOPIC << endl;
                exit(1);
       }

       for (int i=0; i<5; i++){
                print_str_recv(sub);
       }
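
(The helpers used above are thin wrappers in the spirit of the guide's zhelpers; my real versions are equivalent to these sketches:)

       // Sketches of the helper functions used in both snippets.
       static void send_str (void *socket, const char *string)
       {
                zmq_send (socket, string, strlen (string), 0);
       }

       static void print_str_recv (void *socket)
       {
                char buffer [256];
                int size = zmq_recv (socket, buffer, 255, 0);
                if (size == -1)
                         return;
                if (size > 255)
                         size = 255;    // zmq_recv reports the full size even when it truncates
                buffer [size] = '\0';
                cout << buffer << endl;
       }

       static void msleep (int msecs)
       {
                usleep (msecs * 1000);  // from <unistd.h>
       }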


The interesting part is what we observe on the router.

If I use pgmsend/pgmrecv from libpgm-5.2.122, then as soon as I start pgmrecv, this is the mroute as seen on the router:

DEV-SW-01#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.192.1.1), 00:00:12/00:02:57, RP 10.222.41.1, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan1, Forward/Sparse, 00:00:12/00:02:57

Vlan1 is where pgmrecv's host is connected.

When I send a message from the other host, the mroute gains an active source entry with the proper incoming interface:

DEV-SW-01#sh ip mroute
IP Multicast Routing Table
(flags legend omitted, same as above)

(*, 239.192.1.1), 00:02:29/stopped, RP 10.222.41.1, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan1, Forward/Sparse, 00:02:29/00:02:08

(192.168.216.100, 239.192.1.1), 00:00:08/00:02:51, flags: T
  Incoming interface: Vlan215, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan1, Forward/Sparse, 00:00:08/00:02:51

Vlan215 is where pgmsend's host is connected.


If I repeat the process using the 0mq-based code, something weird happens in the mroute.

When I start the PUB server, the mroute looks just as it did in the pgmrecv case:

DEV-SW-01#sh ip mroute
IP Multicast Routing Table
(flags legend omitted, same as above)

(*, 239.192.1.1), 00:00:14/00:02:50, RP 10.222.41.1, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan1, Forward/Sparse, 00:00:09/00:02:50

But when I subscribe with the SUB client, no active source entry for the publisher ever appears; instead, the subscriber's interface is simply added as another outgoing interface on the wildcard (*, G) route:

DEV-SW-01#sh ip mroute
IP Multicast Routing Table
(flags legend omitted, same as above)

(*, 239.192.1.1), 00:01:31/00:02:53, RP 10.222.41.1, flags: SJC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan215, Forward/Sparse, 00:00:06/00:02:53
    Vlan1, Forward/Sparse, 00:01:26/00:02:06


Maybe I'm missing something in the setup of the SUB client socket? Or maybe there is something in the underlying 0mq PGM receiver class that doesn't properly set the multicast parameters?
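
For what it's worth, the only PGM-related socket options I'm aware of are ZMQ_RATE, ZMQ_RECOVERY_IVL and ZMQ_MULTICAST_HOPS, and I'm not setting any of them, so everything runs with the defaults. If it matters, this is how I would raise the multicast TTL on the PUB side (untested; ZMQ_MULTICAST_HOPS defaults to 1):

       // Set before zmq_bind(); most options only apply to subsequent binds/connects.
       int hops = 16;
       rc = zmq_setsockopt (pub, ZMQ_MULTICAST_HOPS, &hops, sizeof (hops));
       if (rc != 0){
                cout << "Error: " << zmq_strerror (errno) << " while setting ZMQ_MULTICAST_HOPS" << endl;
                exit(1);
       }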


Thanks for any help provided,

Yamian.