[zeromq-dev] EPGM and different subnets
Yamian Quintero
yamian.quintero at route1.com
Tue Oct 3 21:12:01 CEST 2017
That was it. Thanks a lot.
-----Original Message-----
From: zeromq-dev [mailto:zeromq-dev-bounces at lists.zeromq.org] On Behalf Of Luca Boccassi
Sent: Tuesday, October 03, 2017 3:07 PM
To: ZeroMQ development list
Subject: Re: [zeromq-dev] EPGM and different subnets
On Tue, 2017-10-03 at 18:58 +0000, Yamian Quintero wrote:
> Luca,
>
> I have tried both your suggestions and their combinations:
>
> - zmq_connect, then zmq_setsockopt(ZMQ_MULTICAST_HOPS, 20)
> - zmq_bind, then zmq_setsockopt(ZMQ_MULTICAST_HOPS, 20) [I gave this
>   one a try since all the examples I find use bind for PUB/SUB]
>
> Neither seems to make a difference. What appears to be happening is
> that the mroute required for packets to reach the next hop is not
> being set up properly. Basically, the switch doesn't see the server's
> interface as an incoming interface.
>
> Thanks for any further suggestions,
> Yamian.
As the manpage mentions, socket options (apart from a select few) must be set before connecting/binding.
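For example, a minimal sketch of the working order, reusing the endpoint
and hop count from your mails (error handling omitted):

    int hops = 20;
    void *sub = zmq_socket (ctx, ZMQ_SUB);
    /* set the option while the socket is still unconnected... */
    rc = zmq_setsockopt (sub, ZMQ_MULTICAST_HOPS, &hops, sizeof (hops));
    /* ...and only then connect (or bind) */
    rc = zmq_connect (sub, "epgm://192.168.216.100;239.192.1.1:5556");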
>
> -----Original Message-----
> From: zeromq-dev [mailto:zeromq-dev-bounces at lists.zeromq.org] On
> Behalf Of Luca Boccassi
> Sent: Tuesday, October 03, 2017 2:01 PM
> To: ZeroMQ development list
> Subject: Re: [zeromq-dev] EPGM and different subnets
>
> On Mon, 2017-10-02 at 17:25 +0100, Luca Boccassi wrote:
> > On Fri, 2017-09-29 at 20:26 +0000, Yamian Quintero wrote:
> > > Hi folks, and thanks for accepting me to the list.
> > >
> > > I'm trying to get 0mq sending messages via EPGM using PUB/SUB
> > > sockets. I'm using the latest stable release 4.2.2.
> > > If both hosts are in the same subnet, the messages flow properly.
> > > If the hosts are in different subnets, no messages reach the
> > > second subnet (no traffic at all is seen in tcpdump on that
> > > multicast address).
> > > If I use the pgmsend/pgmrecv examples built from OpenPGM, the
> > > messages do reach the second host properly, using the same
> > > multicast address and port.
> > > My code is just a slightly modified version of the weather server
> > > sample.
> > >
> > > This is my PUB server:
> > >
> > > void *pub = zmq_socket (ctx, ZMQ_PUB);
> > > char *message_body = (char*)MESSAGE_BODY;
> > >
> > > rc = zmq_bind (pub, "epgm://192.168.215.99;239.192.1.1:5556");
> > > if (rc != 0){
> > >     cout << "Error: " << zmq_strerror (errno) << " while binding to: "
> > >          << config.connection_url << endl;
> > >     exit(1);
> > > }
> > > msleep (SETTLE_TIME);
> > >
> > > srand(time(0));
> > > int zip;
> > > int temp;
> > > char *message = new char[255];
> > > while (loop){
> > >     zip = 9999 + (rand()%5);
> > >     temp = (rand()%215) - 80;
> > >     memset(message, 0, 255);  /* clear the buffer before formatting */
> > >     sprintf(message, "%d %d", zip, temp);
> > >     send_str(pub, message);
> > >     msleep(1000);
> > > }
> > >
> > > delete [] message;
> > >
> > >
> > > This is my code for the SUB client:
> > >
> > > void *sub = zmq_socket (ctx, ZMQ_SUB);
> > >
> > > rc = zmq_connect (sub, "epgm://192.168.216.100;239.192.1.1:5556");
> > > if (rc != 0){
> > >     cout << "Error: " << zmq_strerror (errno) << " while connecting to: "
> > >          << "epgm://192.168.216.100;239.192.1.1:5556" << endl;
> > >     exit(1);
> > > }
> > > rc = zmq_setsockopt (sub, ZMQ_SUBSCRIBE, TOPIC, strlen(TOPIC));
> > > if (rc != 0){
> > >     cout << "Error: " << zmq_strerror (errno) << " while subscribing to: "
> > >          << TOPIC << endl;
> > >     exit(1);
> > > }
> > >
> > > for (int i=0; i<5; i++){
> > >     print_str_recv(sub);
> > > }
> > >
> > >
> > > The interesting part is what we observe on the router.
> > >
> > > If I use pgmsend/pgmrecv from libpgm-5.2.122, as soon as I start
> > > pgmrecv, this is the mroute as seen in the router:
> > >
> > > DEV-SW-01#sh ip mroute
> > > IP Multicast Routing Table
> > > Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
> > >        L - Local, P - Pruned, R - RP-bit set, F - Register flag,
> > >        T - SPT-bit set, J - Join SPT, M - MSDP created entry,
> > >        X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
> > >        U - URD, I - Received Source Specific Host Report,
> > >        Z - Multicast Tunnel, z - MDT-data group sender,
> > >        Y - Joined MDT-data group, y - Sending to MDT-data group,
> > >        V - RD & Vector, v - Vector
> > > Outgoing interface flags: H - Hardware switched, A - Assert winner
> > > Timers: Uptime/Expires
> > > Interface state: Interface, Next-Hop or VCD, State/Mode
> > >
> > > (*, 239.192.1.1), 00:00:12/00:02:57, RP 10.222.41.1, flags: SJC
> > >   Incoming interface: Null, RPF nbr 0.0.0.0
> > >   Outgoing interface list:
> > >     Vlan1, Forward/Sparse, 00:00:12/00:02:57
> > >
> > > Vlan1 is where the pgmrecv host is connected.
> > >
> > > When I send a message from the other host, the mroute does have an
> > > active source, with the proper incoming interface:
> > >
> > > DEV-SW-01#sh ip mroute
> > > IP Multicast Routing Table
> > > [flag legend as above]
> > >
> > > (*, 239.192.1.1), 00:02:29/stopped, RP 10.222.41.1, flags: SJC
> > >   Incoming interface: Null, RPF nbr 0.0.0.0
> > >   Outgoing interface list:
> > >     Vlan1, Forward/Sparse, 00:02:29/00:02:08
> > >
> > > (192.168.216.100, 239.192.1.1), 00:00:08/00:02:51, flags: T
> > >   Incoming interface: Vlan215, RPF nbr 0.0.0.0
> > >   Outgoing interface list:
> > >     Vlan1, Forward/Sparse, 00:00:08/00:02:51
> > >
> > > Vlan215 is where the pgmsend host is connected.
> > >
> > >
> > > If I repeat this process using the 0mq-based code, something
> > > weird happens in the mroute.
> > >
> > > When I start the PUB server, the mroute looks just as in the
> > > pgmrecv case:
> > >
> > > DEV-SW-01#sh ip mroute
> > > IP Multicast Routing Table
> > > [flag legend as above]
> > >
> > > (*, 239.192.1.1), 00:00:14/00:02:50, RP 10.222.41.1, flags: SJC
> > >   Incoming interface: Null, RPF nbr 0.0.0.0
> > >   Outgoing interface list:
> > >     Vlan1, Forward/Sparse, 00:00:09/00:02:50
> > >
> > > But when I subscribe with the SUB client, the mroute doesn't get
> > > a corresponding active source; instead, another outgoing interface
> > > is added to the wildcard route:
> > >
> > > DEV-SW-01#sh ip mroute
> > > IP Multicast Routing Table
> > > [flag legend as above]
> > >
> > > (*, 239.192.1.1), 00:01:31/00:02:53, RP 10.222.41.1, flags: SJC
> > >   Incoming interface: Null, RPF nbr 0.0.0.0
> > >   Outgoing interface list:
> > >     Vlan215, Forward/Sparse, 00:00:06/00:02:53
> > >     Vlan1, Forward/Sparse, 00:01:26/00:02:06
> > >
> > >
> > > Maybe I'm missing something in the setup of the SUB client socket?
> > > Or maybe there is something in the underlying 0mq PGM receiver
> > > class that doesn't properly set the multicast parameters?
> > >
> > >
> > > Thanks for any help provided,
> > >
> > > Yamian.
> >
> > I'm absolutely not familiar with the whole PGM/EPGM business, but
> > from what I can see, the examples all call zmq_connect rather than
> > zmq_bind.
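> >
> > E.g., a sketch for your PUB side, reusing the endpoint from your
> > mail:
> >
> >     rc = zmq_connect (pub, "epgm://192.168.215.99;239.192.1.1:5556");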
> >
> > If you want to compare the implementation with pgmsend/pgmrecv,
> > the setup is largely done in these two functions:
> >
> > https://github.com/zeromq/libzmq/blob/master/src/pgm_socket.cpp#L65
> > https://github.com/zeromq/libzmq/blob/master/src/pgm_socket.cpp#L117
>
> Also, as the manpage says, note that by default ZMQ_MULTICAST_HOPS is
> 1, so packets stay on the same network. Did you change that according
> to your network setup?
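>
> For instance, a minimal sketch of setting it (20 is just an example
> value; pick one large enough for your topology):
>
>     int hops = 20;
>     rc = zmq_setsockopt (pub, ZMQ_MULTICAST_HOPS, &hops, sizeof (hops));
>
>     /* optionally read it back to verify */
>     size_t len = sizeof (hops);
>     rc = zmq_getsockopt (pub, ZMQ_MULTICAST_HOPS, &hops, &len);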
>
> --
> Kind regards,
> Luca Boccassi
--
Kind regards,
Luca Boccassi