[zeromq-dev] PGM: multiple listener behavior?

Stuart Levy salevy at illinois.edu
Thu Aug 9 19:20:03 CEST 2012


Question below - don't we still run into the multiple-listener problem?

On 8/9/12 11:20 AM, Steven McCoy wrote:
> On 9 August 2012 11:33, CFK <cfkaran2 at gmail.com> wrote:
>   [...]
>
>       can you
>     connect multiple multicast addresses to the same interface in ZMQ?  My
>     thought is to make a pseudo-bus, where each member of a smallish set
>     (10-15 nodes) publishes on its own address, but subscribes to all
>     addresses in the set.
>
>
> This is certainly supported: it is called asymmetric multicast.
>
>      Is it possible to connect the same interface to
>     multiple multicast addresses, without causing any problems?  If so, is
>     the following acceptable code?
>     [...]
>
>     context = zmq.Context()
>
>     publisher = context.socket(zmq.PUB)
>     publisher.setsockopt(zmq.LINGER, 0)    # discard unsent messages on close
>     publisher.setsockopt(zmq.IDENTITY, "Producer - 192.168.1.10")
>     publisher.connect("epgm://192.168.1.10;239.168.1.10:5000")
>
>     subscriber = context.socket(zmq.SUB)
>     subscriber.setsockopt(zmq.SUBSCRIBE, '')
>     subscriber.setsockopt(zmq.IDENTITY, 'Consumer - 192.168.1.10')
>     subscriber.connect("epgm://192.168.1.10;239.168.1.11:5000")
>     subscriber.connect("epgm://192.168.1.10;239.168.1.12:5000")
>
>
>
> Using the same port will multiply the received messages on Linux due 
> to how socket routing is implemented.
>
> The alternative form is using a single PGM socket:
>
> subscriber.connect("epgm://192.168.1.10;239.168.1.11,239.168.1.12,239.168.1.13,239.168.1.10:5000")
>
> This receives on .11, .12, .13 and sends on .10.
>
> subscriber.connect("epgm://192.168.1.11;239.168.1.10,239.168.1.12,239.168.1.13,239.168.1.11:5000")
> subscriber.connect("epgm://192.168.1.12;239.168.1.10,239.168.1.11,239.168.1.13,239.168.1.12:5000")
> subscriber.connect("epgm://192.168.1.13;239.168.1.10,239.168.1.12,239.168.1.11,239.168.1.13:5000")
>
This looks great -- I'd be delighted to do it this way.  (But doesn't 
someone need to bind()?  Or can everyone connect() in this multicast 
situation?)
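
For concreteness, here's roughly what I picture each node doing, assuming
connect() really is enough on both sides -- a pyzmq sketch, with the
addresses and the multi-group connect string taken from your example above
(node 192.168.1.10 shown; the other nodes would swap in their own
interface and send group):

import zmq

context = zmq.Context()

# Publish on this node's own group, as in CFK's original snippet.
publisher = context.socket(zmq.PUB)
publisher.setsockopt(zmq.LINGER, 0)   # discard unsent messages on close
publisher.connect("epgm://192.168.1.10;239.168.1.10:5000")

# Subscribe with a single PGM socket; per your description, the socket
# receives on the first groups listed and the last group (this node's
# own) is the send group.
subscriber = context.socket(zmq.SUB)
subscriber.setsockopt(zmq.SUBSCRIBE, b"")
subscriber.connect(
    "epgm://192.168.1.10;239.168.1.11,239.168.1.12,239.168.1.13,239.168.1.10:5000")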

However... does it run the same risk?  When I did, on a single Linux 
host in a single ZeroMQ process, the C equivalent of

publisher.bind("epgm://192.168.1.10;239.168.1.10:5000")
subscriber.connect("epgm://192.168.1.10;239.168.1.10:5000")

... then netstat -nau showed two UDP sockets bound to 0.0.0.0:5000.  This 
looked like the sort of trouble you were describing earlier, Steven, so 
I assumed it wouldn't work reliably: one of those 0.0.0.0:5000 sockets 
was listening for NAKs destined for the publisher, and depending on the 
order in which the two sockets had been opened, the unicast NAKs might 
all go to the wrong listener and get tossed.  Am I misunderstanding this?

