[zeromq-dev] Bind, publish, unbind, repeat

Marcus Ottosson konstruktion at gmail.com
Fri Jun 20 15:17:03 CEST 2014


Hi Randall, thanks for your reply.

Here’s my interpretation of your suggestion. I haven’t had much experience
with Context.destroy(), but without it, messages would bunch up in each peer
and get received all at once in the scanner; it makes more sense to me to
only get a “live” feed without prior messages.

This works great and feels reliable, though let me know if destroying and
recreating the context at each cycle has any side-effects (an untested
alternative is sketched after peer.py below). The only disadvantage at this
point is that I can’t have two scanners running at the same time, which I’d
like to be able to do; see the sketch after scanner.py for one idea.

To delve deeper into my use-case: I’m connecting multiple running apps with
each other on a single workstation. It is important that each app broadcasts
its presence, but also that multiple apps can query the available peers
simultaneously, so that a new scanner never has to wait for another one to
finish. At the moment I think I can work around this last requirement, but
ideally I wouldn’t have the limitation.

*peer.py*

import zmq
import time
import uuid

unique_id = uuid.uuid4().get_urn()
if __name__ == '__main__':

    while True:
        print "I: Publishing {}".format(unique_id)

        context = zmq.Context()
        socket = context.socket(zmq.PUSH)
        socket.connect("tcp://127.0.0.1:5555")
        socket.send_multipart(['general', unique_id])
        socket.close()
        context.destroy()

        # Wait a second before broadcasting again
        time.sleep(1)
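
If recreating the context each cycle ever becomes a problem, an untested
alternative I’ve been considering is to keep a single long-lived context and
socket and instead cap the send queue with SNDHWM, so a scanner that starts
late doesn’t receive a backlog of stale broadcasts. This is only a sketch; the
socket options are standard pyzmq, but I haven’t verified the behaviour in
this exact setup.

*peer.py (alternative sketch)*

import zmq
import time
import uuid

unique_id = uuid.uuid4().get_urn()

if __name__ == '__main__':
    context = zmq.Context()
    socket = context.socket(zmq.PUSH)
    # Cap the send queue so unsent broadcasts don't pile up while no
    # scanner is listening.
    socket.setsockopt(zmq.SNDHWM, 1)
    socket.connect("tcp://127.0.0.1:5555")

    while True:
        print "I: Publishing {}".format(unique_id)
        try:
            # Don't block when the queue is full; just skip this cycle.
            socket.send_multipart(['general', unique_id], zmq.NOBLOCK)
        except zmq.Again:
            pass
        time.sleep(1)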

*scanner.py*

import zmq
if __name__ == '__main__':
    context = zmq.Context()

    socket = context.socket(zmq.PULL)
    socket.bind("tcp://*:5555")

    print "I: Scanning 5555"

    while True:
        message = socket.recv_multipart()
        print "I: Receiving: {}".format(message)



On 20 June 2014 13:13, Randall Nortman <rnzmq at wonderclown.net> wrote:

> You can't bind multiple peers to the same address, but you can
> connect multiple peers to the same address.  There is no law saying
> that the PUB side must be bound and the SUB side connected.  I do the
> reverse in my application.  The general rule is that the "long-lived"
> process should be the one that binds.  Short-lived processes should
> connect to it.
>
> So in your use case, if I understand it correctly, your single
> listener would bind a SUB socket.  The peers that want to advertise
> their presence would then connect their PUB to that bound address.
> Because PUB drops messages when disconnected, you then need to wait to
> ensure that the PUB is connected before sending anything, because this
> can take some time.
>
> However, there is a better way.  I don't know why you think you need
> to use PUB/SUB for this -- PUB/SUB is used for fan-out applications,
> but you don't seem to have a need to fan out (deliver to multiple
> subscribers all at once).  If I understand what you're trying to do,
> you are better off with REQ/REP, ROUTER/DEALER, or if you want to keep
> it simple, PUSH/PULL.  If your listener binds a PULL socket, then the
> peers can connect a PUSH to that address and immediately send their
> message without waiting.  This is because PUSH does not drop messages;
> it will block until it is connected.  But it will be less reliable
> than REQ/REP or ROUTER/DEALER because the message can still be lost if
> something goes wrong with the connection.  REQ/REP and ROUTER/DEALER
> allow you to send an acknowledgement to ensure that the message made
> it to its destination.
>
> On Fri, Jun 20, 2014 at 12:05:05PM +0100, Marcus Ottosson wrote:
> >    Hi Peter, thanks for your reply.
> >    This makes sense, but I'm attempting to run an arbitrary number of peers,
> >    and they can't all be bound to the same port, can they? I'm definitely no
> >    expert, so do go basic on me, that is ok.
> >    In a nutshell, I'm attempting to mimic real life, in which people
> >    continuously broadcast information about themselves - their location,
> >    appearance and so on - and to have an arbitrary number of spectators take
> >    part in this information flow, i.e. the equivalent of looking out onto a
> >    crowd of people.
> >    I would understand if it isn't straightforward, but do you think there's
> >    a way? I'd assume the traditional method would be to introduce a broker,
> >    but I'd rather not as it violates the real-life equivalent. I'd like each
> >    available peer (on a single workstation, this isn't to be run on a
> >    network) to continuously broadcast information about themselves, without
> >    a central broker.
> >    Best,
> >    Marcus
> >
> >    On 20 June 2014 11:57, Pieter Hintjens <ph at imatix.com> wrote:
> >
> >      You are rather abusing the tcp:// transport, as well as the pub/sub
> >      metaphor. This simply will not work without artificial waits.
> >
> >      Proper design would be: keep the socket bound, and send a message once
> >      a second. Subscribe, get one message, disconnect.
> >      On Fri, Jun 20, 2014 at 12:13 PM, Marcus Ottosson
> >      <konstruktion at gmail.com> wrote:
> >      > Hi all, first time on this mailing-list, let me know if anything is
> >      > amiss.
> >      >
> >      > I originally posted on Stackoverflow but had previously seen mentions
> >      > of otherwise posting on an active mailing-list so here goes.
> >      >
> >      > Why isn’t this working?
> >      >
> >      > peer.py
> >      >
> >      > import zmq
> >      > import time
> >      >
> >      > if __name__ == '__main__':
> >      >     context = zmq.Context()
> >      >
> >      >     socket = context.socket(zmq.PUB)
> >      >
> >      >     while True:
> >      >         print "I: Publishing"
> >      >
> >      >         socket.bind("tcp://*:5555")
> >      >         socket.send_multipart(['general', 'Unique peer information'])
> >      >         socket.unbind("tcp://*:5555")
> >      >
> >      >         time.sleep(1)
> >      >
> >      > scanner.py
> >      >
> >      > import zmq
> >      >
> >      > if __name__ == '__main__':
> >      >     context = zmq.Context()
> >      >
> >      >     socket = context.socket(zmq.SUB)
> >      >     socket.setsockopt(zmq.SUBSCRIBE, 'general')
> >      >     socket.connect("tcp://localhost:5555")
> >      >
> >      >     print "I: Scanning 5555"
> >      >
> >      >     while True:
> >      >         message = socket.recv_multipart()
> >      >         print "I: Receiving: {}".format(message)
> >      >
> >      > I’m attempting to broadcast multiple peers over the same port, on the
> >      > same computer, and have a single “scanner” listen in and “see” who is
> >      > available. Each peer would broadcast its contact information; a client
> >      > would then use the scanner to find out who is available and then use the
> >      > broadcast information to connect via a REQ/REP channel.
> >      >
> >      > To make this work, I’ve attempted to quickly bind a PUB socket,
> >      > broadcast information about a peer, and then unbind so as to let other
> >      > peers bind to the same socket, at a different time, and broadcast a
> >      > different set of identifiers for the next peer.
> >      >
> >      > I suspect that messages get discarded before being sent, due to the
> >      > unbinding (the same occurs with a call to close()), but I can’t figure
> >      > out how to empty the queue prior to closing the connection so as not to
> >      > discard any messages.
> >      >
> >      > Any ideas?
> >      >
> >      > Windows 8.1 x64
> >      > Python 2.7.7 x64
> >      > PyZMQ 4.0.4
> >      >
> >      > Thanks
> >      >
> >      > --
> >      > Marcus Ottosson
> >      > konstruktion at gmail.com
> >      >
> >
> >    --
> >    Marcus Ottosson
> >    konstruktion at gmail.com
> >
>
>



-- 
*Marcus Ottosson*
konstruktion at gmail.com