[zeromq-dev] Bind, publish, unbind, repeat

Marcus Ottosson konstruktion at gmail.com
Fri Jun 20 15:47:45 CEST 2014


To your first point, thanks for that, I'll keep that in mind.

As per your second point, I've ventured into name servers and brokers in
the past, but for this particular issue I'd really rather not have an
additional process running for the sort of basic functionality I'm after
here.

I haven't yet had time to look into zbeacon, but I suspected this is what it
was doing, and I rather like the idea of it. I was otherwise contemplating
bringing in a separate messaging middleware for this
particular issue. And again, this isn't for a network of computers, but for
multiple processes running on a single machine. Does this change your
suggestions at all?

Thanks so far, you've been really helpful.


On 20 June 2014 14:35, Randall Nortman <rnzmq at wonderclown.net> wrote:

> Two points:
>
> First, if you are creating and destroying the context more than once
> in the lifespan of your process, you are doing something wrong, or at
> least very counter to the intended usage pattern.  So whatever you do,
> don't do that and abandon any solution that relies on that, because
> you're just obscuring an underlying problem.  Identify and solve the
> underlying problem.
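>
> For illustration, a minimal sketch of what the peer loop from peer.py
> below might look like with a single long-lived context and socket (same
> PUSH/PULL layout, endpoint and id format as in your code):
>
>     import time
>     import uuid
>
>     import zmq
>
>     unique_id = uuid.uuid4().get_urn()
>
>     if __name__ == '__main__':
>         # One context and one socket for the whole lifetime of the process.
>         context = zmq.Context()
>         socket = context.socket(zmq.PUSH)
>         socket.connect("tcp://127.0.0.1:5555")
>
>         try:
>             while True:
>                 print "I: Publishing {}".format(unique_id)
>                 socket.send_multipart(['general', unique_id])
>                 time.sleep(1)
>         finally:
>             socket.close()
>             context.term()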
>
> Second, as far as I know ZMQ does not support a "brokerless"
> many-to-many messaging pattern.  You need a process (or a cluster of
> processes if you need redundancy) in the middle, and then you have a
> fan-in plus fan-out design: Messages fan into the cluster from
> services advertising their presence, and then fan out to clients
> wanting to know about services.
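>
> For what it's worth, a minimal sketch of such a middle process, assuming
> pyzmq's XSUB/XPUB socket types and the built-in zmq.proxy() forwarder
> (the port numbers are arbitrary):
>
>     import zmq
>
>     if __name__ == '__main__':
>         context = zmq.Context()
>
>         # Fan-in: services connect PUB sockets here and advertise themselves.
>         frontend = context.socket(zmq.XSUB)
>         frontend.bind("tcp://*:5555")
>
>         # Fan-out: clients connect SUB sockets here to hear about services.
>         backend = context.socket(zmq.XPUB)
>         backend.bind("tcp://*:5556")
>
>         # Blocks forever, relaying messages one way and subscriptions the other.
>         zmq.proxy(frontend, backend)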
>
> Alternately, you can have a name service (again, one process or a
> cluster) which persists for a long time, accepts the service
> advertisements, and remembers what's out there (ideally with a
> heartbeat of some sort so it knows when services go away).  Clients
> then query the name service (via req/rep or router/dealer) to find out
> the current set of live services.  This would be the better solution
> particularly if clients are short-lived, because otherwise they will
> need to wait for a while to hear a service broadcast.
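>
> A rough sketch of what such a name service might look like over a plain
> REP socket (the JSON message format and the 10-second liveness window
> here are invented for the example):
>
>     import json
>     import time
>
>     import zmq
>
>     LIVENESS = 10.0  # seconds a service stays listed after its last advert
>
>     if __name__ == '__main__':
>         context = zmq.Context()
>         socket = context.socket(zmq.REP)
>         socket.bind("tcp://*:5555")
>
>         services = {}  # name -> (endpoint, time of last advertisement)
>
>         while True:
>             request = json.loads(socket.recv())
>             now = time.time()
>
>             if request["cmd"] == "register":
>                 # Services re-register periodically; that doubles as a heartbeat.
>                 services[request["name"]] = (request["endpoint"], now)
>                 socket.send(json.dumps({"ok": True}))
>             else:  # "list": drop stale entries and return what's still alive
>                 live = dict((name, endpoint)
>                             for name, (endpoint, seen) in services.items()
>                             if now - seen < LIVENESS)
>                 socket.send(json.dumps(live))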
>
>
> Or, you can do what zbeacon (from CZMQ) does and bypass ZMQ for the
> discovery portion of your app.  Just broadcast plain UDP packets.
> This works if everything is on a LAN or you somehow bridge UDP over a
> WAN (not a good idea in general).
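>
> In case it helps, the discovery half of that idea fits in a few lines of
> plain Python with no ZMQ involved (the port number and payload handling
> here are arbitrary):
>
>     import socket
>
>     PORT = 9999
>
>     def broadcast(payload):
>         # Peers call this periodically to advertise themselves.
>         s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>         s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
>         s.sendto(payload, ("255.255.255.255", PORT))
>         s.close()
>
>     def listen():
>         # A scanner binds the port and hears every peer's broadcasts.
>         s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>         s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
>         s.bind(("", PORT))
>         while True:
>             data, addr = s.recvfrom(1024)
>             print "I: {} says {!r}".format(addr, data)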
>
>
> On Fri, Jun 20, 2014 at 02:17:03PM +0100, Marcus Ottosson wrote:
> >    Hi Randall, thanks for your reply.
> >
> >    Here’s my interpretation of your suggestion. I haven’t had much
> >    experience with Context.destroy(), but without it messages would bunch
> >    up in each peer and get received all at once in the scanner, and it
> >    makes more sense to me to only get a “live” feed without prior messages.
> >
> >    This works great and feels reliable, though let me know if destroying
> >    and recreating the context at each cycle has any side-effects. The only
> >    disadvantage at this point is that I couldn’t have two scanners running
> >    at the same time, which I’d like to.
> >
> >    To delve deeper into my use-case: I’m connecting multiple running apps
> >    on a single workstation, and it is important that each app broadcasts
> >    its presence, but also that multiple apps may query the available peers
> >    simultaneously, so as to not have to wait for other scanners before
> >    running a new one. At the moment, this last requirement is something I
> >    can work around, I think, but ideally I wouldn’t have this limitation.
> >
> >    peer.py
> >
> >  import zmq
> >  import time
> >  import uuid
> >
> >  unique_id = uuid.uuid4().get_urn()
> >
> >  if __name__ == '__main__':
> >
> >      while True:
> >          print "I: Publishing {}".format(unique_id)
> >
> >          context = zmq.Context()
> >          socket = context.socket(zmq.PUSH)
> >          socket.connect("tcp://127.0.0.1:5555")
> >          socket.send_multipart(['general', unique_id])
> >          socket.close()
> >          context.destroy()
> >
> >          # Wait a second before the next broadcast
> >          time.sleep(1)
> >
> >    scanner.py
> >
> >  import zmq
> >
> >  if __name__ == '__main__':
> >      context = zmq.Context()
> >
> >      socket = context.socket(zmq.PULL)
> >      socket.bind("tcp://*:5555")
> >
> >      print "I: Scanning 5555"
> >
> >      while True:
> >          message = socket.recv_multipart()
> >          print "I: Receiving: {}".format(message)
> >
> >
> >    On 20 June 2014 13:13, Randall Nortman <rnzmq at wonderclown.net> wrote:
> >
> >      You can't bind multiple peers to the same address, but you can
> >      connect multiple peers to the same address.  There is no law saying
> >      that the PUB side must be bound and the SUB side connected.  I do the
> >      reverse in my application.  The general rule is that the "long-lived"
> >      process should be the one that binds.  Short-lived processes should
> >      connect to it.
> >
> >      So in your use case, if I understand it correctly, your single
> >      listener would bind a SUB socket.  The peers that want to advertise
> >      their presence would then connect their PUB to that bound address.
> >      Because PUB drops messages when disconnected, you then need to wait
> >      to ensure that the PUB is connected before sending anything, because
> >      this can take some time.
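> >
> >      A minimal sketch of that arrangement, as two separate scripts (the
> >      sleep is just a crude way to give the PUB time to connect before it
> >      sends anything):
> >
> >        # listener: binds a SUB socket and waits for adverts
> >        import zmq
> >
> >        context = zmq.Context()
> >        sub = context.socket(zmq.SUB)
> >        sub.setsockopt(zmq.SUBSCRIBE, 'general')
> >        sub.bind("tcp://*:5555")
> >
> >        while True:
> >            print sub.recv_multipart()
> >
> >        # peer: connects a PUB socket to the bound address
> >        import time
> >        import zmq
> >
> >        context = zmq.Context()
> >        pub = context.socket(zmq.PUB)
> >        pub.connect("tcp://127.0.0.1:5555")
> >        time.sleep(1)  # PUB drops messages until the connection is up
> >        pub.send_multipart(['general', 'Unique peer information'])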
> >
> >      However, there is a better way.  I don't know why you think you need
> >      to use PUB/SUB for this -- PUB/SUB is used for fan-out applications,
> >      but you don't seem to have a need to fan out (deliver to multiple
> >      subscribers all at once).  If I understand what you're trying to do,
> >      you are better off with REQ/REP, ROUTER/DEALER, or if you want to
> >      keep it simple, PUSH/PULL.  If your listener binds a PULL socket,
> >      then the peers can connect a PUSH to that address and immediately
> >      send their message without waiting.  This is because PUSH does not
> >      drop messages; it will block until it is connected.  But it will be
> >      less reliable than REQ/REP or ROUTER/DEALER because the message can
> >      still be lost if something goes wrong with the connection.  REQ/REP
> >      and ROUTER/DEALER allow you to send an acknowledgement to ensure that
> >      the message made it to its destination.
> >      On Fri, Jun 20, 2014 at 12:05:05PM +0100, Marcus Ottosson wrote:
> >      >    Hi Peter, thanks for your reply.
> >      >    This makes sense, but I'm attempting to run an arbitrary number
> >      >    of peers, and they can't all be bound to the same port, can
> >      >    they? I'm definitely no expert, so do go basic on me, that is ok.
> >      >    In a nutshell, I'm attempting to mimic real life, in which
> >      >    people continuously broadcast information about themselves
> >      >    (their location, appearance and so on), and then have an
> >      >    arbitrary number of spectators take part in this information
> >      >    flow - i.e. the equivalent of looking out onto a crowd of people.
> >      >    I would understand if it isn't straightforward, but do you think
> >      >    there's a way? I'd assume the traditional method would be to
> >      >    introduce a broker, but I'd rather not, as it violates the
> >      >    real-life equivalent. I'd like each available peer (on a single
> >      >    workstation, this isn't to be run on a network) to continuously
> >      >    broadcast information about itself, without a central broker.
> >      >    Best,
> >      >    Marcus
> >      >
> >      >    On 20 June 2014 11:57, Pieter Hintjens <ph at imatix.com> wrote:
> >      >
> >      >      You are rather abusing the tcp:// transport, as well as the
> >      >      pub/sub metaphor. This simply will not work without artificial
> >      >      waits.
> >      >
> >      >      Proper design would be: keep the socket bound, and send a
> >      >      message once a second. Subscribe, get one message, disconnect.
> >      >      On Fri, Jun 20, 2014 at 12:13 PM, Marcus Ottosson
> >      >      <konstruktion at gmail.com> wrote:
> >      >      > Hi all, first time on this mailing-list, let me know if
> >      >      > anything is amiss.
> >      >      >
> >      >      > I originally posted on Stackoverflow, but I had previously
> >      >      > seen mentions of also posting on an active mailing-list, so
> >      >      > here goes.
> >      >      >
> >      >      > Why isn’t this working?
> >      >      >
> >      >      > peer.py
> >      >      >
> >      >      > import zmq
> >      >      > import time
> >      >      >
> >      >      > if __name__ == '__main__':
> >      >      >     context = zmq.Context()
> >      >      >
> >      >      >     socket = context.socket(zmq.PUB)
> >      >      >
> >      >      >     while True:
> >      >      >         print "I: Publishing"
> >      >      >
> >      >      >         socket.bind("tcp://*:5555")
> >      >      >         socket.send_multipart(['general', 'Unique peer information'])
> >      >      >         socket.unbind("tcp://*:5555")
> >      >      >
> >      >      >         time.sleep(1)
> >      >      >
> >      >      > scanner.py
> >      >      >
> >      >      > import zmq
> >      >      >
> >      >      > if __name__ == '__main__':
> >      >      >     context = zmq.Context()
> >      >      >
> >      >      >     socket = context.socket(zmq.SUB)
> >      >      >     socket.setsockopt(zmq.SUBSCRIBE, 'general')
> >      >      >     socket.connect("tcp://localhost:5555")
> >      >      >
> >      >      >     print "I: Scanning 5555"
> >      >      >
> >      >      >     while True:
> >      >      >         message = socket.recv_multipart()
> >      >      >         print "I: Receiving: {}".format(message)
> >      >      >
> >      >      > I’m attempting to broadcast multiple peers over the same
> >      >      > port, on the same computer, and have a single “scanner”
> >      >      > listen in and “see” who is available. Each peer would
> >      >      > broadcast its contact information; a client would then use
> >      >      > the scanner to find out who is available, and then use the
> >      >      > broadcast information to connect via a REQ/REP channel.
> >      >      >
> >      >      > To make this work, I’ve attempted to quickly bind a PUB
> >      >      > socket, broadcast information about a peer, and then unbind
> >      >      > so as to let other peers bind to the same port at a
> >      >      > different time and broadcast a different set of identifiers
> >      >      > for the next peer.
> >      >      >
> >      >      > I’m suspecting that messages get discarded before being sent
> >      >      > due to the unbinding (the same occurs with a call to
> >      >      > close()), but I can’t figure out how to empty the queue prior
> >      >      > to closing the connection so as not to discard any messages.
> >      >      >
> >      >      > Any ideas?
> >      >      >
> >      >      > Windows 8.1 x64
> >      >      > Python 2.7.7 x64
> >      >      > PyZMQ 4.0.4
> >      >      >
> >      >      > Thanks
> >      >      >
> >      >      > --
> >      >      > Marcus Ottosson
> >      >      > konstruktion at gmail.com
> >      >      >
> >      >
> >      >    --
> >      >    Marcus Ottosson
> >      >    konstruktion at gmail.com
> >      >
> >
> >    --
> >    Marcus Ottosson
> >    konstruktion at gmail.com
> >
>
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>



-- 
*Marcus Ottosson*
konstruktion at gmail.com

