[zeromq-dev] zmq proxy with INPROC as frontend and TCP as backend: is it feasible?

Stephen Riesenberg stephen.riesenberg at gmail.com
Fri Mar 31 22:12:35 CEST 2017


In general, I don't think so either; it would only happen if your program had
a bug in it. There are far fewer failure scenarios to consider with INPROC
than with other transports and multi-node setups; you only need to deal with
application-level failures. Sounds like you're making progress. :)

On Fri, Mar 31, 2017 at 1:03 PM, Francesco <francesco.montorsi at gmail.com>
wrote:

> Hi Stephen,
>
> 2017-03-31 15:49 GMT+02:00 Stephen Riesenberg <
> stephen.riesenberg at gmail.com>:
> > Good deal. Also, a note on the number-of-threads issue: instead of
> > monitoring sockets, everything is under your control inside this
> > multithreaded component you're building, so the proxy that funnels the
> > messages should have a way to know how many threads are running. You'd
> > want the same setup method that fires up the threads in the first place
> > to pass that number to the publisher thread, or some side channel that
> > keeps that state. And if threads come and go for some reason, you'd just
> > need to track when they start up and exit, and update a counter
> > accordingly. This is assuming that number is actually needed as part of
> > your design. If not, one less thing to worry about, right?
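> >
> > A rough sketch of the kind of bookkeeping I mean (plain C11 atomics; the
> > names here are just placeholders, not something your code has to use):
> >
> >   #include <stdatomic.h>
> >
> >   static atomic_int g_publisher_count;          /* read by the publisher side */
> >
> >   static void *worker_thread (void *arg)
> >   {
> >       atomic_fetch_add (&g_publisher_count, 1); /* thread has started */
> >       /* ... connect the PUB socket and publish events ... */
> >       atomic_fetch_sub (&g_publisher_count, 1); /* thread is about to exit */
> >       return NULL;
> >   }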
>
> Absolutely; moreover, for now I'm assert()ing that each thread's connect
> to the proxy's frontend socket succeeds. That way the number of threads I
> create always matches the number of threads sending data to the proxy;
> otherwise the program aborts.
>
> I don't know whether INPROC sockets can fail to connect for some reason or
> not... apparently not, so far :)
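>
> Roughly like this (assuming the proxy's frontend is already bound to a
> hypothetical "inproc://events" endpoint before the worker threads start):
>
>   void *pub = zmq_socket (ctx, ZMQ_PUB);
>   assert (pub);
>   int rc = zmq_connect (pub, "inproc://events");
>   assert (rc == 0);   /* abort immediately if the connect ever fails */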
>
> thanks,
> Francesco
>
>
> >
> > On Fri, Mar 31, 2017 at 5:55 AM Francesco <francesco.montorsi at gmail.com>
> > wrote:
> >>
> >> Hi Stephen,
> >> indeed I tried, and the setup of that networking schema was faster than
> >> I expected... and more importantly it just worked!!! :)
> >> The only "limitation" I discovered is that you cannot call
> >> zmq_socket_monitor() on the INPROC side of the proxy.
> >> So I don't have a way to ask the proxy how many threads have connected
> >> to its frontend socket. However, that's not a big issue... and more
> >> importantly I can still monitor the backend socket, which is the one
> >> exposed via TCP to the rest of the world.
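> >>
> >> Something along these lines on the backend socket (just a sketch; the
> >> "inproc://backend-monitor" endpoint name is made up):
> >>
> >>   int rc = zmq_socket_monitor (backend, "inproc://backend-monitor",
> >>                                ZMQ_EVENT_ACCEPTED | ZMQ_EVENT_DISCONNECTED);
> >>   assert (rc == 0);
> >>   /* a PAIR socket connected to "inproc://backend-monitor" then receives
> >>      the ACCEPTED/DISCONNECTED events coming from the TCP side */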
> >>
> >>
> >> 2017-03-30 21:56 GMT+02:00 Stephen Riesenberg
> >> <stephen.riesenberg at gmail.com>:
> >> >...
> >> > Though I'm not sure what you mean by "I would like to have all these
> >> > threads appear as a single endpoint to the outside world." Would this
> >> > be through a topic name or some other piece of data? That also seems
> >> > fine, but I wasn't sure if that's what you meant.
> >>
> >> Well, I just meant that other components must be able to subscribe to
> >> the events produced by all the threads by subscribing to a single ZMQ
> >> endpoint (i.e., the endpoint of the proxy's backend socket)... that may
> >> seem obvious, but I just wanted to spell it out.
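> >>
> >> For example, a consumer would only need something like this (the host
> >> and port here are made up):
> >>
> >>   void *sub = zmq_socket (ctx, ZMQ_SUB);
> >>   zmq_connect (sub, "tcp://proxy-host:5556");
> >>   zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);   /* everything, from all threads */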
> >>
> >> Thanks for your reply!
> >>
> >> Francesco
> >>
> >>
> >>
> >>
> >>
> >> >
> >> > On Tue, Mar 28, 2017 at 7:39 AM Francesco <
> francesco.montorsi at gmail.com>
> >> > wrote:
> >> >>
> >> >> Hi all,
> >> >>
> >> >> I'm trying to work out a problem in a multithreaded application and I
> >> >> would like to use ZMQ to solve it.
> >> >>
> >> >> In particular, I have a process with several threads that are
> >> >> generating messages; each one has its own PUB socket.
> >> >> I would like to have all these threads appear as a single endpoint to
> >> >> the outside world.
> >> >>
> >> >> For this reason I'm thinking of changing each thread's PUB socket to
> >> >> use the INPROC transport, and then creating a ZMQ proxy that exposes
> >> >> an XSUB frontend socket over the INPROC transport and an XPUB backend
> >> >> socket over TCP for the external world.
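> >> >>
> >> >> Something like this sketch is what I have in mind (the endpoint names
> >> >> here are made up):
> >> >>
> >> >>   void *frontend = zmq_socket (ctx, ZMQ_XSUB);
> >> >>   zmq_bind (frontend, "inproc://events");   /* each thread's PUB connects here */
> >> >>
> >> >>   void *backend = zmq_socket (ctx, ZMQ_XPUB);
> >> >>   zmq_bind (backend, "tcp://*:5556");       /* the single endpoint for the outside */
> >> >>
> >> >>   zmq_proxy (frontend, backend, NULL);      /* blocks, shovelling messages across */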
> >> >>
> >> >> Do you think this kind of scenario would work?
> >> >>
> >> >> Thanks!
> >> >> Francesco