[zeromq-dev] Malamute broker project
Kenneth Adam Miller
kennethadammiller at gmail.com
Thu Mar 12 19:36:17 CET 2015
I fixed it! Write-up coming out soon! Haha, it's euphoric when you
finally solve that problem you've been working on!
Also, there is one intermittent irk that makes the tests lock up in
infrequent cases as they currently stand. I'd like to find a good
mitigation strategy, but I think the code is good enough for a merge :)
On Tue, Mar 10, 2015 at 3:56 AM, Pieter Hintjens <ph at imatix.com> wrote:
> I guess the whole point of service requests is to allow workers and
> clients to have different life cycles.
>
> What you should do is study the Malamute protocol and the whitepaper.
> It's still rather simple. The API wraps the protocol without adding
> any semantics. We can make it more complex as we hit new problems
> while using it.
>
>
> On Mon, Mar 9, 2015 at 10:52 PM, Kenneth Adam Miller
> <kennethadammiller at gmail.com> wrote:
> > Is there any guidance on how to construct two clients on opposite
> > sides to do concurrent service requests, please? Or better, is there
> > client API call semantic documentation that would let me drill down
> > into the specifics of how to construct such a concurrent
> > architecture correctly... No rush, just inquiring...
> >
> > On Tue, Mar 3, 2015 at 2:05 PM, Kenneth Adam Miller
> > <kennethadammiller at gmail.com> wrote:
> >>
> >> You know what would make this whole objective very simple: if, when
> >> multiple clients contend for a mailbox, one may enter, and the mail
> >> that that client doesn't read can be read by the next.
> >>
> >> In this scenario, I could simply have every client send to some
> >> opposite side's mailbox, and each client would just know whether it
> >> was on the front or the back. The front would send to the back's
> >> mailbox, and the back to the front's. When clients connect, they
> >> read a single message and shut down, and concurrent clients that
> >> were blocked waiting to receive would get the next message from the
> >> mailbox. If it worked this way, it wouldn't matter when the opposite
> >> side sent the message, because each side would get it; the mailbox
> >> persists until the client connects.
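> >>
> >> As a sketch, something like this is what I mean (hypothetical: it
> >> assumes contended mailboxes hand one message to each connecting
> >> client, which is exactly the behavior I'm asking about, not
> >> documented Malamute behavior):
> >>
> >> // Hypothetical sketch of the "read one message, then leave" scheme.
> >> char * read_one_from_mailbox (const char *my_mailbox,
> >>                               const char *their_mailbox,
> >>                               const char *toSend) {
> >>   mlm_client_t *client = mlm_client_new ();
> >>   assert (client);
> >>   int rc = mlm_client_connect (client, "tcp://127.0.0.1:9999", 3000,
> >>                                my_mailbox);
> >>   assert (rc == 0);
> >>
> >>   // Drop our address in the opposite side's mailbox...
> >>   mlm_client_sendtox (client, their_mailbox, "ADDRESS", toSend, NULL);
> >>
> >>   // ...then read exactly one message from ours and get out of the way
> >>   char *subject, *content;
> >>   mlm_client_recvx (client, &subject, &content, NULL);
> >>   zstr_free (&subject);
> >>   // The next blocked client would (in this scheme) get the next message
> >>   mlm_client_destroy (&client);
> >>   return content;
> >> }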
> >>
> >> Services were supposed to manage the coordination between each side,
> >> but it seems I'm missing some of the semantics regarding how
> >> messages are sent and received for each of the different message
> >> types.
> >>
> >> I'm not sure which it is, but I think that either a request made to
> >> the malamute broker for a service that doesn't exist just gets
> >> dropped, or, once a service offers itself, the malamute broker sends
> >> as many requests to the service as fast as it can. ZMQ handles this
> >> by receiving in the background, but currently I can't figure out how
> >> to poll on mlm_client_t's in order to know whether there's more or
> >> not, so that I can accommodate this possibility. I've tried getting
> >> the zsock with the msgpipe api, but every time I poll on that it
> >> *immediately returns* signaling input, and then, confusingly, I
> >> can't read from it with recvx. Do I need to switch over the
> >> different types of recv calls I can make on a client by inspecting
> >> it with one of the event apis?
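> >>
> >> In other words, I expected to be able to do something like this
> >> sketch (the 1000 ms timeout is arbitrary): poll the msgpipe only to
> >> learn *when* input is pending, then read through the client API
> >> rather than with raw zsock calls on the pipe:
> >>
> >> zpoller_t *poller = zpoller_new (mlm_client_msgpipe (client), NULL);
> >> assert (poller);
> >> while (!zsys_interrupted) {
> >>   void *which = zpoller_wait (poller, 1000);
> >>   if (which == mlm_client_msgpipe (client)) {
> >>     char *subject, *content;
> >>     mlm_client_recvx (client, &subject, &content, NULL);
> >>     // ... handle the message ...
> >>     zstr_free (&subject);
> >>     zstr_free (&content);
> >>   }
> >>   else
> >>     break;   // timeout or interrupt: nothing more queued for us
> >> }
> >> zpoller_destroy (&poller);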
> >>
> >>
> >> On Tue, Mar 3, 2015 at 10:25 AM, Kenneth Adam Miller
> >> <kennethadammiller at gmail.com> wrote:
> >>>
> >>> Wow, thank you so much. Yeah, I am just really happy that there is
> >>> hopefully space to apply what I've learned writing my own broker
> >>> and to use more advanced tools. I'm basically supplanting what I've
> >>> already done with this code. I'm excited to hopefully learn about
> >>> Zyre when it comes to connecting brokers.
> >>>
> >>> On Tue, Mar 3, 2015 at 6:21 AM, Pieter Hintjens <ph at imatix.com> wrote:
> >>>>
> >>>> :-) I appreciate your work on learning how this works. Sorry I don't
> >>>> already have a tutorial written. Things are a bit busy. I'll push that
> >>>> to top priority.
> >>>>
> >>>> On Tue, Mar 3, 2015 at 10:20 AM, Kenneth Adam Miller
> >>>> <kennethadammiller at gmail.com> wrote:
> >>>> > Ok, a guide written by you would be really good, thanks. I just
> >>>> > wanted to help.
> >>>> >
> >>>> > On Tue, Mar 3, 2015 at 3:38 AM, Kenneth Adam Miller
> >>>> > <kennethadammiller at gmail.com> wrote:
> >>>> >>
> >>>> >> Ok, I understand.
> >>>> >>
> >>>> >> I was thinking maybe it would be better to just loop and
> >>>> >> continuously send requests for the service until a response is
> >>>> >> given. Possibly I'm misunderstanding what you're saying, but I
> >>>> >> thought I had got that much, and that since set_worker offers a
> >>>> >> service, maybe it was just the request message being discarded.
> >>>> >> It would be like a passenger raising a hand when there is no
> >>>> >> taxi available: no one can see it, and it's a fairly realistic
> >>>> >> model to expect as much in a concurrent environment. It's
> >>>> >> different once a worker announces itself (set_worker); the
> >>>> >> broker queues requests for the service after that, right?
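> >>>> >>
> >>>> >> As a sketch, the retry loop I mean would look like this (it
> >>>> >> assumes the request really is dropped when no worker exists,
> >>>> >> which I'm not sure of; the 1000 ms timeout and the
> >>>> >> "ADDRESS"/"GET" names are just illustrative):
> >>>> >>
> >>>> >> char *subject = NULL, *content = NULL;
> >>>> >> zpoller_t *poller =
> >>>> >>   zpoller_new (mlm_client_msgpipe (client), NULL);
> >>>> >> while (!subject) {
> >>>> >>   // Raise a hand again on every timeout
> >>>> >>   mlm_client_sendforx (client, "ADDRESS", "GET",
> >>>> >>                        "inproc://frontend", NULL);
> >>>> >>   if (zpoller_wait (poller, 1000))
> >>>> >>     mlm_client_recvx (client, &subject, &content, NULL);
> >>>> >> }
> >>>> >> zpoller_destroy (&poller);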
> >>>> >>
> >>>> >> On Tue, Mar 3, 2015 at 3:16 AM, Pieter Hintjens <ph at imatix.com>
> >>>> >> wrote:
> >>>> >>>
> >>>> >>> I'm not really in a position to debug the code.
> >>>> >>>
> >>>> >>> The point of using an addressing service is to solve the problem
> of
> >>>> >>> synchronization between stream readers and writers. Even if the
> >>>> >>> reader
> >>>> >>> is a microsecond too late, it will miss the message.
> >>>> >>>
> >>>> >>>
> >>>> >>>
> >>>> >>> On Tue, Mar 3, 2015 at 7:32 AM, Kenneth Adam Miller
> >>>> >>> <kennethadammiller at gmail.com> wrote:
> >>>> >>> > This is what I currently have right now; it seems to lock up
> >>>> >>> > sometimes...
> >>>> >>> >
> >>>> >>> >
> >>>> >>> > char * exchange_addresses (std::string consumer_topic,
> >>>> >>> >                            std::string production_topic,
> >>>> >>> >                            std::string toSend) {
> >>>> >>> >   mlm_client_t *client = mlm_client_new ();
> >>>> >>> >   assert (client);
> >>>> >>> >
> >>>> >>> >   int rc = mlm_client_connect (client, "tcp://127.0.0.1:9999",
> >>>> >>> >                                3000, production_topic.c_str ());
> >>>> >>> >   assert (rc == 0);
> >>>> >>> >
> >>>> >>> >   // Offer a service to the opposite side so that their
> >>>> >>> >   // writers can send SET responses to our request
> >>>> >>> >   mlm_client_set_worker (client, consumer_topic.c_str (), "SET");
> >>>> >>> >   // Offer a service to the opposite side so that their
> >>>> >>> >   // readers will get GET messages with the address they want
> >>>> >>> >   mlm_client_set_worker (client, production_topic.c_str (), "GET");
> >>>> >>> >
> >>>> >>> >   // Send a GET request to the opposite side's service
> >>>> >>> >   if (!mlm_client_sendforx (client, consumer_topic.c_str (),
> >>>> >>> >                             "GET", toSend.c_str (), NULL)) {
> >>>> >>> >     std::cout << production_topic << " client sent message"
> >>>> >>> >               << std::endl;
> >>>> >>> >   }
> >>>> >>> >   else {
> >>>> >>> >     std::cerr << "error sending message" << std::endl;
> >>>> >>> >   }
> >>>> >>> >
> >>>> >>> >   // Read a GET request from the opposite side
> >>>> >>> >   char *get, *origin;
> >>>> >>> >   mlm_client_recvx (client, &get, &origin, NULL);
> >>>> >>> >   std::cout << production_topic << " got get request: ";
> >>>> >>> >   if (get)
> >>>> >>> >     std::cout << " get: " << get;
> >>>> >>> >   if (origin)
> >>>> >>> >     std::cout << " origin: " << origin;
> >>>> >>> >   std::cout << std::endl;
> >>>> >>> >   zstr_free (&get);
> >>>> >>> >   mlm_client_destroy (&client);
> >>>> >>> >
> >>>> >>> >   return origin;
> >>>> >>> > }
> >>>> >>> >
> >>>> >>> >
> >>>> >>> > void connectToFrontEnd () {
> >>>> >>> >   std::cout << "connectToFrontEnd, exchange_addresses" << std::endl;
> >>>> >>> >   char * servrAddr = exchange_addresses ("backendEndpoints",
> >>>> >>> >     "frontendEndpoints", "inproc://frontend");
> >>>> >>> >   std::cout << "frontend got opp addr: " << servrAddr
> >>>> >>> >             << ", exiting" << std::endl;
> >>>> >>> > }
> >>>> >>> >
> >>>> >>> > void connectToBackEnd () {
> >>>> >>> >   std::cout << "connectToBackEnd, exchange_addresses" << std::endl;
> >>>> >>> >   char * servrAddr = exchange_addresses ("frontendEndpoints",
> >>>> >>> >     "backendEndpoints", "inproc://backend");
> >>>> >>> >   std::cout << "backend got opp addr: " << servrAddr
> >>>> >>> >             << ", exiting" << std::endl;
> >>>> >>> > }
> >>>> >>> >
> >>>> >>> > TEST(ExchangeTest, TestExchangeString) {
> >>>> >>> >   //std::thread mlm_thread (mlm_broker);
> >>>> >>> >
> >>>> >>> >   std::thread fclient (connectToFrontEnd);
> >>>> >>> >   std::thread bclient (connectToBackEnd);
> >>>> >>> >
> >>>> >>> >   //mlm_thread.join();
> >>>> >>> >   fclient.join ();
> >>>> >>> >   bclient.join ();
> >>>> >>> > }
> >>>> >>> >
> >>>> >>> >
> >>>> >>> > TEST(ExchangeTest, TestExchangeMultipleStrings) {
> >>>> >>> >   unsigned int numWorkers = 5;
> >>>> >>> >   std::list<std::thread *> workersList;
> >>>> >>> >   for (unsigned int i = 0; i < numWorkers; i++)
> >>>> >>> >     workersList.push_back (new std::thread (connectToFrontEnd));
> >>>> >>> >   for (unsigned int i = 0; i < numWorkers; i++)
> >>>> >>> >     workersList.push_back (new std::thread (connectToBackEnd));
> >>>> >>> >   for (auto w : workersList) {
> >>>> >>> >     w->join ();
> >>>> >>> >     delete w;   // join, then free each heap-allocated thread
> >>>> >>> >   }
> >>>> >>> > }
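> >>>> >>> >
> >>>> >>> > In case it matters: a sketch of how the commented-out broker
> >>>> >>> > would be started in-process, following the actor usage shown
> >>>> >>> > in the Malamute README (the endpoint is the one the clients
> >>>> >>> > above assume):
> >>>> >>> >
> >>>> >>> > zactor_t *broker = zactor_new (mlm_server, NULL);
> >>>> >>> > assert (broker);
> >>>> >>> > zstr_sendx (broker, "BIND", "tcp://127.0.0.1:9999", NULL);
> >>>> >>> > // ... run the tests ...
> >>>> >>> > zactor_destroy (&broker);   // clean shutdown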
> >>>> >>> >
> >>>> >>> >
> >>>> >>> > Any advice on how to properly use the API is greatly
> >>>> >>> > appreciated. I've worked on this all day, and while I've
> >>>> >>> > gotten a lot closer, I still feel like there's something I'm
> >>>> >>> > missing that I need before I can move forward. I'm almost
> >>>> >>> > there though!
> >>>> >>> >
> >>>> >>> > On Tue, Mar 3, 2015 at 1:20 AM, Kenneth Adam Miller
> >>>> >>> > <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>
> >>>> >>> >> As it turns out, I think there was some misunderstanding
> >>>> >>> >> about the address exchange semantics. I think you thought it
> >>>> >>> >> necessary to persist the messages, hence the requirement for
> >>>> >>> >> mailboxes, but this address exchange service turns out to be
> >>>> >>> >> a use of the broker to facilitate exchange of IP addresses,
> >>>> >>> >> allowing direct communication afterwards. Therefore it's
> >>>> >>> >> silly to make certain that messages persist even if the
> >>>> >>> >> client on the other side leaves; if a client on the other
> >>>> >>> >> side leaves, the broker shouldn't route an address to that
> >>>> >>> >> specific client, since a connection to the one that left
> >>>> >>> >> wouldn't go through anyway.
> >>>> >>> >>
> >>>> >>> >> Rather, I now understand that a single client can be a
> >>>> >>> >> worker with several subscription possibilities, and that a
> >>>> >>> >> service request can include the actual IP address that needs
> >>>> >>> >> to be exchanged in the first place. This simplifies things a
> >>>> >>> >> lot.
> >>>> >>> >>
> >>>> >>> >> So now it seems that I have a pair of clients exchanging
> >>>> >>> >> addresses reliably via a service-oriented approach. But
> >>>> >>> >> having bunches of clients each providing a service and
> >>>> >>> >> requesting just one seems to be locking up...
> >>>> >>> >>
> >>>> >>> >> On Tue, Mar 3, 2015 at 12:57 AM, Pieter Hintjens
> >>>> >>> >> <ph at imatix.com> wrote:
> >>>> >>> >>>
> >>>> >>> >>> There's no limit on mailboxes, and you don't need to
> >>>> >>> >>> consider lifetimes. Consider these like email addresses.
> >>>> >>> >>> Mailboxes will at some point have to be saved to disk
> >>>> >>> >>> (they're memory-only for now).
> >>>> >>> >>>
> >>>> >>> >>> On Tue, Mar 3, 2015 at 4:15 AM, Kenneth Adam Miller
> >>>> >>> >>> <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> > Is there any limit to the number of mailboxes on the
> >>>> >>> >>> > malamute broker? How do I manage mailbox lifetimes, or is
> >>>> >>> >>> > that something I need to consider?
> >>>> >>> >>> >
> >>>> >>> >>> > On Mon, Mar 2, 2015 at 8:21 PM, Kenneth Adam Miller
> >>>> >>> >>> > <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>
> >>>> >>> >>> >> Ah, now that I've considered it in the context of each
> >>>> >>> >>> >> side having many clients, I can see why having clients
> >>>> >>> >>> >> share a mailbox is undesirable. You want a service to
> >>>> >>> >>> >> hand out a mailbox name so that messages can be
> >>>> >>> >>> >> retrieved individually, per client. I finally
> >>>> >>> >>> >> understand. Some concepts take a bit to sink in; the ZMQ
> >>>> >>> >>> >> manual had a learning curve. But I like this!
> >>>> >>> >>> >>
> >>>> >>> >>> >> In any case, I'm figuring out how to use the API to do
> >>>> >>> >>> >> what you said, since it was kind of ambiguous. Thanks so
> >>>> >>> >>> >> much for the help.
> >>>> >>> >>> >>
> >>>> >>> >>> >> On Mon, Mar 2, 2015 at 5:22 PM, Pieter Hintjens
> >>>> >>> >>> >> <ph at imatix.com> wrote:
> >>>> >>> >>> >>>
> >>>> >>> >>> >>> Each peer has to have its own mailbox, yes.
> >>>> >>> >>> >>>
> >>>> >>> >>> >>> On Mon, Mar 2, 2015 at 10:25 PM, Kenneth Adam Miller
> >>>> >>> >>> >>> <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>> > Yeah that fixed it!
> >>>> >>> >>> >>> > Now I just have to iron out what precisely is concurrent.
> >>>> >>> >>> >>> >
> >>>> >>> >>> >>> > On Mon, Mar 2, 2015 at 4:24 PM, Kenneth Adam Miller
> >>>> >>> >>> >>> > <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>> >>
> >>>> >>> >>> >>> >> Wait, is it because each of the peers has specified
> >>>> >>> >>> >>> >> the same mailbox?
> >>>> >>> >>> >>> >>
> >>>> >>> >>> >>> >> On Mon, Mar 2, 2015 at 4:12 PM, Kenneth Adam Miller
> >>>> >>> >>> >>> >> <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>> >>>
> >>>> >>> >>> >>> >>> But only one side gets a message from the broker.
> >>>> >>> >>> >>> >>> The other side just freezes.
> >>>> >>> >>> >>> >>>
> >>>> >>> >>> >>> >>> On Mon, Mar 2, 2015 at 3:54 PM, Pieter Hintjens
> >>>> >>> >>> >>> >>> <ph at imatix.com> wrote:
> >>>> >>> >>> >>> >>>>
> >>>> >>> >>> >>> >>>> Sure, it'd work as subjects.
> >>>> >>> >>> >>> >>>>
> >>>> >>> >>> >>> >>>> On Mon, Mar 2, 2015 at 8:07 PM, Kenneth Adam Miller
> >>>> >>> >>> >>> >>>> <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>>> > What are the two messages SET and GET that you're
> >>>> >>> >>> >>>> > talking about? Are you saying that the sendfor
> >>>> >>> >>> >>>> > parameter char * address is "ADDRESS" and the subject
> >>>> >>> >>> >>>> > is "GET" or "SET" depending on whether or not there
> >>>> >>> >>> >>>> > should be a read, with the contents being the actual
> >>>> >>> >>> >>>> > "tcp://some_IP:some_port"? Or should I actually
> >>>> >>> >>> >>>> > author the protocol for brokering, where I use actual
> >>>> >>> >>> >>>> > broker commands SET and GET?
> >>>> >>> >>> >>>> >
> >>>> >>> >>> >>>> > Is send_for exchangeable for send_forx in this context?
> >>>> >>> >>> >>>> >
> >>>> >>> >>> >>>> > I changed it to this, trying to follow mlm_client.c:
> >>>> >>> >>> >>> >>>> >
> >>>> >>> >>> >>>> > char * exchange_addresses (std::string consumer_topic,
> >>>> >>> >>> >>>> >                            std::string production_topic,
> >>>> >>> >>> >>>> >                            std::string toSend) {
> >>>> >>> >>> >>>> >   mlm_client_t *client_reader = mlm_client_new ();
> >>>> >>> >>> >>>> >   assert (client_reader);
> >>>> >>> >>> >>>> >   mlm_client_t *client_writer = mlm_client_new ();
> >>>> >>> >>> >>>> >   assert (client_writer);
> >>>> >>> >>> >>>> >
> >>>> >>> >>> >>>> >   int rc = mlm_client_connect (client_reader,
> >>>> >>> >>> >>>> >     "tcp://127.0.0.1:9999", 3000, "ADDRESS");
> >>>> >>> >>> >>>> >   assert (rc == 0);
> >>>> >>> >>> >>>> >   rc = mlm_client_connect (client_writer,
> >>>> >>> >>> >>>> >     "tcp://127.0.0.1:9999", 3000, "");
> >>>> >>> >>> >>>> >   assert (rc == 0);
> >>>> >>> >>> >>>> >
> >>>> >>> >>> >>>> >   std::cout << "producing to topic: "
> >>>> >>> >>> >>>> >             << production_topic << std::endl;
> >>>> >>> >>> >>>> >   std::cout << "consuming from topic: "
> >>>> >>> >>> >>>> >             << consumer_topic << std::endl;
> >>>> >>> >>> >>>> >   if (!mlm_client_sendtox (client_writer, "ADDRESS",
> >>>> >>> >>> >>>> >                            "SET", toSend.c_str (), NULL)) {
> >>>> >>> >>> >>>> >     std::cout << "client sent message" << std::endl;
> >>>> >>> >>> >>>> >   }
> >>>> >>> >>> >>>> >   else {
> >>>> >>> >>> >>>> >     std::cerr << "error sending message" << std::endl;
> >>>> >>> >>> >>>> >   }
> >>>> >>> >>> >>>> >
> >>>> >>> >>> >>>> >   char *subject, *content, *attach;
> >>>> >>> >>> >>>> >   std::cerr << consumer_topic << " receiving message"
> >>>> >>> >>> >>>> >             << std::endl;
> >>>> >>> >>> >>>> >   mlm_client_recvx (client_reader, &subject, &content,
> >>>> >>> >>> >>>> >                     &attach, NULL);
> >>>> >>> >>> >>>> >   mlm_client_destroy (&client_writer);
> >>>> >>> >>> >>>> >   mlm_client_destroy (&client_reader);
> >>>> >>> >>> >>>> >   std::cout << "received: \"" << subject << "\" :"
> >>>> >>> >>> >>>> >             << content << "." << std::endl;
> >>>> >>> >>> >>>> >   zstr_free (&subject);
> >>>> >>> >>> >>>> >   return content;
> >>>> >>> >>> >>>> > }
> >>>> >>> >>> >>> >>>> >
> >>>> >>> >>> >>> >>>> >
> >>>> >>> >>> >>> >>>> > I get one of the set messages, but
> >>>> >>> >>> >>> >>>> >
> >>>> >>> >>> >>>> > On Mon, Mar 2, 2015 at 12:59 PM, Pieter Hintjens
> >>>> >>> >>> >>>> > <ph at imatix.com> wrote:
> >>>> >>> >>> >>> >>>> >>
> >>>> >>> >>> >>> >>>> >> Sure, it's much more fun if you write this up.
> >>>> >>> >>> >>> >>>> >>
> >>>> >>> >>> >>>> >> On Mon, Mar 2, 2015 at 6:56 PM, Kenneth Adam Miller
> >>>> >>> >>> >>>> >> <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>> >>>> >> > I can help you with writing an article :)
> >>>> >>> >>> >>> >>>> >> >
> >>>> >>> >>> >>>> >> > I was literally discovering what you were telling
> >>>> >>> >>> >>>> >> > me, since sometimes, about 1/4 of the time, it
> >>>> >>> >>> >>>> >> > would succeed. What you say rationalizes my
> >>>> >>> >>> >>>> >> > considerations, since I was literally writing you
> >>>> >>> >>> >>>> >> > an email about what I was witnessing.
> >>>> >>> >>> >>>> >> >
> >>>> >>> >>> >>>> >> > Let me try to work out what you're saying, then I
> >>>> >>> >>> >>>> >> > can post what I established to a public github
> >>>> >>> >>> >>>> >> > repo :)
> >>>> >>> >>> >>> >>>> >> >
> >>>> >>> >>> >>>> >> > On Mon, Mar 2, 2015 at 12:50 PM, Pieter Hintjens
> >>>> >>> >>> >>>> >> > <ph at imatix.com> wrote:
> >>>> >>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> The problem with streams is there's no
> >>>> >>> >>> >>>> >> >> persistence yet, so both peers have to be present
> >>>> >>> >>> >>>> >> >> at the same time. A name registration/lookup
> >>>> >>> >>> >>>> >> >> service is probably better.
> >>>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> Yes, the set_worker call offers a service. I'd do
> >>>> >>> >>> >>>> >> >> this:
> >>>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> - offer a service "ADDRESS" using set_worker
> >>>> >>> >>> >>>> >> >> - two messages: SET and GET, each taking a
> >>>> >>> >>> >>>> >> >>   name/value (use frames or any other encoding
> >>>> >>> >>> >>>> >> >>   you like)
> >>>> >>> >>> >>>> >> >> - use the sendfor method to send the request
> >>>> >>> >>> >>>> >> >> - use the sendto method to send the replies,
> >>>> >>> >>> >>>> >> >>   which end in a client's mailbox
> >>>> >>> >>> >>>> >> >> - read the replies using the recv method
> >>>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> For this to work, peers need to specify a mailbox
> >>>> >>> >>> >>>> >> >> address in the connect method. A sketch follows.
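> >>>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> As a rough, untested sketch of the worker side
> >>>> >>> >>> >>>> >> >> (just to make the recipe concrete; the "SET|GET"
> >>>> >>> >>> >>>> >> >> pattern and the zhash registry are illustrative):
> >>>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> mlm_client_t *worker = mlm_client_new ();
> >>>> >>> >>> >>>> >> >> int rc = mlm_client_connect (worker,
> >>>> >>> >>> >>>> >> >>   "tcp://127.0.0.1:9999", 3000, "lookup-service");
> >>>> >>> >>> >>>> >> >> assert (rc == 0);
> >>>> >>> >>> >>>> >> >> mlm_client_set_worker (worker, "ADDRESS", "SET|GET");
> >>>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> zhash_t *names = zhash_new ();  // name -> value
> >>>> >>> >>> >>>> >> >> while (!zsys_interrupted) {
> >>>> >>> >>> >>>> >> >>   char *name, *value;
> >>>> >>> >>> >>>> >> >>   mlm_client_recvx (worker, &name, &value, NULL);
> >>>> >>> >>> >>>> >> >>   if (streq (mlm_client_subject (worker), "SET"))
> >>>> >>> >>> >>>> >> >>     zhash_update (names, name, strdup (value));
> >>>> >>> >>> >>>> >> >>   else {
> >>>> >>> >>> >>>> >> >>     // GET: reply lands in the requester's mailbox
> >>>> >>> >>> >>>> >> >>     char *stored = (char *) zhash_lookup (names, name);
> >>>> >>> >>> >>>> >> >>     mlm_client_sendtox (worker,
> >>>> >>> >>> >>>> >> >>       mlm_client_sender (worker), "ADDRESS",
> >>>> >>> >>> >>>> >> >>       stored? stored: "", NULL);
> >>>> >>> >>> >>>> >> >>   }
> >>>> >>> >>> >>>> >> >>   zstr_free (&name);
> >>>> >>> >>> >>>> >> >>   zstr_free (&value);
> >>>> >>> >>> >>>> >> >> }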
> >>>> >>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> If you like I'll write an article and make examples.
> >>>> >>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>> >>>> >> >> -Pieter
> >>>> >>> >>> >>> >>>> >> >>
> >>>> >>> >>> >>>> >> >> On Mon, Mar 2, 2015 at 6:45 PM, Kenneth Adam
> >>>> >>> >>> >>>> >> >> Miller <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>>> >> >> > I got it to work by setting the subscribed
> >>>> >>> >>> >>>> >> >> > topic to "inproc*" on the
> >>>> >>> >>> >>>> >> >> > mlm_client_set_worker call.
> >>>> >>> >>> >>> >>>> >> >> >
> >>>> >>> >>> >>>> >> >> > On Mon, Mar 2, 2015 at 12:07 PM, Kenneth Adam
> >>>> >>> >>> >>>> >> >> > Miller <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >> Ok, after looking at mlm_client.c, I have the
> >>>> >>> >>> >>>> >> >> >> following:
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >> Two concurrent calls to exchange addresses
> >>>> >>> >>> >>>> >> >> >> with the following parameters:
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >> //thread 1
> >>>> >>> >>> >>>> >> >> >> char * servrAddr = exchange_addresses (
> >>>> >>> >>> >>>> >> >> >>   "backendEndpoints", "frontendEndpoints",
> >>>> >>> >>> >>>> >> >> >>   "inproc://frontend");
> >>>> >>> >>> >>>> >> >> >> //thread 2
> >>>> >>> >>> >>>> >> >> >> char * servrAddr = exchange_addresses (
> >>>> >>> >>> >>>> >> >> >>   "frontendEndpoints", "backendEndpoints",
> >>>> >>> >>> >>>> >> >> >>   "inproc://backend");
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >> Where exchange_addresses is implemented as:
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >> char * exchange_addresses (std::string consumer_topic,
> >>>> >>> >>> >>>> >> >> >>   std::string production_topic, std::string toSend) {
> >>>> >>> >>> >>>> >> >> >>   mlm_client_t *client_reader = mlm_client_new ();
> >>>> >>> >>> >>>> >> >> >>   assert (client_reader);
> >>>> >>> >>> >>>> >> >> >>   mlm_client_t *client_writer = mlm_client_new ();
> >>>> >>> >>> >>>> >> >> >>   assert (client_writer);
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >>   int rc = mlm_client_connect (client_reader,
> >>>> >>> >>> >>>> >> >> >>     "tcp://127.0.0.1:9999", 3000, "");
> >>>> >>> >>> >>>> >> >> >>   assert (rc == 0);
> >>>> >>> >>> >>>> >> >> >>   rc = mlm_client_connect (client_writer,
> >>>> >>> >>> >>>> >> >> >>     "tcp://127.0.0.1:9999", 3000, "");
> >>>> >>> >>> >>>> >> >> >>   assert (rc == 0);
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >>   std::cout << "producing to topic: "
> >>>> >>> >>> >>>> >> >> >>             << production_topic << std::endl;
> >>>> >>> >>> >>>> >> >> >>   std::cout << "consuming from topic: "
> >>>> >>> >>> >>>> >> >> >>             << consumer_topic << std::endl;
> >>>> >>> >>> >>>> >> >> >>   mlm_client_set_worker (client_reader,
> >>>> >>> >>> >>>> >> >> >>     consumer_topic.c_str (), "*");
> >>>> >>> >>> >>>> >> >> >>   if (!mlm_client_sendforx (client_writer,
> >>>> >>> >>> >>>> >> >> >>       production_topic.c_str (), toSend.c_str (),
> >>>> >>> >>> >>>> >> >> >>       "", NULL))
> >>>> >>> >>> >>>> >> >> >>     std::cout << "client sent message" << std::endl;
> >>>> >>> >>> >>>> >> >> >>   else
> >>>> >>> >>> >>>> >> >> >>     std::cerr << "error sending message" << std::endl;
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >>   char *subject, *content, *attach;
> >>>> >>> >>> >>>> >> >> >>   mlm_client_recvx (client_reader, &subject,
> >>>> >>> >>> >>>> >> >> >>     &content, NULL);   // <-- blocking here
> >>>> >>> >>> >>>> >> >> >>   mlm_client_destroy (&client_writer);
> >>>> >>> >>> >>>> >> >> >>   mlm_client_destroy (&client_reader);
> >>>> >>> >>> >>>> >> >> >>   std::cout << "received: " << subject << " "
> >>>> >>> >>> >>>> >> >> >>             << content << std::endl;
> >>>> >>> >>> >>>> >> >> >>   return content;
> >>>> >>> >>> >>>> >> >> >> }
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >> Problem is, both threads block at
> >>>> >>> >>> >>>> >> >> >> mlm_client_recvx... As per the example, it
> >>>> >>> >>> >>>> >> >> >> looks correct.
> >>>> >>> >>> >>> >>>> >> >> >>
> >>>> >>> >>> >>>> >> >> >> On Mon, Mar 2, 2015 at 11:30 AM, Kenneth Adam
> >>>> >>> >>> >>>> >> >> >> Miller <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>> >>>> >> >> >>>
> >>>> >>> >>> >>>> >> >> >>> Oh, you mean with mlm_client_set_worker! Do I
> >>>> >>> >>> >>>> >> >> >>> do set_worker on each side with different
> >>>> >>> >>> >>>> >> >> >>> service names? How does a client get a
> >>>> >>> >>> >>>> >> >> >>> specific service?
> >>>> >>> >>> >>> >>>> >> >> >>>
> >>>> >>> >>> >>>> >> >> >>> On Mon, Mar 2, 2015 at 11:26 AM, Kenneth Adam
> >>>> >>> >>> >>>> >> >> >>> Miller <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>> >>>> >> >> >>>>
> >>>> >>> >>> >>>> >> >> >>>> Service semantics? I don't know what those
> >>>> >>> >>> >>>> >> >> >>>> are... I read what tutorials I think there
> >>>> >>> >>> >>>> >> >> >>>> are. I have some questions about how things
> >>>> >>> >>> >>>> >> >> >>>> are forwarded; I want only one-to-one
> >>>> >>> >>> >>>> >> >> >>>> pairing... I'm not sure if what I'm doing is
> >>>> >>> >>> >>>> >> >> >>>> setting up for publishing and subscriptions.
> >>>> >>> >>> >>>> >> >> >>>> There was a lot of talk about some of the
> >>>> >>> >>> >>>> >> >> >>>> other features in the malamute
> >>>> >>> >>> >>>> >> >> >>>> manual/whitepaper, and it's kind of
> >>>> >>> >>> >>>> >> >> >>>> confusing. Basically, I just want FCFS
> >>>> >>> >>> >>>> >> >> >>>> exchange of information for mutually
> >>>> >>> >>> >>>> >> >> >>>> requiring parties.
> >>>> >>> >>> >>> >>>> >> >> >>>>
> >>>> >>> >>> >>>> >> >> >>>> On Mon, Mar 2, 2015 at 4:13 AM, Pieter
> >>>> >>> >>> >>>> >> >> >>>> Hintjens <ph at imatix.com> wrote:
> >>>> >>> >>> >>> >>>> >> >> >>>>>
> >>>> >>> >>> >>>> >> >> >>>>> The simplest way to make a lookup service
> >>>> >>> >>> >>>> >> >> >>>>> is using the service semantics, and the
> >>>> >>> >>> >>>> >> >> >>>>> lookup service can talk to the broker over
> >>>> >>> >>> >>>> >> >> >>>>> inproc or tcp as it wants (it could be a
> >>>> >>> >>> >>>> >> >> >>>>> thread in the same process, or a separate
> >>>> >>> >>> >>>> >> >> >>>>> process).
> >>>> >>> >>> >>> >>>> >> >> >>>>>
> >>>> >>> >>> >>>> >> >> >>>>> On Sun, Mar 1, 2015 at 9:00 PM, Kenneth Adam
> >>>> >>> >>> >>>> >> >> >>>>> Miller <kennethadammiller at gmail.com> wrote:
> >>>> >>> >>> >>>> >> >> >>>>> > So, in order to manage a mutual exchange
> >>>> >>> >>> >>>> >> >> >>>>> > of addresses between two concurrent
> >>>> >>> >>> >>>> >> >> >>>>> > parties, I thought that on each side I
> >>>> >>> >>> >>>> >> >> >>>>> > would have a producer produce to a topic
> >>>> >>> >>> >>>> >> >> >>>>> > that the opposite side was subscribed to.
> >>>> >>> >>> >>>> >> >> >>>>> > That means that each side is both a
> >>>> >>> >>> >>>> >> >> >>>>> > producer and a consumer.
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > I have the two entities running in
> >>>> >>> >>> >>>> >> >> >>>>> > parallel. The front end client connects
> >>>> >>> >>> >>>> >> >> >>>>> > to the malamute broker, subscribes to the
> >>>> >>> >>> >>>> >> >> >>>>> > backendEndpoints topic, and then produces
> >>>> >>> >>> >>>> >> >> >>>>> > its endpoint to the frontendEndpoints
> >>>> >>> >>> >>>> >> >> >>>>> > topic.
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > The opposite side does the same thing,
> >>>> >>> >>> >>>> >> >> >>>>> > with the back end subscribing to
> >>>> >>> >>> >>>> >> >> >>>>> > frontendEndpoints and producing to
> >>>> >>> >>> >>>> >> >> >>>>> > backendEndpoints.
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > The problem is that if the front end and
> >>>> >>> >>> >>>> >> >> >>>>> > back end are in their own threads, then
> >>>> >>> >>> >>>> >> >> >>>>> > only the thread that completes the
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_set_producer and mlm_set_consumer
> >>>> >>> >>> >>>> >> >> >>>>> > calls proceeds. The one that didn't make
> >>>> >>> >>> >>>> >> >> >>>>> > it that far will hang at that mlm_set_x
> >>>> >>> >>> >>>> >> >> >>>>> > pair point...
> >>>> >>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>> >>>> >> >> >>>>> > code:
> >>>> >>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > std::cout << "connectToFrontEnd" << std::endl;
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_t *frontend_reader = mlm_client_new ();
> >>>> >>> >>> >>>> >> >> >>>>> > assert (frontend_reader);
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_t *frontend_writer = mlm_client_new ();
> >>>> >>> >>> >>>> >> >> >>>>> > assert (frontend_writer);
> >>>> >>> >>> >>>> >> >> >>>>> > int rc = mlm_client_connect (frontend_reader,
> >>>> >>> >>> >>>> >> >> >>>>> >   "tcp://127.0.0.1:9999", 1000, "reader/secret");
> >>>> >>> >>> >>>> >> >> >>>>> > assert (rc == 0);
> >>>> >>> >>> >>>> >> >> >>>>> > rc = mlm_client_connect (frontend_writer,
> >>>> >>> >>> >>>> >> >> >>>>> >   "tcp://127.0.0.1:9999", 1000, "writer/secret");
> >>>> >>> >>> >>>> >> >> >>>>> > assert (rc == 0);
> >>>> >>> >>> >>>> >> >> >>>>> > std::cout << "frontend mlm clients connected"
> >>>> >>> >>> >>>> >> >> >>>>> >           << std::endl;
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_set_consumer (frontend_reader,
> >>>> >>> >>> >>>> >> >> >>>>> >   "backendEndpoints", "*");
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_set_producer (frontend_writer,
> >>>> >>> >>> >>>> >> >> >>>>> >   "frontendEndpoints");
> >>>> >>> >>> >>>> >> >> >>>>> > std::cout << "frontend client producers and"
> >>>> >>> >>> >>>> >> >> >>>>> >           << " consumers set" << std::endl;
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > The code looks exactly* the same for the
> >>>> >>> >>> >>>> >> >> >>>>> > backend, but with some variable and other
> >>>> >>> >>> >>>> >> >> >>>>> > changes.
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > std::cout << "connectToBackEnd" << std::endl;
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_t *backend_reader = mlm_client_new ();
> >>>> >>> >>> >>>> >> >> >>>>> > assert (backend_reader);
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_t *backend_writer = mlm_client_new ();
> >>>> >>> >>> >>>> >> >> >>>>> > assert (backend_writer);
> >>>> >>> >>> >>>> >> >> >>>>> > int rc = mlm_client_connect (backend_reader,
> >>>> >>> >>> >>>> >> >> >>>>> >   "tcp://127.0.0.1:9999", 1000, "reader/secret");
> >>>> >>> >>> >>>> >> >> >>>>> > assert (rc == 0);
> >>>> >>> >>> >>>> >> >> >>>>> > rc = mlm_client_connect (backend_writer,
> >>>> >>> >>> >>>> >> >> >>>>> >   "tcp://127.0.0.1:9999", 1000, "writer/secret");
> >>>> >>> >>> >>>> >> >> >>>>> > assert (rc == 0);
> >>>> >>> >>> >>>> >> >> >>>>> > std::cout << "backend mlm clients connected"
> >>>> >>> >>> >>>> >> >> >>>>> >           << std::endl;
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_set_consumer (backend_reader,
> >>>> >>> >>> >>>> >> >> >>>>> >   "frontendEndpoints", "*");
> >>>> >>> >>> >>>> >> >> >>>>> > mlm_client_set_producer (backend_writer,
> >>>> >>> >>> >>>> >> >> >>>>> >   "backendEndpoints");
> >>>> >>> >>> >>>> >> >> >>>>> > std::cout << "backend client producers and"
> >>>> >>> >>> >>>> >> >> >>>>> >           << " consumers set" << std::endl;
> >>>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > I only ever see either "frontend client
> >>>> >>> >>> >>>> >> >> >>>>> > producers and consumers set" or "backend
> >>>> >>> >>> >>>> >> >> >>>>> > client producers and consumers set".
> >>>> >>> >>> >>> >>>> >> >> >>>>> >
> >>>> >>> >>> >>>> >> >> >>>>> > On Sun, Mar 1, 2015 at 2:00 PM, Pieter
> >>>> >>> >>> >>>> >> >> >>>>> > Hintjens <ph at imatix.com> wrote:
> >>>> >>> >>> >>> >>>> >> >> >>>>> >>
> >>>> >>> >>> >>>> >> >> >>>>> >> My assumption is that a broker that's
> >>>> >>> >>> >>>> >> >> >>>>> >> doing a lot of service requests won't be
> >>>> >>> >>> >>>> >> >> >>>>> >> showing costs of regular expression
> >>>> >>> >>> >>>> >> >> >>>>> >> matching, compared to the workload.
> >>>> >>> >>> >>> >>>> >> >> >>>>> >>
> >>>> >>> >>> >>> >>>> >> >> >>>>> >>
> >>>> >>> >>> >>>> >> >> >>>>> >> On Sun, Mar 1, 2015 at 7:49 PM, Doron
> >>>> >>> >>> >>>> >> >> >>>>> >> Somech <somdoron at gmail.com> wrote:
> >>>> >>> >>> >>>> >> >> >>>>> >> >> I did it in actors and then moved it
> >>>> >>> >>> >>>> >> >> >>>>> >> >> back into the main server as it was
> >>>> >>> >>> >>>> >> >> >>>>> >> >> complexity for nothing (at that
> >>>> >>> >>> >>>> >> >> >>>>> >> >> stage). I'd rather design against
> >>>> >>> >>> >>>> >> >> >>>>> >> >> real use than against theory.
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >
> >>>> >>> >>> >>>> >> >> >>>>> >> > Don't you worry about the matching
> >>>> >>> >>> >>>> >> >> >>>>> >> > performance, which will happen on the
> >>>> >>> >>> >>>> >> >> >>>>> >> > main thread? Also, a usage I can see
> >>>> >>> >>> >>>> >> >> >>>>> >> > is exact matching (string comparison)
> >>>> >>> >>> >>>> >> >> >>>>> >> > over regular expressions (I usually
> >>>> >>> >>> >>>> >> >> >>>>> >> > use exact matching); this is why I
> >>>> >>> >>> >>>> >> >> >>>>> >> > think the plugin model fits the
> >>>> >>> >>> >>>> >> >> >>>>> >> > service as well.
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >
> >>>> >>> >>> >>>> >> >> >>>>> >> > On Sun, Mar 1, 2015 at 8:09 PM, Pieter
> >>>> >>> >>> >>>> >> >> >>>>> >> > Hintjens <ph at imatix.com> wrote:
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >>
> >>>> >>> >>> >>>> >> >> >>>>> >> >> On Sun, Mar 1, 2015 at 5:52 PM, Doron
> >>>> >>> >>> >>>> >> >> >>>>> >> >> Somech <somdoron at gmail.com> wrote:
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >>
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > So I went over the code, really
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > liked it. Very simple.
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >>
> >>>> >>> >>> >>>> >> >> >>>>> >> >> Thanks. I like the plugin model,
> >>>> >>> >>> >>>> >> >> >>>>> >> >> especially neat using CZMQ actors.
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >>
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > I have a question regarding
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > services: for each stream you are
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > using a dedicated thread (actor),
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > and one thread for managing
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > mailboxes. However (if I understood
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > correctly) for services you are
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > doing the processing inside the
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > server thread. Why didn't you use
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > an actor for each service, or an
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > actor to manage all services? I
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > think the matching of services can
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > be expensive and block the main
> >>>> >>> >>> >>>> >> >> >>>>> >> >> > thread.
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >>
> >>>> >>> >>> >>>> >> >> >>>>> >> >> I did it in actors and then moved it
> >>>> >>> >>> >>>> >> >> >>>>> >> >> back into the main server as it was
> >>>> >>> >>> >>>> >> >> >>>>> >> >> complexity for nothing (at that
> >>>> >>> >>> >>>> >> >> >>>>> >> >> stage). I'd rather design against
> >>>> >>> >>> >>>> >> >> >>>>> >> >> real use than against theory.
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >>
> >>>> >>> >>> >>> >>>> >> >> >>>>> >> >> -Pieter