[zeromq-dev] Malamute broker project

Pieter Hintjens ph at imatix.com
Tue Mar 3 06:57:17 CET 2015


There's no limit on mailboxes, and you don't need to manage their
lifetimes. Think of them like email addresses. Mailboxes will at some
point have to be saved to disk (they're memory-only for now).
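For illustration, here is a minimal mailbox round-trip in C. This is an editorial sketch, not part of the original thread: the endpoint and the "alice"/"bob" mailbox names are invented, and it assumes a Malamute broker is already running and that you build against libmlm.

```c
//  Sketch: two clients exchanging a message through a Malamute mailbox.
//  Assumes a broker is listening on tcp://127.0.0.1:9999 (hypothetical
//  endpoint). Build against libmlm/CZMQ.
#include <malamute.h>

int main (void)
{
    //  Each peer connects with its own mailbox address
    mlm_client_t *alice = mlm_client_new ();
    mlm_client_t *bob = mlm_client_new ();
    assert (mlm_client_connect (alice, "tcp://127.0.0.1:9999", 1000, "alice") == 0);
    assert (mlm_client_connect (bob, "tcp://127.0.0.1:9999", 1000, "bob") == 0);

    //  Send directly to bob's mailbox; no stream subscription involved
    mlm_client_sendtox (alice, "bob", "greeting", "hello", NULL);

    //  Bob reads from his own mailbox: subject first, then content frames
    char *subject, *content;
    mlm_client_recvx (bob, &subject, &content, NULL);
    printf ("%s: %s\n", subject, content);

    zstr_free (&subject);
    zstr_free (&content);
    mlm_client_destroy (&alice);
    mlm_client_destroy (&bob);
    return 0;
}
```

Each connect call names a distinct mailbox; that per-peer address is what sendto/sendtox routes on.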

On Tue, Mar 3, 2015 at 4:15 AM, Kenneth Adam Miller
<kennethadammiller at gmail.com> wrote:
> Is there any limit to the number of mailboxes on the malamute broker? How do
> I manage mailbox lifetimes, or is that something that I need to consider?
>
> On Mon, Mar 2, 2015 at 8:21 PM, Kenneth Adam Miller
> <kennethadammiller at gmail.com> wrote:
>>
>> Ah, now that I considered it in the context of each side having many
>> clients, I can see why sharing mailboxes is undesirable. You want a
>> service to hand out a mailbox name so that messages can be retrieved
>> individually, per client. I finally understand. Some concepts take a
>> while to sink in; the ZMQ manual had a learning curve. But I like this!
>>
>> In any case, I'm figuring out how to use the API to do what you said,
>> since it was kind of ambiguous. Thanks so much for the help.
>>
>> On Mon, Mar 2, 2015 at 5:22 PM, Pieter Hintjens <ph at imatix.com> wrote:
>>>
>>> Each peer has to have its own mailbox, yes.
>>>
>>> On Mon, Mar 2, 2015 at 10:25 PM, Kenneth Adam Miller
>>> <kennethadammiller at gmail.com> wrote:
>>> > Yeah that fixed it!
>>> > Now I just have to iron out what precisely is concurrent.
>>> >
>>> > On Mon, Mar 2, 2015 at 4:24 PM, Kenneth Adam Miller
>>> > <kennethadammiller at gmail.com> wrote:
>>> >>
>>> >> Wait, is it because each of the peers have specified the same mailbox?
>>> >>
>>> >> On Mon, Mar 2, 2015 at 4:12 PM, Kenneth Adam Miller
>>> >> <kennethadammiller at gmail.com> wrote:
>>> >>>
>>> >>> But only one side gets a message from the broker. The other side just
>>> >>> freezes.
>>> >>>
>>> >>> On Mon, Mar 2, 2015 at 3:54 PM, Pieter Hintjens <ph at imatix.com>
>>> >>> wrote:
>>> >>>>
>>> >>>> Sure, it'd work as subjects.
>>> >>>>
>>> >>>> On Mon, Mar 2, 2015 at 8:07 PM, Kenneth Adam Miller
>>> >>>> <kennethadammiller at gmail.com> wrote:
>>> >>>> > What are the two messages SET and GET that you're talking about?
>>> >>>> > Are you saying that the sendfor parameter char *address is
>>> >>>> > "ADDRESS", and the subject is "GET" or "SET" depending on whether
>>> >>>> > or not there should be a read, with the contents being the actual
>>> >>>> > "tcp://some_IP:some_port"? Or should I actually author the
>>> >>>> > brokering protocol myself, using actual broker commands SET and
>>> >>>> > GET?
>>> >>>> >
>>> >>>> > Is sendfor exchangeable for sendforx in this context?
>>> >>>> >
>>> >>>> > I changed it to this, trying to follow mlm_client.c:
>>> >>>> >
>>> >>>> > char *exchange_addresses (std::string consumer_topic,
>>> >>>> >                           std::string production_topic,
>>> >>>> >                           std::string toSend) {
>>> >>>> >   mlm_client_t *client_reader = mlm_client_new ();
>>> >>>> >   assert (client_reader);
>>> >>>> >   mlm_client_t *client_writer = mlm_client_new ();
>>> >>>> >   assert (client_writer);
>>> >>>> >
>>> >>>> >   int rc = mlm_client_connect (client_reader, "tcp://127.0.0.1:9999", 3000, "ADDRESS");
>>> >>>> >   assert (rc == 0);
>>> >>>> >   rc = mlm_client_connect (client_writer, "tcp://127.0.0.1:9999", 3000, "");
>>> >>>> >   assert (rc == 0);
>>> >>>> >
>>> >>>> >   std::cout << "producing to topic: " << production_topic << std::endl;
>>> >>>> >   std::cout << "consuming from topic: " << consumer_topic << std::endl;
>>> >>>> >   if (!mlm_client_sendtox (client_writer, "ADDRESS", "SET", toSend.c_str (), NULL))
>>> >>>> >     std::cout << "client sent message" << std::endl;
>>> >>>> >   else
>>> >>>> >     std::cerr << "error sending message" << std::endl;
>>> >>>> >
>>> >>>> >   char *subject, *content, *attach;
>>> >>>> >   std::cerr << consumer_topic << " receiving message" << std::endl;
>>> >>>> >   mlm_client_recvx (client_reader, &subject, &content, &attach, NULL);
>>> >>>> >   mlm_client_destroy (&client_writer);
>>> >>>> >   mlm_client_destroy (&client_reader);
>>> >>>> >   std::cout << "received: \"" << subject << "\" :" << content << "." << std::endl;
>>> >>>> >   zstr_free (&subject);
>>> >>>> >   return content;
>>> >>>> > }
>>> >>>> >
>>> >>>> >
>>> >>>> > I get one of the set messages, but
>>> >>>> >
>>> >>>> > On Mon, Mar 2, 2015 at 12:59 PM, Pieter Hintjens <ph at imatix.com>
>>> >>>> > wrote:
>>> >>>> >>
>>> >>>> >> Sure, it's much more fun if you write this up.
>>> >>>> >>
>>> >>>> >> On Mon, Mar 2, 2015 at 6:56 PM, Kenneth Adam Miller
>>> >>>> >> <kennethadammiller at gmail.com> wrote:
>>> >>>> >> > I can help you with writing an article :)
>>> >>>> >> >
>>> >>>> >> > I was literally discovering what you were telling me, since
>>> >>>> >> > sometimes, about 1/4 of the time, it would succeed. What you say
>>> >>>> >> > explains what I was seeing; I was literally writing you an email
>>> >>>> >> > about it.
>>> >>>> >> >
>>> >>>> >> > Let me try to work out what you're saying, then I can post what I
>>> >>>> >> > established to a public github repo :)
>>> >>>> >> >
>>> >>>> >> > On Mon, Mar 2, 2015 at 12:50 PM, Pieter Hintjens
>>> >>>> >> > <ph at imatix.com>
>>> >>>> >> > wrote:
>>> >>>> >> >>
>>> >>>> >> >> The problem with streams is there's no persistence yet, so both
>>> >>>> >> >> peers have to be present at the same time. A name
>>> >>>> >> >> registration/lookup service is probably better.
>>> >>>> >> >>
>>> >>>> >> >> Yes, the set_worker call offers a service. I'd do this:
>>> >>>> >> >>
>>> >>>> >> >> - offer a service "ADDRESS" using set_worker
>>> >>>> >> >> - two messages: SET and GET, each taking a name/value (use
>>> >>>> >> >>   frames or any other encoding you like)
>>> >>>> >> >> - use the sendfor method to send the request
>>> >>>> >> >> - use the sendto method to send the replies, which end in a
>>> >>>> >> >>   client's mailbox
>>> >>>> >> >> - read the replies using the recv method
>>> >>>> >> >>
>>> >>>> >> >> For this to work, peers need to specify a mailbox address in the
>>> >>>> >> >> connect method.
>>> >>>> >> >>
>>> >>>> >> >> If you like I'll write an article and make examples.
>>> >>>> >> >>
>>> >>>> >> >> -Pieter
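[For illustration only, an editorial sketch of that SET/GET design in C against the mlm_client API. The endpoint, the "address-service" worker address, and the payload layout (a name frame, then a value frame; the value is an empty string for GET) are all assumptions, and a Malamute broker must already be running.]

```c
//  Sketch of the name registration/lookup service outlined above.
//  The worker offers service "ADDRESS"; clients send SET/GET requests
//  via sendfor, and replies land in each client's own mailbox.
//  Hypothetical endpoint/names; build against libmlm/CZMQ.
#include <malamute.h>

int main (void)
{
    zhash_t *names = zhash_new ();          //  name -> registered value

    mlm_client_t *worker = mlm_client_new ();
    int rc = mlm_client_connect (worker, "tcp://127.0.0.1:9999", 1000, "address-service");
    assert (rc == 0);
    mlm_client_set_worker (worker, "ADDRESS", "*");  //  accept all subjects

    while (!zsys_interrupted) {
        char *subject, *name, *value;
        //  recvx yields the subject, then the content frames as strings
        if (mlm_client_recvx (worker, &subject, &name, &value, NULL) == -1)
            break;                          //  interrupted
        if (streq (subject, "SET"))
            zhash_update (names, name, strdup (value));
        else
        if (streq (subject, "GET")) {
            char *stored = (char *) zhash_lookup (names, name);
            //  The reply goes to the requester's own mailbox
            mlm_client_sendtox (worker, mlm_client_sender (worker),
                                name, stored? stored: "", NULL);
        }
        zstr_free (&subject);
        zstr_free (&name);
        zstr_free (&value);
    }
    zhash_destroy (&names);
    mlm_client_destroy (&worker);
    return 0;
}
```

A client would then connect with its own mailbox address, issue requests with mlm_client_sendforx (client, "ADDRESS", "SET", name, value, NULL), and read the reply from its mailbox with mlm_client_recvx.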
>>> >>>> >> >>
>>> >>>> >> >> On Mon, Mar 2, 2015 at 6:45 PM, Kenneth Adam Miller
>>> >>>> >> >> <kennethadammiller at gmail.com> wrote:
>>> >>>> >> >> > I got it to work by setting the subscribed topic to "inproc*"
>>> >>>> >> >> > on the mlm_client_set_worker call.
>>> >>>> >> >> >
>>> >>>> >> >> > On Mon, Mar 2, 2015 at 12:07 PM, Kenneth Adam Miller
>>> >>>> >> >> > <kennethadammiller at gmail.com> wrote:
>>> >>>> >> >> >>
>>> >>>> >> >> >> Ok after looking at mlm_client.c, I have the following:
>>> >>>> >> >> >>
>>> >>>> >> >> >> Two concurrent calls to exchange_addresses with the following
>>> >>>> >> >> >> parameters:
>>> >>>> >> >> >>
>>> >>>> >> >> >> //thread 1
>>> >>>> >> >> >>   char *servrAddr = exchange_addresses ("backendEndpoints",
>>> >>>> >> >> >>       "frontendEndpoints", "inproc://frontend");
>>> >>>> >> >> >> //thread 2
>>> >>>> >> >> >>   char *servrAddr = exchange_addresses ("frontendEndpoints",
>>> >>>> >> >> >>       "backendEndpoints", "inproc://backend");
>>> >>>> >> >> >>
>>> >>>> >> >> >> Where exchange_addresses is implemented as:
>>> >>>> >> >> >>
>>> >>>> >> >> >> char *exchange_addresses (std::string consumer_topic,
>>> >>>> >> >> >>                           std::string production_topic,
>>> >>>> >> >> >>                           std::string toSend) {
>>> >>>> >> >> >>   mlm_client_t *client_reader = mlm_client_new ();
>>> >>>> >> >> >>   assert (client_reader);
>>> >>>> >> >> >>   mlm_client_t *client_writer = mlm_client_new ();
>>> >>>> >> >> >>   assert (client_writer);
>>> >>>> >> >> >>
>>> >>>> >> >> >>   int rc = mlm_client_connect (client_reader, "tcp://127.0.0.1:9999", 3000, "");
>>> >>>> >> >> >>   assert (rc == 0);
>>> >>>> >> >> >>   rc = mlm_client_connect (client_writer, "tcp://127.0.0.1:9999", 3000, "");
>>> >>>> >> >> >>   assert (rc == 0);
>>> >>>> >> >> >>
>>> >>>> >> >> >>   std::cout << "producing to topic: " << production_topic << std::endl;
>>> >>>> >> >> >>   std::cout << "consuming from topic: " << consumer_topic << std::endl;
>>> >>>> >> >> >>   mlm_client_set_worker (client_reader, consumer_topic.c_str (), "*");
>>> >>>> >> >> >>   if (!mlm_client_sendforx (client_writer, production_topic.c_str (), toSend.c_str (), "", NULL))
>>> >>>> >> >> >>     std::cout << "client sent message" << std::endl;
>>> >>>> >> >> >>   else
>>> >>>> >> >> >>     std::cerr << "error sending message" << std::endl;
>>> >>>> >> >> >>
>>> >>>> >> >> >>   char *subject, *content;
>>> >>>> >> >> >>   mlm_client_recvx (client_reader, &subject, &content, NULL);  // <-- blocking here
>>> >>>> >> >> >>   mlm_client_destroy (&client_writer);
>>> >>>> >> >> >>   mlm_client_destroy (&client_reader);
>>> >>>> >> >> >>   std::cout << "received: " << subject << " " << content << std::endl;
>>> >>>> >> >> >>   return content;
>>> >>>> >> >> >> }
>>> >>>> >> >> >>
>>> >>>> >> >> >>
>>> >>>> >> >> >> Problem is, both threads block at mlm_client_recvx... Going
>>> >>>> >> >> >> by the example, it looks correct.
>>> >>>> >> >> >>
>>> >>>> >> >> >> On Mon, Mar 2, 2015 at 11:30 AM, Kenneth Adam Miller
>>> >>>> >> >> >> <kennethadammiller at gmail.com> wrote:
>>> >>>> >> >> >>>
>>> >>>> >> >> >>> Oh you mean with mlm_client_set_worker! Do I do set_worker
>>> >>>> >> >> >>> on each side with different service names? How does a
>>> >>>> >> >> >>> client get a specific service?
>>> >>>> >> >> >>>
>>> >>>> >> >> >>> On Mon, Mar 2, 2015 at 11:26 AM, Kenneth Adam Miller
>>> >>>> >> >> >>> <kennethadammiller at gmail.com> wrote:
>>> >>>> >> >> >>>>
>>> >>>> >> >> >>>> Service semantics? I don't know what those are...
>>> >>>> >> >> >>>> I've read what tutorials I think there are. I have some
>>> >>>> >> >> >>>> questions about how things are forwarded; I want only
>>> >>>> >> >> >>>> one-to-one pairing... I'm not sure if what I'm doing is
>>> >>>> >> >> >>>> setting up for publishing and subscriptions. There was a
>>> >>>> >> >> >>>> lot of talk about some of the other features in the
>>> >>>> >> >> >>>> malamute manual/whitepaper, and it's kind of confusing.
>>> >>>> >> >> >>>> Basically, I just want FCFS exchange of information for
>>> >>>> >> >> >>>> mutually requiring parties.
>>> >>>> >> >> >>>> On Mon, Mar 2, 2015 at 4:13 AM, Pieter Hintjens
>>> >>>> >> >> >>>> <ph at imatix.com>
>>> >>>> >> >> >>>> wrote:
>>> >>>> >> >> >>>>>
>>> >>>> >> >> >>>>> The simplest way to make a lookup service is using the
>>> >>>> >> >> >>>>> service semantics, and the lookup service can talk to the
>>> >>>> >> >> >>>>> broker over inproc or tcp as it wants (it could be a
>>> >>>> >> >> >>>>> thread in the same process, or a separate process).
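[An editorial sketch of the in-process variant: the broker runs as a CZMQ actor in the same process and the lookup service connects to it over inproc. The endpoint and names are hypothetical; assumes libmlm.]

```c
//  Sketch: Malamute broker and lookup service in one process,
//  talking over inproc. Hypothetical names; build against libmlm.
#include <malamute.h>

int main (void)
{
    //  Start a Malamute broker as a CZMQ actor inside this process
    zactor_t *broker = zactor_new (mlm_server, NULL);
    zstr_sendx (broker, "BIND", "inproc://lookup-broker", NULL);

    //  The lookup service connects over inproc (tcp would work the same)
    mlm_client_t *service = mlm_client_new ();
    assert (mlm_client_connect (service, "inproc://lookup-broker", 1000, "lookup") == 0);
    mlm_client_set_worker (service, "ADDRESS", "*");

    //  ... the service request loop would go here ...

    mlm_client_destroy (&service);
    zactor_destroy (&broker);
    return 0;
}
```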
>>> >>>> >> >> >>>>>
>>> >>>> >> >> >>>>> On Sun, Mar 1, 2015 at 9:00 PM, Kenneth Adam Miller
>>> >>>> >> >> >>>>> <kennethadammiller at gmail.com> wrote:
>>> >>>> >> >> >>>>> > So, in order to manage a mutual exchange of addresses
>>> >>>> >> >> >>>>> > between two concurrent parties, I thought that on each
>>> >>>> >> >> >>>>> > side I would have a producer produce to a topic that the
>>> >>>> >> >> >>>>> > opposite side was subscribed to. That means that each
>>> >>>> >> >> >>>>> > side is both a producer and a consumer.
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> > I have the two entities running in parallel. The front
>>> >>>> >> >> >>>>> > end client connects to the malamute broker, subscribes
>>> >>>> >> >> >>>>> > to the backendEndpoints topic, and then produces its
>>> >>>> >> >> >>>>> > endpoint to the frontendEndpoints topic.
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> > The opposite side does the same thing, with the back end
>>> >>>> >> >> >>>>> > subscribing to frontendEndpoints and producing to
>>> >>>> >> >> >>>>> > backendEndpoints.
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> > The problem is that if the front end and back end are in
>>> >>>> >> >> >>>>> > their own threads, then only the thread that completes
>>> >>>> >> >> >>>>> > the mlm_set_producer and mlm_set_consumer calls
>>> >>>> >> >> >>>>> > proceeds. The one that didn't make it that far will hang
>>> >>>> >> >> >>>>> > at that mlm_set_x pair point...
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> > code:
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >   std::cout << "connectToFrontEnd" << std::endl;
>>> >>>> >> >> >>>>> >   mlm_client_t *frontend_reader = mlm_client_new ();
>>> >>>> >> >> >>>>> >   assert (frontend_reader);
>>> >>>> >> >> >>>>> >   mlm_client_t *frontend_writer = mlm_client_new ();
>>> >>>> >> >> >>>>> >   assert (frontend_writer);
>>> >>>> >> >> >>>>> >   int rc = mlm_client_connect (frontend_reader, "tcp://127.0.0.1:9999", 1000, "reader/secret");
>>> >>>> >> >> >>>>> >   assert (rc == 0);
>>> >>>> >> >> >>>>> >   rc = mlm_client_connect (frontend_writer, "tcp://127.0.0.1:9999", 1000, "writer/secret");
>>> >>>> >> >> >>>>> >   assert (rc == 0);
>>> >>>> >> >> >>>>> >   std::cout << "frontend mlm clients connected" << std::endl;
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >   mlm_client_set_consumer (frontend_reader, "backendEndpoints", "*");
>>> >>>> >> >> >>>>> >   mlm_client_set_producer (frontend_writer, "frontendEndpoints");
>>> >>>> >> >> >>>>> >   std::cout << "frontend client producers and consumers set" << std::endl;
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> > The code looks almost exactly the same for the backend,
>>> >>>> >> >> >>>>> > apart from some variable names and other small changes.
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >   std::cout << "connectToBackEnd" << std::endl;
>>> >>>> >> >> >>>>> >   mlm_client_t *backend_reader = mlm_client_new ();
>>> >>>> >> >> >>>>> >   assert (backend_reader);
>>> >>>> >> >> >>>>> >   mlm_client_t *backend_writer = mlm_client_new ();
>>> >>>> >> >> >>>>> >   assert (backend_writer);
>>> >>>> >> >> >>>>> >   int rc = mlm_client_connect (backend_reader, "tcp://127.0.0.1:9999", 1000, "reader/secret");
>>> >>>> >> >> >>>>> >   assert (rc == 0);
>>> >>>> >> >> >>>>> >   rc = mlm_client_connect (backend_writer, "tcp://127.0.0.1:9999", 1000, "writer/secret");
>>> >>>> >> >> >>>>> >   assert (rc == 0);
>>> >>>> >> >> >>>>> >   std::cout << "backend mlm clients connected" << std::endl;
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >   mlm_client_set_consumer (backend_reader, "frontendEndpoints", "*");
>>> >>>> >> >> >>>>> >   mlm_client_set_producer (backend_writer, "backendEndpoints");
>>> >>>> >> >> >>>>> >   std::cout << "backend client producers and consumers set" << std::endl;
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> > I only ever see either "frontend client producers and
>>> >>>> >> >> >>>>> > consumers set" or "backend client producers and
>>> >>>> >> >> >>>>> > consumers set".
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> > On Sun, Mar 1, 2015 at 2:00 PM, Pieter Hintjens
>>> >>>> >> >> >>>>> > <ph at imatix.com>
>>> >>>> >> >> >>>>> > wrote:
>>> >>>> >> >> >>>>> >>
>>> >>>> >> >> >>>>> >> My assumption is that a broker that's doing a lot of
>>> >>>> >> >> >>>>> >> service requests won't show the cost of regular
>>> >>>> >> >> >>>>> >> expression matching, compared to the workload.
>>> >>>> >> >> >>>>> >>
>>> >>>> >> >> >>>>> >>
>>> >>>> >> >> >>>>> >> On Sun, Mar 1, 2015 at 7:49 PM, Doron Somech
>>> >>>> >> >> >>>>> >> <somdoron at gmail.com>
>>> >>>> >> >> >>>>> >> wrote:
>>> >>>> >> >> >>>>> >> >> I did it in actors and then moved it back into the
>>> >>>> >> >> >>>>> >> >> main server as it was complexity for nothing (at
>>> >>>> >> >> >>>>> >> >> that stage). I'd rather design against real use than
>>> >>>> >> >> >>>>> >> >> against theory.
>>> >>>> >> >> >>>>> >> >
>>> >>>> >> >> >>>>> >> > Don't you worry about the matching performance, which
>>> >>>> >> >> >>>>> >> > will happen on the main thread? Also, a usage I can
>>> >>>> >> >> >>>>> >> > see is exact matching (string comparison) over
>>> >>>> >> >> >>>>> >> > regular expressions (I usually use exact matching);
>>> >>>> >> >> >>>>> >> > this is why I think the plugin model fits the service
>>> >>>> >> >> >>>>> >> > as well.
>>> >>>> >> >> >>>>> >> >
>>> >>>> >> >> >>>>> >> > On Sun, Mar 1, 2015 at 8:09 PM, Pieter Hintjens
>>> >>>> >> >> >>>>> >> > <ph at imatix.com>
>>> >>>> >> >> >>>>> >> > wrote:
>>> >>>> >> >> >>>>> >> >>
>>> >>>> >> >> >>>>> >> >> On Sun, Mar 1, 2015 at 5:52 PM, Doron Somech
>>> >>>> >> >> >>>>> >> >> <somdoron at gmail.com>
>>> >>>> >> >> >>>>> >> >> wrote:
>>> >>>> >> >> >>>>> >> >>
>>> >>>> >> >> >>>>> >> >> > So I went over the code, really liked it. Very
>>> >>>> >> >> >>>>> >> >> > simple.
>>> >>>> >> >> >>>>> >> >>
>>> >>>> >> >> >>>>> >> >> Thanks. I like the plugin model; it's especially
>>> >>>> >> >> >>>>> >> >> neat using CZMQ actors.
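[For readers unfamiliar with the actor model mentioned here: a minimal CZMQ actor, independent of Malamute, looks like the sketch below. The echo behavior is an invented example; assumes CZMQ is installed.]

```c
//  Minimal CZMQ actor: replies "pong" to "ping" until it receives $TERM.
#include <czmq.h>

static void
echo_actor (zsock_t *pipe, void *args)
{
    zsock_signal (pipe, 0);             //  Tell the caller we're ready
    while (true) {
        char *command = zstr_recv (pipe);
        if (!command || streq (command, "$TERM")) {
            zstr_free (&command);       //  zactor_destroy sent $TERM
            break;
        }
        if (streq (command, "ping"))
            zstr_send (pipe, "pong");
        zstr_free (&command);
    }
}

int main (void)
{
    zactor_t *actor = zactor_new (echo_actor, NULL);
    zstr_send (actor, "ping");
    char *reply = zstr_recv (actor);
    assert (streq (reply, "pong"));
    zstr_free (&reply);
    zactor_destroy (&actor);            //  Sends $TERM and joins the actor
    return 0;
}
```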
>>> >>>> >> >> >>>>> >> >>
>>> >>>> >> >> >>>>> >> >> > I have a question regarding services: for each
>>> >>>> >> >> >>>>> >> >> > stream you are using a dedicated thread (actor),
>>> >>>> >> >> >>>>> >> >> > and one thread for managing mailboxes. However (if
>>> >>>> >> >> >>>>> >> >> > I understood correctly), for services you are
>>> >>>> >> >> >>>>> >> >> > doing the processing inside the server thread. Why
>>> >>>> >> >> >>>>> >> >> > didn't you use an actor for each service, or one
>>> >>>> >> >> >>>>> >> >> > actor to manage all services? I think the matching
>>> >>>> >> >> >>>>> >> >> > of services can be expensive and could block the
>>> >>>> >> >> >>>>> >> >> > main thread.
>>> >>>> >> >> >>>>> >> >>
>>> >>>> >> >> >>>>> >> >> I did it in actors and then moved it back into the
>>> >>>> >> >> >>>>> >> >> main server as it was complexity for nothing (at
>>> >>>> >> >> >>>>> >> >> that stage). I'd rather design against real use than
>>> >>>> >> >> >>>>> >> >> against theory.
>>> >>>> >> >> >>>>> >> >>
>>> >>>> >> >> >>>>> >> >> -Pieter
>>> >>>> >> >> >>>>> >> >> _______________________________________________
>>> >>>> >> >> >>>>> >> >> zeromq-dev mailing list
>>> >>>> >> >> >>>>> >> >> zeromq-dev at lists.zeromq.org
>>> >>>> >> >> >>>>> >> >>
>>> >>>> >> >> >>>>> >> >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>> >>>> >> >> >>>>> >> >
>>> >>>> >> >> >>>>> >> >
>>> >>>> >> >> >>>>> >> >
>>> >>>> >> >> >>>>> >> >
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>> >
>>> >>>> >> >> >>>>
>>> >>>> >> >> >>>>
>>> >>>> >> >> >>>
>>> >>>> >> >> >>
>>> >>>> >> >> >
>>> >>>> >> >> >
>>> >>>> >> >> >
>>> >>>> >> >
>>> >>>> >> >
>>> >>>> >> >
>>> >>>> >> >
>>> >>>> >
>>> >>>> >
>>> >>>> >
>>> >>>> >
>>> >>>
>>> >>>
>>> >>
>>> >
>>> >
>>> >
>>
>>
>
>
>
