[zeromq-dev] Malamute broker project
Kenneth Adam Miller
kennethadammiller at gmail.com
Mon Mar 2 18:45:58 CET 2015
I got it to work by setting the subscribed topic (the pattern argument) to
"inproc*" in the mlm_client_set_worker call.
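For anyone who hits the same thing: as far as I can tell the pattern
argument is treated as a regular expression, so a bare "*" matches nothing,
while "inproc*" happens to match my subjects, which all begin with
"inproc://". The call that worked for me:

    // Third argument is a regex matched against request subjects;
    // a bare "*" is not a usable pattern.
    mlm_client_set_worker(client_reader, consumer_topic.c_str(), "inproc*");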
On Mon, Mar 2, 2015 at 12:07 PM, Kenneth Adam Miller <
kennethadammiller at gmail.com> wrote:
> OK, after looking at mlm_client.c, I have the following:
>
> Two concurrent calls to exchange_addresses with the following parameters:
>
> // thread 1
> char *servrAddr = exchange_addresses("backendEndpoints",
>                                      "frontendEndpoints", "inproc://frontend");
> // thread 2
> char *servrAddr = exchange_addresses("frontendEndpoints",
>                                      "backendEndpoints", "inproc://backend");
>
> Where exchange_addresses is implemented as:
>
> char *exchange_addresses(std::string consumer_topic,
>                          std::string production_topic, std::string toSend) {
>     mlm_client_t *client_reader = mlm_client_new();
>     assert(client_reader);
>     mlm_client_t *client_writer = mlm_client_new();
>     assert(client_writer);
>
>     int rc = mlm_client_connect(client_reader, "tcp://127.0.0.1:9999", 3000, "");
>     assert(rc == 0);
>     rc = mlm_client_connect(client_writer, "tcp://127.0.0.1:9999", 3000, "");
>     assert(rc == 0);
>
>     std::cout << "producing to topic: " << production_topic << std::endl;
>     std::cout << "consuming from topic: " << consumer_topic << std::endl;
>     mlm_client_set_worker(client_reader, consumer_topic.c_str(), "*");
>     if (!mlm_client_sendforx(client_writer, production_topic.c_str(),
>                              toSend.c_str(), "", NULL))
>         std::cout << "client sent message" << std::endl;
>     else
>         std::cerr << "error sending message" << std::endl;
>
>     char *subject, *content;
>     mlm_client_recvx(client_reader, &subject, &content, NULL); // <-- blocking here
>     mlm_client_destroy(&client_writer);
>     mlm_client_destroy(&client_reader);
>     std::cout << "received: " << subject << " " << content << std::endl;
>     return content;
> }
>
>
> The problem is, both threads block at mlm_client_recvx... Going by the
> examples, it looks correct.
>
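Following up on my own message: the culprit was the "*" pattern passed to
mlm_client_set_worker; as far as I can tell the pattern is a regular
expression, a bare "*" never matches, so nothing is ever delivered and
recvx blocks forever. A sketch of the fixed tail of exchange_addresses
(note the endpoint travels in the subject here, since content is sent as
""; the cleanup assumes recvx hands ownership of the strings to the caller):

    // "inproc*" is a regex matching subjects that start with "inproc"
    // (all of mine begin with "inproc://"); the bare "*" never matched.
    mlm_client_set_worker(client_reader, consumer_topic.c_str(), "inproc*");
    mlm_client_sendforx(client_writer, production_topic.c_str(),
                        toSend.c_str(), "", NULL);

    char *subject = NULL, *content = NULL;
    mlm_client_recvx(client_reader, &subject, &content, NULL); // now returns
    std::string peer_endpoint = subject ? subject : "";
    zstr_free(&subject);
    zstr_free(&content);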
> On Mon, Mar 2, 2015 at 11:30 AM, Kenneth Adam Miller <
> kennethadammiller at gmail.com> wrote:
>
>> Oh you mean with mlm_client_set_worker! Do I do set_worker on each side
>> with different service names? How does a client get a specific service?
>>
>> On Mon, Mar 2, 2015 at 11:26 AM, Kenneth Adam Miller <
>> kennethadammiller at gmail.com> wrote:
>>
>>> Service semantics? I don't know what those are...
>>> I read what tutorials I think there are. I have some questions about
>>> how things are forwarded; I want only one-to-one pairing... I'm not
>>> sure whether what I'm doing sets up publishing and subscriptions
>>> correctly. There was a lot of talk about some of the other features in
>>> the Malamute manual/whitepaper, and it's kind of confusing. Basically,
>>> I just want first-come-first-served (FCFS) exchange of information
>>> between mutually requiring parties.
>>>
>>> On Mon, Mar 2, 2015 at 4:13 AM, Pieter Hintjens <ph at imatix.com> wrote:
>>>
>>>> The simplest way to make a lookup service is to use the service
>>>> semantics, and the lookup service can talk to the broker over inproc
>>>> or tcp as it wants (it could be a thread in the same process, or a
>>>> separate process).
>>>>
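For later readers, here is my understanding of the service semantics
Pieter is describing, sketched with the Malamute client API (the service
name "lookup", the endpoints, and the client addresses below are made up
for illustration):

    #include <malamute.h>

    // Worker side: offer a "lookup" service. The pattern is a regex over
    // request subjects; ".*" accepts all of them.
    mlm_client_t *worker = mlm_client_new();
    mlm_client_connect(worker, "tcp://127.0.0.1:9999", 1000, "lookup-worker");
    mlm_client_set_worker(worker, "lookup", ".*");

    char *subject, *content;
    mlm_client_recvx(worker, &subject, &content, NULL);
    // Reply straight to the requester's mailbox.
    mlm_client_sendtox(worker, mlm_client_sender(worker),
                       subject, "inproc://frontend", NULL);

    // Requester side: send a service request, then wait for the reply.
    mlm_client_t *requester = mlm_client_new();
    mlm_client_connect(requester, "tcp://127.0.0.1:9999", 1000, "lookup-user");
    mlm_client_sendforx(requester, "lookup", "where-is-frontend", "", NULL);
    mlm_client_recvx(requester, &subject, &content, NULL);

As I understand it, the broker queues each request and hands it to one
matching worker, which is the first-come-first-served behaviour I was after.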
>>>> On Sun, Mar 1, 2015 at 9:00 PM, Kenneth Adam Miller
>>>> <kennethadammiller at gmail.com> wrote:
>>>> > So, in order to manage a mutual exchange of addresses between two
>>>> > concurrent parties, I thought that on each side I would have a
>>>> > producer produce to a topic that the opposite side was subscribed
>>>> > to. That means each side is both a producer and a consumer.
>>>> >
>>>> > I have the two entities running in parallel. The front end client
>>>> > connects to the Malamute broker, subscribes to the backendEndpoints
>>>> > topic, and then produces its endpoint to the frontendEndpoints topic.
>>>> >
>>>> > The opposite side does the same thing, with the back end subscribing
>>>> > to frontendEndpoints and producing to backendEndpoints.
>>>> >
>>>> > The problem is that if the front end and back end are in their own
>>>> > threads, then only the thread that completes the
>>>> > mlm_client_set_producer and mlm_client_set_consumer calls proceeds.
>>>> > The one that didn't make it that far will hang at that
>>>> > mlm_client_set_x pair...
>>>> >
>>>> > code:
>>>> >
>>>> > std::cout << "connectToFrontEnd" << std::endl;
>>>> > mlm_client_t *frontend_reader = mlm_client_new();
>>>> > assert(frontend_reader);
>>>> > mlm_client_t *frontend_writer = mlm_client_new();
>>>> > assert(frontend_writer);
>>>> > int rc=mlm_client_connect (frontend_reader, "tcp://127.0.0.1:9999"
>>>> ,
>>>> > 1000, "reader/secret");
>>>> > assert(rc==0);
>>>> > rc=mlm_client_connect (frontend_writer, "tcp://127.0.0.1:9999" ,
>>>> 1000,
>>>> > "writer/secret");
>>>> > assert(rc==0);
>>>> > std::cout << "frontend mlm clients connected" << std::endl;
>>>> >
>>>> > mlm_client_set_consumer(frontend_reader, "backendEndpoints", "*");
>>>> > mlm_client_set_producer(frontend_writer, "frontendEndpoints");
>>>> > std::cout << "frontend client producers and consumers set" <<
>>>> std::endl;
>>>> >
>>>> >
>>>> > The code looks almost exactly the same for the backend, but with
>>>> > some variable renames and other small changes.
>>>> >
>>>> > std::cout << "connectToBackEnd" << std::endl;
>>>> > mlm_client_t *backend_reader = mlm_client_new();
>>>> > assert(backend_reader);
>>>> > mlm_client_t *backend_writer = mlm_client_new();
>>>> > assert(backend_writer);
>>>> > int rc=mlm_client_connect(backend_reader,"tcp://127.0.0.1:9999",
>>>> 1000,
>>>> > "reader/secret");
>>>> > assert(rc==0);
>>>> > rc=mlm_client_connect(backend_writer,"tcp://127.0.0.1:9999", 1000,
>>>> > "writer/secret");
>>>> > assert(rc==0);
>>>> > std::cout << "backend mlm clients connected" << std::endl;
>>>> >
>>>> > mlm_client_set_consumer(backend_reader, "frontendEndpoints", "*");
>>>> > mlm_client_set_producer(backend_writer, "backendEndpoints");
>>>> > std::cout << "backend client producers and consumers set" <<
>>>> std::endl;
>>>> >
>>>> > I will only ever see either "frontend client producers and consumers
>>>> > set" or "backend client producers and consumers set".
>>>> >
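The same "*" pattern shows up in the set_consumer calls above. Also, both
threads connect with the same address strings ("reader/secret" and
"writer/secret"); I don't know how the broker treats two live clients
sharing one address, so giving every client a unique address removes that
variable. A minimal stream pair I would expect to work, assuming patterns
are regexes (hence ".*" rather than "*"):

    #include <malamute.h>

    // Consumer: subscribe to a stream; the regex ".*" matches any subject.
    mlm_client_t *reader = mlm_client_new();
    int rc = mlm_client_connect(reader, "tcp://127.0.0.1:9999",
                                1000, "demo-reader");
    assert(rc == 0);
    mlm_client_set_consumer(reader, "backendEndpoints", ".*");

    // Producer: publish an endpoint on the same stream.
    mlm_client_t *writer = mlm_client_new();
    rc = mlm_client_connect(writer, "tcp://127.0.0.1:9999",
                            1000, "demo-writer");
    assert(rc == 0);
    mlm_client_set_producer(writer, "backendEndpoints");
    mlm_client_sendx(writer, "endpoint", "inproc://backend", NULL);

    char *subject, *content;
    mlm_client_recvx(reader, &subject, &content, NULL);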
>>>> > On Sun, Mar 1, 2015 at 2:00 PM, Pieter Hintjens <ph at imatix.com> wrote:
>>>> >>
>>>> >> My assumption is that a broker that's doing a lot of service
>>>> >> requests won't show the cost of regular-expression matching,
>>>> >> compared to the workload.
>>>> >>
>>>> >>
>>>> >> On Sun, Mar 1, 2015 at 7:49 PM, Doron Somech <somdoron at gmail.com> wrote:
>>>> >> >> I did it in actors and then moved it back into the main server
>>>> >> >> as it was complexity for nothing (at that stage). I'd rather
>>>> >> >> design against real use than against theory.
>>>> >> >
>>>> >> > Don't you worry about the matching performance, which will happen
>>>> >> > on the main thread? Also, one usage I can see is exact matching
>>>> >> > (string comparison) instead of regular expressions (I usually use
>>>> >> > exact matching); this is why I think the plugin model fits
>>>> >> > services as well.
>>>> >> >
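To illustrate the trade-off Doron is describing, a hedged sketch using
CZMQ's zrex class next to a plain string comparison (the pattern and the
subject are made up):

    #include <czmq.h>

    // Regex match: compile, then scan the subject for each lookup.
    zrex_t *rex = zrex_new("inproc.*");
    bool hit_regex = zrex_matches(rex, "inproc://frontend");
    zrex_destroy(&rex);

    // Exact match: a single string comparison, cheap enough that matching
    // on the main thread is unlikely to hurt.
    bool hit_exact = streq("inproc://frontend", "inproc://frontend");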
>>>> >> > On Sun, Mar 1, 2015 at 8:09 PM, Pieter Hintjens <ph at imatix.com> wrote:
>>>> >> >>
>>>> >> >> On Sun, Mar 1, 2015 at 5:52 PM, Doron Somech <somdoron at gmail.com> wrote:
>>>> >> >>
>>>> >> >> > So I went over the code, really liked it. Very simple.
>>>> >> >>
>>>> >> >> Thanks. I like the plugin model, especially neat using CZMQ actors.
>>>> >> >>
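For context on the CZMQ actors mentioned here, a minimal plugin-style
actor might look like this (plugin_actor is a made-up name; the $TERM
handling follows the usual zactor contract):

    #include <czmq.h>

    //  Runs in its own thread and talks to its parent over a pipe.
    static void
    plugin_actor (zsock_t *pipe, void *args)
    {
        zsock_signal (pipe, 0);            //  Tell the parent we're ready
        bool terminated = false;
        while (!terminated) {
            char *command = zstr_recv (pipe);
            if (!command || streq (command, "$TERM"))
                terminated = true;         //  zactor_destroy sends $TERM
            zstr_free (&command);
        }
    }

    int main (void)
    {
        zactor_t *plugin = zactor_new (plugin_actor, NULL);
        zactor_destroy (&plugin);          //  Sends $TERM, joins the thread
        return 0;
    }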
>>>> >> >> > I have a question regarding services. For each stream you are
>>>> >> >> > using a dedicated thread (an actor), plus one thread for
>>>> >> >> > managing mailboxes. However (if I understood correctly), for
>>>> >> >> > services you are doing the processing inside the server thread.
>>>> >> >> > Why didn't you use an actor for each service, or an actor to
>>>> >> >> > manage all services? I think the matching of services can be
>>>> >> >> > expensive and block the main thread.
>>>> >> >>
>>>> >> >> I did it in actors and then moved it back into the main server
>>>> >> >> as it was complexity for nothing (at that stage). I'd rather
>>>> >> >> design against real use than against theory.
>>>> >> >>
>>>> >> >> -Pieter