[zeromq-dev] Asynchrony in malamute clients

Kenneth Adam Miller kennethadammiller at gmail.com
Sun Apr 12 19:59:40 CEST 2015


As it turns out, I didn't completely understand my problem when I asked. I
feel I understand it better now. Here goes attempt 2: I have the following
problem using the Malamute broker.

I have two groups of clients on either side of the broker, group A (size M)
and group B (size N). By exchanging addresses, we can scale much better. For
this, we want to use the Malamute service API, since the documentation
specifically says: "when a client sends a service request, the broker creates
a service queue if needed. Service queues are persisted in the same way as
mailboxes." We don't want to use mailboxes at all, since they require a
unique identity management system. Unacknowledged service requests are fine
for us because we ferry traffic directly over a separate socket rather than
through the broker, so each client gets an acknowledgement anyway.

But this is misleading; in mlm_client.c, you can clearly see the following
sequence demonstrating the semantics of *mailboxes*:

Client A sends to client B's mailbox.
Client B connects at *any time*.
Client B reads from its mailbox.

The key here is that the message client A sent is persisted; but if the
mailbox calls were swapped out for the service API equivalents, the tool
would hang at client B's read.
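
For reference, here is that mailbox flow as a minimal C sketch against a
broker on a local endpoint (the endpoint, addresses, and subject are made up
for illustration):

#include <malamute.h>

int main (void) {
  //  Client A connects under address "A" and sends to client B's mailbox
  mlm_client_t *a = mlm_client_new ();
  assert (mlm_client_connect (a, "tcp://127.0.0.1:9999", 1000, "A") == 0);
  mlm_client_sendtox (a, "B", "subject", "hello", NULL);
  mlm_client_destroy (&a);

  //  Client B connects at any later time and reads from its mailbox;
  //  the broker persisted the message in the meantime
  mlm_client_t *b = mlm_client_new ();
  assert (mlm_client_connect (b, "tcp://127.0.0.1:9999", 1000, "B") == 0);
  char *subject, *content;
  mlm_client_recvx (b, &subject, &content, NULL);
  assert (streq (content, "hello"));
  zstr_free (&subject);
  zstr_free (&content);
  mlm_client_destroy (&b);
  return 0;
}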

I hope I'm not creating too much traffic! Thanks so much for helping me, I
know you do it voluntarily, and I appreciate it greatly!

https://github.com/KennethAdamMiller/MalamuteClientUseIssue

On Sun, Apr 12, 2015 at 2:32 AM, Pieter Hintjens <ph at imatix.com> wrote:

> Hi Kenneth,
>
> Sorry for the slow response. The best way to debug such issues for now
> is to run with animation (verbose) on the server and the client. You
> should quickly see what the server and client are doing.
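>
> For example, on the server side (a rough sketch; Malamute's server is a
> zproto-generated actor, so it should accept the standard VERBOSE command):
>
> zactor_t *server = zactor_new (mlm_server, "Malamute");
> zstr_send (server, "VERBOSE");
> zstr_sendx (server, "BIND", "tcp://127.0.0.1:9999", NULL);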
>
> Alternatively if you write up a minimal C testcase that fails or
> hangs, I'll be able to tell you what's wrong. Your C++ code has stuff
> in it that makes it impossible for me to use as-is.
>
> Cheers
> Pieter
>
>
> On Sat, Apr 11, 2015 at 8:39 PM, Kenneth Adam Miller
> <kennethadammiller at gmail.com> wrote:
> > Allow me to clarify:
> >
> > The unit test doesn't specifically fail; in fact it just hangs. I don't
> > know how to make sure Malamute queues the message on sendforx so that it
> > can forward it to the second client when that client calls recvx. I
> > thought these were the semantics all along. How do I get that?
> >
> > Can someone please please help? I need to fix this so that I can have
> > many clients on each side of the broker, potentially running
> > simultaneously. If a sendforx message can potentially get dropped, then I
> > don't know that I can make this work. Should I try to modify Malamute
> > itself to hunt this down? Is anybody else having this issue?
> >
> > On Fri, Apr 10, 2015 at 8:23 PM, Kenneth Adam Miller
> > <kennethadammiller at gmail.com> wrote:
> >>
> >> I've found that, with similar unit tests, there is no way to ensure that
> >> a client is present on the other side before a send occurs, so that the
> >> broker will create a queue for the message. This is a major problem I've
> >> been trying repeatedly to solve, because if I have clients connect
> >> asynchronously, each subscribed to what the other side produces, it is
> >> implausible to prevent messages from being lost. If messages get lost
> >> this way, the clients deadlock, since I can't time out on receive
> >> operations.
> >>
> >> On Fri, Apr 10, 2015 at 6:27 PM, Kenneth Adam Miller
> >> <kennethadammiller at gmail.com> wrote:
> >>>
> >>> The following unit test fails:
> >>>
> >>> void test_services_are_preserved_across_connections() {
> >>>   std::cout << "test service messages are preserved across connections"
> >>>     << std::endl;
> >>>   startUP();
> >>>   // First client sends a service request, then disconnects
> >>>   mlm_client_t *client = mlm_client_new();
> >>>   char *topic, *content;
> >>>   int rc = mlm_client_connect(client, "tcp://127.0.0.1:9999", 2000, "");
> >>>   assert(rc == 0);
> >>>   mlm_client_sendforx(client, MANAGER_GIVE, "gotit", "", NULL);
> >>>   mlm_client_destroy(&client);
> >>>
> >>>   // Second client connects later, registers as a worker for the
> >>>   // service, and should receive the queued request; instead it hangs
> >>>   client = mlm_client_new();
> >>>   rc = mlm_client_connect(client, "tcp://127.0.0.1:9999", 2000, "");
> >>>   assert(rc == 0);
> >>>   mlm_client_set_worker(client, MANAGER_GIVE, "got*");
> >>>   mlm_client_recvx(client, &topic, &content, NULL);
> >>>   assert(!strcmp(content, "gotit"));
> >>>   zstr_free(&topic);
> >>>   zstr_free(&content);
> >>>   mlm_client_destroy(&client);
> >>>   shutDown();
> >>> }
> >>>
> >>> startUP and shutDown just create an in-process Malamute actor and bind
> >>> it to port 9999.
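> >>>
> >>> Roughly, those helpers look like this (a sketch; the actor name is
> >>> arbitrary):
> >>>
> >>> static zactor_t *server;
> >>>
> >>> void startUP() {
> >>>   server = zactor_new(mlm_server, "mlm_test_broker");
> >>>   zstr_sendx(server, "BIND", "tcp://127.0.0.1:9999", NULL);
> >>> }
> >>>
> >>> void shutDown() {
> >>>   zactor_destroy(&server);
> >>> }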
> >>>
> >>> On Fri, Apr 10, 2015 at 2:41 PM, Kenneth Adam Miller
> >>> <kennethadammiller at gmail.com> wrote:
> >>>>
> >>>> "Replies back to an originating client are sent to the client mailbox.
> >>>> Thus clients may send service requests, disconnect, and then retrieve
> their
> >>>> replies at a later time."
> >>>>
> >>>> Oh, I just found this :) So, what if I don't associate unique
> >>>> identities with clients? What are the client mailboxes then? Is this
> >>>> the problem, that acknowledgements aren't being sent?
> >>>>
> >>>> On Fri, Apr 10, 2015 at 2:36 PM, Kenneth Adam Miller
> >>>> <kennethadammiller at gmail.com> wrote:
> >>>>>
> >>>>> Actually, how do workers specifically acknowledge that they have
> >>>>> received a request? I don't think that's in mlm_client.c...
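> >>>>>
> >>>>> The only related thing I can see is the tracker/timeout pair on the
> >>>>> non-x send methods, which I'm guessing is the confirmation hook (the
> >>>>> address and tracker here are placeholders):
> >>>>>
> >>>>> zmsg_t *msg = zmsg_new ();
> >>>>> zmsg_addstr (msg, "payload");
> >>>>> mlm_client_sendto (client, "some-mailbox", "subject", "tracker-1",
> >>>>>   1000, &msg);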
> >>>>>
> >>>>> On Fri, Apr 10, 2015 at 10:01 AM, Kenneth Adam Miller
> >>>>> <kennethadammiller at gmail.com> wrote:
> >>>>>>
> >>>>>> So, I have the following code for trying to make sure that
> >>>>>> asynchronous clients end up using the service request API correctly:
> >>>>>>
> >>>>>>
> >>>>>> void syncOppClient (const char *broker_endpoint,
> >>>>>>     const char *subscribe_stream, const char *subscribe_pattern,
> >>>>>>     const char *publish_stream, const char *publish_message) {
> >>>>>>   mlm_client_t *_init_client = mlm_client_new ();
> >>>>>>   int rc = mlm_client_connect (_init_client, broker_endpoint, 1000, "");
> >>>>>>   assert (rc == 0);
> >>>>>>   //  Subscribe to the other side's stream, announce ourselves on
> >>>>>>   //  our own stream, then block until the other side announces itself
> >>>>>>   mlm_client_set_consumer (_init_client, subscribe_stream,
> >>>>>>     subscribe_pattern);
> >>>>>>   mlm_client_set_producer (_init_client, publish_stream);
> >>>>>>   mlm_client_sendx (_init_client, publish_message, NULL);
> >>>>>>   zmsg_t *msg = mlm_client_recv (_init_client);
> >>>>>>   zmsg_destroy (&msg);  //  don't leak the received message
> >>>>>>   mlm_client_destroy (&_init_client);
> >>>>>> }
> >>>>>>
> >>>>>> I notice that the clients often freeze; it's as though the messages
> >>>>>> aren't routed. Some of the time they freeze at the
> >>>>>> mlm_client_recv(_init_client) call, so I'm wondering if I should be
> >>>>>> using some other Malamute client API to make the clients wait, so
> >>>>>> that there is a client on each side at the same time. They are
> >>>>>> indeed concurrent, meaning that the clients should be running
> >>>>>> simultaneously regardless; but if the syncOppClient call isn't made,
> >>>>>> they race to the broker, in the sense that calls to sendforx don't
> >>>>>> seem to make it through. The broker documentation about API
> >>>>>> semantics specifically says:
> >>>>>>
> >>>>>> "When a client sends a service request, the broker creates a service
> >>>>>> queue if needed. Service queues are persisted in the same way as
> >>>>>> mailboxes."
> >>>>>>
> >>>>>> But just doing sendforx; recvx, as in the change that I merged into
> >>>>>> the mlm_client test, with *multiple* concurrent clients on each
> >>>>>> side, results in lockups for the clients. syncOppClient reduces
> >>>>>> these races somewhat, but doesn't eliminate them. How do I
> >>>>>> facilitate having more than one client on each side of the broker,
> >>>>>> managers and workers? What I want is exactly what I added to the
> >>>>>> mlm_client_test, except that the frontend and backend are concurrent
> >>>>>> and there can be more than one of them running simultaneously.
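> >>>>>>
> >>>>>> To be concrete, the pattern I'm after is just this (a sketch; the
> >>>>>> service name, endpoint, and function names are placeholders), but
> >>>>>> with M managers and N workers connecting in any order:
> >>>>>>
> >>>>>> //  Worker side (runs in its own thread or process): register for
> >>>>>> //  a service and wait for a request
> >>>>>> void worker_side (const char *endpoint) {
> >>>>>>   mlm_client_t *worker = mlm_client_new ();
> >>>>>>   assert (mlm_client_connect (worker, endpoint, 1000, "") == 0);
> >>>>>>   mlm_client_set_worker (worker, "myservice", "*");
> >>>>>>   char *subject, *body;
> >>>>>>   mlm_client_recvx (worker, &subject, &body, NULL);
> >>>>>>   zstr_free (&subject);
> >>>>>>   zstr_free (&body);
> >>>>>>   mlm_client_destroy (&worker);
> >>>>>> }
> >>>>>>
> >>>>>> //  Manager side: fire a service request; the broker should queue
> >>>>>> //  it until some worker has registered for "myservice"
> >>>>>> void manager_side (const char *endpoint) {
> >>>>>>   mlm_client_t *manager = mlm_client_new ();
> >>>>>>   assert (mlm_client_connect (manager, endpoint, 1000, "") == 0);
> >>>>>>   mlm_client_sendforx (manager, "myservice", "subject", "payload", NULL);
> >>>>>>   mlm_client_destroy (&manager);
> >>>>>> }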
> >>>>>
> >>>>>
> >>>>
> >>>
> >>
> >
> >
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>