[zeromq-dev] Malamute broker project
Doron Somech
somdoron at gmail.com
Mon Mar 2 17:39:29 CET 2015
> Would you like to make an issue on the Malamute project for this?
Done
On Mon, Mar 2, 2015 at 3:01 PM, Pieter Hintjens <ph at imatix.com> wrote:
> OK, as usual, it's worth defining the problem clearly before
> discussing solutions. Would you like to make an issue on the Malamute
> project for this?
>
> On Mon, Mar 2, 2015 at 10:50 AM, Doron Somech <somdoron at gmail.com> wrote:
> > It should reach the same service as long as we didn't add (or remove)
> > another service; if we added another service, we want it to reach another
> > service.
> >
> > This is what we do now: we have a range of hashes between 0 and 10000; each
> > client can calculate the hash from the account id and then send a service
> > request with the hash key.
> >
> > The broker finds out which service is currently handling this hash and
> > forwards the message to that service. When a service starts, it sends the
> > broker the ranges of hashes it can handle.
> >
> > This is used a lot in NoSQL databases.
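> >
> > (To make the calculation concrete, a minimal sketch of the client side; the
> > hash function and the account_id name are placeholders, only the 0-10000
> > bucket range is from the scheme above:)
> >
> > #include <functional>
> > #include <string>
> >
> > // Map an account id onto one of the 10000 buckets (0..9999); every client
> > // computes the same bucket for the same account. A real deployment would
> > // use a stable hash (e.g. CRC32), since std::hash may differ per process.
> > static int bucket_for(const std::string &account_id) {
> >     return (int) (std::hash<std::string>()(account_id) % 10000);
> > }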
> >
> > So using Malamute the service can register to all hashes, but that can be a
> > lot. If I can replace the matching algorithm for some services (not all, we
> > also have regular stateless services) with a simple range check, that will
> > solve it. The pattern the service sends will be a string representing a
> > range, and the request will just have a hash key; matching will just check
> > that the key is in range.
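> >
> > (So matching would reduce to something like this sketch, assuming the
> > pattern is a "lo-hi" string; this is my proposal, not anything Malamute
> > does today:)
> >
> > #include <cstdio>
> >
> > // A service registers e.g. "2500-4999"; a request carries its bucket as
> > // the key, and matching is a range check instead of a regular expression.
> > static bool range_match(const char *pattern, int hash_key) {
> >     int lo, hi;
> >     if (sscanf(pattern, "%d-%d", &lo, &hi) != 2)
> >         return false;                  // malformed range pattern
> >     return hash_key >= lo && hash_key <= hi;
> > }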
> >
> > On Mon, Mar 2, 2015 at 11:12 AM, Pieter Hintjens <ph at imatix.com> wrote:
> >>
> >> When you say services have state, you mean that a given client always
> >> has to reach the same service instance?
> >>
> >> On Mon, Mar 2, 2015 at 9:39 AM, Doron Somech <somdoron at gmail.com> wrote:
> >> > Today we also use some kind of consistent hashing to distribute the load
> >> > with service requests (where services are not stateless and exclusivity
> >> > is needed); how can we do it with Malamute?
> >> >
> >> > On Sun, Mar 1, 2015 at 10:00 PM, Kenneth Adam Miller
> >> > <kennethadammiller at gmail.com> wrote:
> >> >>
> >> >> So, in order to manage a mutual exchange of addresses between two
> >> >> concurrent parties, I thought that on each side I would have a producer
> >> >> produce to a topic that the opposite side was subscribed to. That means
> >> >> that each side is both a producer and a consumer.
> >> >>
> >> >> I have the two entities running in parallel. The front end client
> >> >> connects to the Malamute broker, subscribes to the backendEndpoints
> >> >> topic, and then produces its endpoint to the frontendEndpoints topic.
> >> >>
> >> >> The opposite side does the same thing, with the back end subscribing to
> >> >> frontendEndpoints and producing to backendEndpoints.
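> >> >>
> >> >> (Roughly, the exchange I have in mind once both sides are set up; just a
> >> >> sketch, the endpoint value and the "endpoint" subject are examples, not
> >> >> what I actually run:)
> >> >>
> >> >> // Front end publishes its endpoint, then waits for the back end's.
> >> >> zmsg_t *msg = zmsg_new();
> >> >> zmsg_addstr(msg, "tcp://127.0.0.1:5555");           // example endpoint
> >> >> mlm_client_send(frontend_writer, "endpoint", &msg);
> >> >>
> >> >> zmsg_t *reply = mlm_client_recv(frontend_reader);   // blocks until the back end publishes
> >> >> char *backend_endpoint = zmsg_popstr(reply);
> >> >> // ... connect to backend_endpoint ...
> >> >> zstr_free(&backend_endpoint);
> >> >> zmsg_destroy(&reply);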
> >> >>
> >> >>
> >> >> The problem is that if the front end and back end are in their own
> >> >> threads, then only the thread that completes the mlm_client_set_producer
> >> >> and mlm_client_set_consumer calls proceeds. The one that didn't make it
> >> >> that far will hang at that mlm_client_set_* pair point...
> >> >>
> >> >> code:
> >> >>
> >> >> std::cout << "connectToFrontEnd" << std::endl;
> >> >> mlm_client_t *frontend_reader = mlm_client_new();
> >> >> assert(frontend_reader);
> >> >> mlm_client_t *frontend_writer = mlm_client_new();
> >> >> assert(frontend_writer);
> >> >> int rc=mlm_client_connect (frontend_reader, "tcp://127.0.0.1:9999"
> ,
> >> >> 1000, "reader/secret");
> >> >> assert(rc==0);
> >> >> rc=mlm_client_connect (frontend_writer, "tcp://127.0.0.1:9999" ,
> >> >> 1000,
> >> >> "writer/secret");
> >> >> assert(rc==0);
> >> >> std::cout << "frontend mlm clients connected" << std::endl;
> >> >>
> >> >> mlm_client_set_consumer(frontend_reader, "backendEndpoints", "*");
> >> >> mlm_client_set_producer(frontend_writer, "frontendEndpoints");
> >> >> std::cout << "frontend client producers and consumers set" <<
> >> >> std::endl;
> >> >>
> >> >>
> >> >> The code looks almost exactly the same for the back end, apart from
> >> >> variable names and the two topics being swapped.
> >> >>
> >> >> std::cout << "connectToBackEnd" << std::endl;
> >> >> mlm_client_t *backend_reader = mlm_client_new();
> >> >> assert(backend_reader);
> >> >> mlm_client_t *backend_writer = mlm_client_new();
> >> >> assert(backend_writer);
> >> >> int rc=mlm_client_connect(backend_reader,"tcp://127.0.0.1:9999",
> >> >> 1000,
> >> >> "reader/secret");
> >> >> assert(rc==0);
> >> >> rc=mlm_client_connect(backend_writer,"tcp://127.0.0.1:9999", 1000,
> >> >> "writer/secret");
> >> >> assert(rc==0);
> >> >> std::cout << "backend mlm clients connected" << std::endl;
> >> >>
> >> >> mlm_client_set_consumer(backend_reader, "frontendEndpoints", "*");
> >> >> mlm_client_set_producer(backend_writer, "backendEndpoints");
> >> >> std::cout << "backend client producers and consumers set" <<
> >> >> std::endl;
> >> >>
> >> >> I only ever see either "frontend client producers and consumers set" or
> >> >> "backend client producers and consumers set".
> >> >>
> >> >> On Sun, Mar 1, 2015 at 2:00 PM, Pieter Hintjens <ph at imatix.com> wrote:
> >> >>>
> >> >>> My assumption is that a broker that's doing a lot of service requests
> >> >>> won't be showing the costs of regular expression matching, compared to
> >> >>> the workload.
> >> >>>
> >> >>>
> >> >>> On Sun, Mar 1, 2015 at 7:49 PM, Doron Somech <somdoron at gmail.com> wrote:
> >> >>> >> I did it in actors and then moved it back into the main server as it
> >> >>> >> was complexity for nothing (at that stage). I'd rather design against
> >> >>> >> real use than against theory.
> >> >>> >
> >> >>> > Don't you worry about the matching performance, which will happen on
> >> >>> > the main thread? Also, a usage I can see is exact matching (string
> >> >>> > comparison) instead of regular expressions (I usually use exact
> >> >>> > matching); this is why I think the plugin model fits the services as
> >> >>> > well.
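> >> >>> >
> >> >>> > (What I mean by exact matching being cheap, as a sketch: one string
> >> >>> > compare per registered service instead of running a regex on the
> >> >>> > broker's main thread:)
> >> >>> >
> >> >>> > #include <cstring>
> >> >>> >
> >> >>> > // Exact matching: a single comparison, no regex engine involved.
> >> >>> > static bool exact_match(const char *pattern, const char *subject) {
> >> >>> >     return strcmp(pattern, subject) == 0;
> >> >>> > }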
> >> >>> >
> >> >>> > On Sun, Mar 1, 2015 at 8:09 PM, Pieter Hintjens <ph at imatix.com> wrote:
> >> >>> >>
> >> >>> >> On Sun, Mar 1, 2015 at 5:52 PM, Doron Somech <somdoron at gmail.com> wrote:
> >> >>> >>
> >> >>> >> > So I went over the code, really liked it. Very simple.
> >> >>> >>
> >> >>> >> Thanks. I like the plugin model, especially neat using CZMQ actors.
> >> >>> >>
> >> >>> >> > I have a question regarding services: for each stream you are
> >> >>> >> > using a dedicated thread (actor), plus one thread for managing
> >> >>> >> > mailboxes. However (if I understood correctly) for services you
> >> >>> >> > are doing the processing inside the server thread. Why didn't you
> >> >>> >> > use an actor for each service, or one actor to manage all
> >> >>> >> > services? I think the matching of services can be expensive and
> >> >>> >> > block the main thread.
> >> >>> >>
> >> >>> >> I did it in actors and then moved it back into the main server as it
> >> >>> >> was complexity for nothing (at that stage). I'd rather design against
> >> >>> >> real use than against theory.
> >> >>> >>
> >> >>> >> -Pieter