[zeromq-dev] Limitations of patterns?
Matt Weinstein
matt_weinstein at yahoo.com
Tue Aug 31 14:42:02 CEST 2010
On Aug 31, 2010, at 8:14 AM, Kelly Brock wrote:
> Hi Matt,
>
>> I'm trying to catch up on this, but you might think about using a
>> "reverse server" pattern:
>>
>> Build a device which connects (XREP--XREP).
>> Servers _and_ clients connect via REQ sockets.
>> Your device contains a freelist of available servers.
>> Each server sends an initial hello to put its identity on the
>> freelist.
>> You only recv new work from clients when a server is available, and
>> assign the work to a server on the freelist.
>> When you assign work keep a map (hash) from server_identity to
>> client_identity.
>> When a server sends a response put its identity back on the freelist,
>> and send to client_identity.
>>
>> All unassigned work is now in the inbound client XREP socket, meaning
>> if a server fails you only have one stuck request (at that server).
>>
>> Multiple recovery options are available (keeping state around about
>> the requests at each server).
>> Multiple server scheduling policies are available with a more
>> sophisticated freelist.
>>
>> Does this help?
>
> I can work around the problem in my usage; that was not really an
> issue. The intention was more to start a discussion about whether
> there is a method of allowing push connections to support
> distribution methods other than round robin. Push is the most
> appropriate connection type, but round-robin distribution breaks
> down if your tasks are not fairly reliable at retiring work in a
> consistently timely manner.
>
What I suggested above should solve this for some value of "policy" :-)
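For concreteness, here's a minimal sketch of the device's bookkeeping: a freelist of idle server identities plus the server_identity-to-client_identity map. The two XREP sockets are replaced by plain returns so the logic runs standalone; all names (Broker, free_servers, in_flight) are illustrative, not from any zeromq API.

```python
from collections import deque

class Broker:
    def __init__(self):
        self.free_servers = deque()  # freelist of idle server identities
        self.in_flight = {}          # server_identity -> client_identity

    def server_ready(self, server_id):
        """A server's initial hello (or a finished reply) puts it on the freelist."""
        self.free_servers.append(server_id)

    def assign(self, client_id, request):
        """Only recv/call this when at least one server is free."""
        server_id = self.free_servers.popleft()
        self.in_flight[server_id] = client_id
        return server_id, request    # would go out the server-side XREP

    def server_reply(self, server_id, reply):
        """Route the reply back to the waiting client; free the server."""
        client_id = self.in_flight.pop(server_id)
        self.free_servers.append(server_id)
        return client_id, reply      # would go out the client-side XREP
```

Because assign() is only called when the freelist is non-empty, unassigned work stays queued on the client side, exactly as described above.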
> More important is the idea that you shouldn't let tasks pile up on
> the queues indiscriminately. It's generally a bad idea to have tasks
> stacked up beyond a couple; there is likely a reason they got backed
> up in the first place. (Someone runs a virus scan, someone decides
> to use your server as a Quake server, someone kicked out the direct
> router and you are routing to Egypt and back instead of directly,
> etc.)
>
There's only one "queue" in the above model. You can decide whether
to prefetch and store locally, leave the requests in the pipe, set
HWM, etc. And you can measure server performance for each server
identity, allowing you to make routing decisions (all in a single
threaded environment), including measuring average service times per
request type per server.
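One way to keep those per-identity measurements is an exponential moving average of service time per server, updated on each dispatch/reply pair. This is an illustrative sketch (ServiceStats and its methods are hypothetical names, not a zeromq facility); the `now` parameter exists so the arithmetic is checkable without real clocks.

```python
import time

class ServiceStats:
    def __init__(self, alpha=0.2):
        self.alpha = alpha   # EMA smoothing factor
        self.started = {}    # server_id -> dispatch timestamp
        self.avg = {}        # server_id -> EMA of service time, seconds

    def dispatched(self, server_id, now=None):
        self.started[server_id] = time.monotonic() if now is None else now

    def completed(self, server_id, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.started.pop(server_id)
        prev = self.avg.get(server_id)
        # First sample seeds the average; later samples are smoothed in.
        self.avg[server_id] = (elapsed if prev is None
                               else self.alpha * elapsed + (1 - self.alpha) * prev)
        return self.avg[server_id]

    def fastest(self):
        """The identity with the lowest average service time."""
        return min(self.avg, key=self.avg.get)
```

A smarter freelist could consult fastest() (or per-request-type variants of it) when deciding which idle server gets the next job.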
> So, the point of this was to reduce the case of major task pileups.
> Additionally, I "hope" we could solve the broader usage and allow a
> first-come, first-served distribution on push without the queued
> messages. I just don't see how to get that without a separate ack
> ability. :/
>
If you need fairness across inbound streams, you could use ZMQ_PAIR or
PUSH/PULL for each client, use a reactor to read the various input
pipes, do round robin on the inputs, and set HWM so if you don't have
enough servers, ZMQ_NOBLOCK will allow clients to balk.
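As a rough illustration of that fairness scheme in plain Python: a bounded queue stands in for a pipe with HWM set, a failed try_send plays the role of EAGAIN under ZMQ_NOBLOCK, and the reactor's read loop is round robin over the inbound pipes. All names here are made up for the sketch.

```python
from collections import deque

class BoundedPipe:
    def __init__(self, hwm):
        self.hwm = hwm
        self.q = deque()

    def try_send(self, msg):
        """Non-blocking send: False means the client must balk,
        the way a send would fail with ZMQ_NOBLOCK at HWM."""
        if len(self.q) >= self.hwm:
            return False
        self.q.append(msg)
        return True

def round_robin(pipes):
    """One message from each non-empty pipe per pass, in order,
    so no single chatty client can starve the others."""
    out = []
    while any(p.q for p in pipes):
        for p in pipes:
            if p.q:
                out.append(p.q.popleft())
    return out
```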
Is this more on track?
Best,
Matt
PS I've already implemented a good portion of the above in a
production system.
> KB
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev