[zeromq-dev] Limitations of patterns?

Matt Weinstein matt_weinstein at yahoo.com
Tue Aug 31 13:54:13 CEST 2010


I'm trying to catch up on this, but you might think about using a
"reverse server" pattern (there's a rough sketch after the steps below):

Build a device with two XREP sockets (XREP--XREP), one facing clients
and one facing servers.
Servers _and_ clients connect via REQ sockets.
Your device contains a freelist of available servers.
Each server sends an initial hello to put its identity on the freelist.
You only recv new work from clients when a server is available, and
assign the work to a server from the freelist.
When you assign work, keep a map (hash) from server_identity to
client_identity.
When a server sends a response, put its identity back on the freelist,
and send the response to client_identity.
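
Here's a rough, untested sketch of the device's main loop against the
0MQ 2.x C API.  The endpoints, the "READY" hello, the single-frame
bodies and the fixed-size tables are my assumptions, not part of the
pattern itself:

#include <zmq.h>
#include <string.h>

#define MAX_SERVERS 100

typedef struct { unsigned char data [256]; size_t len; } identity_t;

static identity_t freelist [MAX_SERVERS];       /* available servers */
static int free_count = 0;
/* map: busy server identity -> client identity awaiting the reply */
static identity_t busy_server [MAX_SERVERS], busy_client [MAX_SERVERS];
static int busy_count = 0;

static void recv_frame (void *sock, zmq_msg_t *msg)
{
    zmq_msg_init (msg);
    zmq_recv (sock, msg, 0);
}

/* Send an identity frame, with more to follow. */
static void send_id (void *sock, identity_t *id)
{
    zmq_msg_t m;
    zmq_msg_init_size (&m, id->len);
    memcpy (zmq_msg_data (&m), id->data, id->len);
    zmq_send (sock, &m, ZMQ_SNDMORE);
    zmq_msg_close (&m);
}

/* Send the empty delimiter frame REQ peers expect, more to follow. */
static void send_empty (void *sock)
{
    zmq_msg_t m;
    zmq_msg_init (&m);
    zmq_send (sock, &m, ZMQ_SNDMORE);
    zmq_msg_close (&m);
}

static void copy_id (zmq_msg_t *msg, identity_t *id)
{
    id->len = zmq_msg_size (msg);
    memcpy (id->data, zmq_msg_data (msg), id->len);
}

int main (void)
{
    void *ctx = zmq_init (1);
    void *clients = zmq_socket (ctx, ZMQ_XREP);
    void *servers = zmq_socket (ctx, ZMQ_XREP);
    zmq_bind (clients, "tcp://*:5555");         /* placeholder */
    zmq_bind (servers, "tcp://*:5556");         /* placeholder */

    while (1) {
        zmq_pollitem_t items [] = {
            { servers, 0, ZMQ_POLLIN, 0 },
            { clients, 0, ZMQ_POLLIN, 0 }
        };
        /* Only recv new work from clients when a server is free. */
        zmq_poll (items, free_count ? 2 : 1, -1);

        if (items [0].revents & ZMQ_POLLIN) {
            zmq_msg_t id, empty, body;          /* [id] [""] [body] */
            recv_frame (servers, &id);
            recv_frame (servers, &empty);
            recv_frame (servers, &body);
            identity_t sid;
            copy_id (&id, &sid);
            if (zmq_msg_size (&body) == 5
            &&  memcmp (zmq_msg_data (&body), "READY", 5) == 0)
                freelist [free_count++] = sid;  /* initial hello */
            else {
                /* A reply: send to the mapped client, free the server.
                   (Assumes every reply matches an assignment.) */
                int i;
                for (i = 0; i < busy_count; i++)
                    if (busy_server [i].len == sid.len
                    &&  !memcmp (busy_server [i].data, sid.data, sid.len))
                        break;
                send_id (clients, &busy_client [i]);
                send_empty (clients);
                zmq_send (clients, &body, 0);
                busy_server [i] = busy_server [--busy_count];
                busy_client [i] = busy_client [busy_count];
                freelist [free_count++] = sid;
            }
            zmq_msg_close (&id);
            zmq_msg_close (&empty);
            zmq_msg_close (&body);
        }
        if (free_count && (items [1].revents & ZMQ_POLLIN)) {
            zmq_msg_t id, empty, body;
            recv_frame (clients, &id);
            recv_frame (clients, &empty);
            recv_frame (clients, &body);
            /* Assign the work: pop a server, remember who is waiting. */
            identity_t srv = freelist [--free_count];
            copy_id (&id, &busy_client [busy_count]);
            busy_server [busy_count++] = srv;
            send_id (servers, &srv);
            send_empty (servers);
            zmq_send (servers, &body, 0);
            zmq_msg_close (&id);
            zmq_msg_close (&empty);
            zmq_msg_close (&body);
        }
    }
}

The zmq_poll() count is the whole trick: when the freelist is empty
the device simply stops polling the client-facing socket, so
unassigned requests stay queued where they are recoverable.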

All unassigned work is now in the inbound client XREP socket, meaning  
if a server fails you only have one stuck request (at that server).

Multiple recovery options are available (keeping state around about  
the requests at each server).
Multiple server scheduling policies are available with a more  
sophisticated freelist.

Does this help?

On Aug 30, 2010, at 4:25 PM, Kelly Brock wrote:

> Howdy,
>
>>>> What should be done IMO is to implement acks inside 0MQ so that
>>>> they are invisible to the user. The user would set the max number
>>>> of messages in flight using the HWM socket option and that's it.
>>>
>>> 	Ok, so hmm.  Would the following make sense then:
>>>
>>> // Server works as normal except the PUSH side has:
>>> zmq_setsockopt( s, ZMQ_PUSH_ACK, NULL, 0 );
>>>
>>>
>>> // Clients add:
>>> zmq_setsockopt( s, ZMQ_PULL_ACK, NULL, 0 );
>>>
>>> 	The "zmq_recv" will check the option and send an ack to tell the
>>> push side to send me a message.  This does not allow me to say I'll
>>> receive more than one message, since that would still require some
>>> sort of manual ack, but it solves the single worker case without any
>>> new user-side calls.
>>
>> Yes. Something like that, but:
>>
>> 1. The value should be an integer: "max number of msgs in flight".
>> 2. The socket option should be unified with the ZMQ_HWM option. Having
>> separate options for "window size" and "number of messages in flight"
>> seems redundant.
>
> 	Without a specific ack though, it would seem that any number more
> than one would have to be coded specifically, or would just allow up
> to "ack count - 1" messages to pile up.  So the following outline:
>
> // Single worker pull.
> setsockopt( s, ZMQ_PULL_ACK, "4", 1 );
>
> loop
>  // The first time recv is called, it will send ack(4).
>  // Subsequent calls would send ack(1).
>  recv( s, msg );
>  { do work }
> goto loop
>
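> 	(If this were unified with ZMQ_HWM as you suggest, I assume that
> setsockopt line would instead take an integer, something like:
>
> uint64_t hwm = 4;
> zmq_setsockopt( s, ZMQ_HWM, &hwm, sizeof( hwm ) );
>
> but that's just my reading of your point 2.)
>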
> 	For this use-case, we will have one message we are working on and up
> to 3 waiting in the socket/queue at any given time.  For the case of
> keeping this worker well fed, assuming the tasks are all fairly small,
> this would probably work well.  Is this the behavior you were thinking
> of?
>
> 	This is a much better solution for inconsistent work loads, but
> unfortunately it still does not quite work perfectly for hard-to-define,
> long-lived persistent work items.  I.e.:
>
> Distributor
> DB of items to be processed.
> Pushes items one at a time onto a push channel.
> Receives new items to be processed and processing complete messages.
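>
> Roughly, in the same pseudocode (the socket names and the single loop
> are just illustrative; the send and recv would really have to be
> multiplexed, e.g. via poll):
>
> loop
>  send( push, next_item_from_db );  // blocks until a controller pulls
>  recv( results, msg );             // new work or a completion notice
>  < update DB >
> goto loop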
>
>
> Controller
> Connects to the distributor and starts 1 worker per CPU in the box it
> is running on.
> loop
>  recv( s, msg );
>  send( p, msg );
> goto loop
>
> Worker
> Connects via 'inproc' to the controller with an ack of 1.
> loop
>  recv( s, msg );
>  < do work >
>  < send results to distributor: i.e. newly found work and completed items >
> goto loop
>
> 	So, assume my 8 core box starts a controller.  Once we start things
> moving along, there will be 8 messages being processed, 8 messages
> waiting on the individual "worker" queues, one message stuck waiting
> for send to unblock in the controller, and one message on the queue to
> the controller.
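>
> 	(That is 8 + 8 + 1 + 1 = 18 messages committed to one box at any
> given time.)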
>
> 	This is a major reduction in the possible number of stuck items but
> not yet zero.  :)  Further thoughts?
>
> KB
>