[zeromq-dev] Limitations of patterns?
Kelly Brock
Kerby at inocode.com
Mon Aug 30 22:25:20 CEST 2010
Howdy,
> >> What should be done IMO is to implement acks inside 0MQ so that they
> >> are invisible to the user. The user would set the max number of
> >> messages in-flight using the HWM socket option and that's it.
> >
> > Ok, so hmm. Would the following make sense then:
> >
> > // Server works as normal except the PUSH side has:
> > zmq_setsockopt( s, ZMQ_PUSH_ACK, NULL, 0 );
> >
> >
> > // Clients add:
> > zmq_setsockopt( s, ZMQ_PULL_ACK, NULL, 0 );
> >
> > The "zmq_recv" will check the option and send an ack to tell the the
> > push side to send me a message. This does not allow me to say I'll
> receive
> > more than one message since that would still require some sort of manual
> ack
> > but it solves the single worker case without any new user size calls.
>
> Yes. Something like that, but:
>
> 1. The value should be an integer: the max number of messages in-flight.
> 2. The socket option should be unified with the ZMQ_HWM option. Having
> separate options for "window size" and "number of messages in-flight"
> seems redundant.
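For reference, here is roughly how the existing ZMQ_HWM option from point 2
is set with the current 2.x API. This is just a sketch with a made-up
endpoint; the ack-on-recv behaviour would of course be the new part:

#include <zmq.h>
#include <stdint.h>

int main (void)
{
    void *ctx = zmq_init (1);
    void *s = zmq_socket (ctx, ZMQ_PULL);

    /* Existing option: caps how many messages 0MQ will queue for this
       socket.  The proposal above would drive the ack window off the
       same value. */
    uint64_t hwm = 4;
    zmq_setsockopt (s, ZMQ_HWM, &hwm, sizeof (hwm));

    zmq_connect (s, "tcp://127.0.0.1:5555");
    /* ... recv loop ... */
    zmq_close (s);
    zmq_term (ctx);
    return 0;
}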
Without a specific ack, though, it would seem that any count greater
than one would either have to be handled explicitly or would simply allow
"ack count - 1" messages to pile up. So, the following outline:
// Single worker pull.
int ack = 4;
setsockopt( s, ZMQ_PULL_ACK, &ack, sizeof( ack ) );
loop
    // The first time recv is called, it will send ack(4).
    // Subsequent calls would send ack(1).
    recv( s, msg );
    { do work }
goto loop
For this use-case, we will have one message being worked on and up
to 3 waiting in the socket/queue at any given time. For keeping this worker
well fed, assuming the tasks are all fairly small, this would probably work
well. Is this the behavior you were thinking of?
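In the meantime, something close to this can be faked at the application
level with an explicit credit channel. This is just a sketch for the
single-worker case, using the 2.x API and made-up endpoints, since
ZMQ_PULL_ACK obviously doesn't exist yet:

#include <zmq.h>
#include <string.h>

/* Push n single-byte credit messages back to the distributor. */
static void send_credit (void *credits, int n)
{
    while (n--) {
        zmq_msg_t c;
        zmq_msg_init_size (&c, 1);
        memset (zmq_msg_data (&c), 1, 1);
        zmq_send (credits, &c, 0);
        zmq_msg_close (&c);
    }
}

int main (void)
{
    void *ctx = zmq_init (1);
    void *tasks = zmq_socket (ctx, ZMQ_PULL);    /* work arrives here */
    void *credits = zmq_socket (ctx, ZMQ_PUSH);  /* acks flow back here */
    zmq_connect (tasks, "tcp://localhost:5555");
    zmq_connect (credits, "tcp://localhost:5556");

    send_credit (credits, 4);            /* initial window: ack(4) */
    while (1) {
        zmq_msg_t msg;
        zmq_msg_init (&msg);
        zmq_recv (tasks, &msg, 0);       /* blocks until a task arrives */
        /* ... do work ... */
        zmq_msg_close (&msg);
        send_credit (credits, 1);        /* replenish one slot: ack(1) */
    }
}

The distributor would only push a task when it holds a credit for the
worker, so in steady state there is one task being worked on and at most
three queued, which is the same behavior as the outline above.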
This is a much better solution for inconsistent workloads, but
unfortunately it still does not quite work perfectly for long-running,
persistent work items that are hard to define up front. I.e.:
Distributor
    DB of items to be processed.
    Pushes items one at a time onto a push channel.
    Receives new items to be processed and "processing complete" messages.

Controller
    Connects to the distributor and starts 1 worker per CPU in the box it is
    running on.
    loop
        recv( s, msg );
        send( p, msg );
    goto loop

Worker
    Connects over 'inproc' to the controller with an ack of 1.
    loop
        recv( s, msg );
        < do work >
        Send results to the distributor, i.e. newly found work and completed
        items.
    goto loop
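A rough sketch of that controller loop with the 2.x API could look like the
following (endpoints and worker start-up are made up; each worker thread
would connect its own PULL socket to the inproc endpoint):

#include <zmq.h>

int main (void)
{
    void *ctx = zmq_init (1);
    void *from_dist = zmq_socket (ctx, ZMQ_PULL);   /* items from distributor */
    void *to_workers = zmq_socket (ctx, ZMQ_PUSH);  /* fan-out to local workers */
    zmq_connect (from_dist, "tcp://distributor:5555");
    zmq_bind (to_workers, "inproc://work");

    /* ... start one worker thread per CPU here; each one connects a
       PULL socket to "inproc://work" after this bind ... */

    while (1) {
        zmq_msg_t msg;
        zmq_msg_init (&msg);
        zmq_recv (from_dist, &msg, 0);   /* one item from the distributor */
        zmq_send (to_workers, &msg, 0);  /* with the proposed ack/HWM scheme,
                                            this is where the controller would
                                            block */
        zmq_msg_close (&msg);
    }
}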
So, assume my 8-core box starts a controller. Once things get
moving, there will be 8 messages being processed, 8 messages waiting
on the individual "worker" queues, one message stuck waiting for send to
unblock in the controller, and one message on the queue to the controller.
This is a major reduction in the possible number of stuck items
(ten here: eight on the worker queues, one blocked in the controller's send,
and one on the controller's queue) but not yet zero. :) Further thoughts?
KB