[zeromq-dev] How to properly close a socket and HWMRECV

Antonio Teixeira eagle.antonio at gmail.com
Fri Jul 8 12:17:27 CEST 2011

Hello List.

Now that I have managed to get past my little (big) bug, I have come with a
more interesting subject.

My app uses the pipeline topology, for the load balancing and other
features gained from it, but I have found a problem.

My app needs a lot of CPU, so I work around that with multiprocessing +
IPC, and it works great; I base the number of worker processes on the
multiprocessing.cpu_count() value.

(JFYI, message processing only takes 0.007 seconds in my case, SQL query
included, against 0.12 seconds using RabbitMQ :P)


Core 2 Duo > CPU count = 2

If you do ps aux you should see 2 workers; all goes fine.

I send 2 tasks and both CPUs get used. Great, now the problem:
if I send a third task, the client socket will accept it and will
probably "buffer" it for later use.

But I would like to set a HWM on my DOWNSTREAM socket (at the client side,
not the broker). Looking at the source, I have noticed 2 values:
zmq.HWM and zmq.RCVHWM
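For what it's worth, which of the two names applies depends on the 0MQ version: 2.x has a single ZMQ_HWM, while 3.x and later split it into ZMQ_SNDHWM/ZMQ_RCVHWM, and on the receiving PULL side the relevant one would be the receive high-water mark. A sketch of setting it (the IPC endpoint is made up):

```python
import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
# Cap how many tasks 0MQ will queue on this socket. The option must be
# set before connect() so it applies to the new connection.
pull.setsockopt(zmq.RCVHWM, 1)   # zmq.HWM on 0MQ 2.x
pull.connect("ipc:///tmp/tasks.ipc")  # hypothetical endpoint

hwm = pull.getsockopt(zmq.RCVHWM)
pull.close()
ctx.term()
```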

My objective is the following:
On a 2-core machine, the broker sends one task;
it occupies Process 1, which takes 30 minutes to finish.

So we have one more free slot. Now the broker sends another task; the HWM
should be reached at that point, and 0MQ refuses further work from the broker
so it can overflow to other standby servers.
This new task occupies Process 2.

Now Process 1 ends, and the task worker goes back into network_socket.recv().
The HWM should now allow one more task to flow through.
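That flow can be exercised in a single process as a sketch, using inproc as a stand-in for the real IPC transport (the 30-minute job is replaced by appending to a list):

```python
import zmq

ctx = zmq.Context()

# Broker side: a PUSH socket handing out tasks.
push = ctx.socket(zmq.PUSH)
push.bind("inproc://tasks")

# Worker side: a PULL socket that keeps at most one task queued locally.
pull = ctx.socket(zmq.PULL)
pull.setsockopt(zmq.RCVHWM, 1)   # zmq.HWM on 0MQ 2.x
pull.connect("inproc://tasks")

for i in range(2):
    push.send(b"task-%d" % i)

# Each pass back through recv() frees a queue slot, so the broker's
# next task can flow through once the previous one is picked up.
done = [pull.recv() for _ in range(2)]

pull.close()
push.close()
ctx.term()
```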

Is this the correct way, or am I missing something here? Should I use HWM or
RCVHWM?

Another question: is there any nice way to say "hello socket, please stop
accepting data, clear any messages in the buffer, and terminate nicely"
without ugly exception errors, especially if the socket is working with
zmq.Device()?
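One pattern I know of (a sketch, assuming pyzmq; the endpoint is made up): set LINGER to 0 so close() drops any pending messages instead of blocking, close the socket, then terminate the context. If a device is running on that context, terminating the context should make the device call return by raising a zmq.ZMQError with errno ETERM, which the device thread can catch and treat as a clean shutdown.

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)
sock.connect("ipc:///tmp/tasks.ipc")  # hypothetical endpoint

# Shut down without hanging or ugly tracebacks: LINGER=0 tells close()
# to discard pending messages instead of waiting for delivery.
sock.setsockopt(zmq.LINGER, 0)
sock.close()
ctx.term()   # returns promptly once every socket on the context is closed
```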

Thank you all again
Antonio Teixeira
