[zeromq-dev] Round robin problems with offline servers

Andrew Hume andrew at research.att.com
Fri Jun 1 13:35:24 CEST 2012


time for my periodic soliloquy:

	message distribution is NOT the same as job scheduling.

zeromq tempts you into thinking the former can be used to do
the latter, and indeed, provided you don't care about edge conditions
and are willing to do a fair amount of extra work, zeromq can give a good approximation.

but it's not. if you want this to work properly, then you need to do it properly.
conceptually, the simplest solution is to interpose a scheduler:
clients push to the scheduler; servers use req/rep to the scheduler to ask for a job.
more robust and scalable solutions abound in the guide.
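
for concreteness, here is the shape of that scheduler, in the same
zmq.h C API as the test program quoted below. it's a sketch under
made-up assumptions (the ports, the single-part job framing, servers
sending an empty "ready" request), not a finished design; the point
is that a job only leaves the scheduler when a live server asks for
it, so an offline server can never strand work in a buffer.

#include "zmq.h"

int main (void)
{
	void *context = zmq_ctx_new();

	// clients connect here with PUSH sockets and submit jobs
	void *jobs = zmq_socket(context, ZMQ_PULL);
	zmq_bind(jobs, "tcp://*:6000");

	// servers connect here with REQ sockets to ask for work
	void *workers = zmq_socket(context, ZMQ_REP);
	zmq_bind(workers, "tcp://*:6001");

	char job[256];
	char ready[1];
	while (true)
	{
		// block until some server says it is ready...
		zmq_recv(workers, ready, sizeof(ready), 0);

		// ...then block until a job arrives, and hand it over
		int n = zmq_recv(jobs, job, sizeof(job), 0);
		if (n < 0)
			n = 0;			// error handling omitted
		else if (n > (int) sizeof(job))
			n = sizeof(job);	// oversize job, truncated
		zmq_send(workers, job, n, 0);
	}
}

the rep socket fair-queues the servers' requests, so ready servers
are served in turn; the load-balancing (lru queue) broker in the
guide is the same idea done with router sockets, so several servers
can be in flight at once.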

	andrew

On Jun 1, 2012, at 2:43 AM, Ben Gray wrote:

> I have a number of clients talking to a number of servers, which may
> or may not be accessible at any given time. Which server processes a
> given client message doesn't matter, but exactly one should, so a
> pretty basic round-robin push/pull system seemed to make sense. As
> only the server locations are known, it made sense to bind on the
> servers and give the clients a list of endpoints to connect to.
> 
> However, it turns out that even while a pipe is still in the
> connecting state, messages are still round-robined into its buffer.
> This means a message that would otherwise get a high-speed response
> may instead sit idle waiting for a given server to become available.
> Ian provided me with some test code I have been using that simulates
> this nicely:
> 
> #include <cstdlib>
> #include <cstring>
> #include <iostream>
> #include <string>
> 
> #include "zmq.h"
> 
> int main (void)
> {
> 	void *context = zmq_ctx_new();
> 
> 	// one "server": a PULL socket bound on 5555; nothing ever
> 	// listens on 5556, so that connection stays in the
> 	// connecting state for the whole run
> 	void *to = zmq_socket(context, ZMQ_PULL);
> 	zmq_bind(to, "tcp://*:5555");
> 
> 	// the "client": a PUSH socket connected to both endpoints
> 	void *from = zmq_socket (context, ZMQ_PUSH);
> 	zmq_connect(from, "tcp://localhost:5555");
> 	zmq_connect(from, "tcp://localhost:5556");
> 
> 	// round robin sends half of these into the buffer of the
> 	// never-connected 5556 pipe
> 	for (int i = 0; i < 10; ++i)
> 	{
> 		std::string message("message ");
> 		message += ('0' + i);
> 
> 		std::cout << "Sending " << message << std::endl;
> 		zmq_send(from, message.data(), message.size(), 0);
> 	}
> 
> 	// only the messages routed to 5555 ever arrive; this loop
> 	// blocks forever waiting for the rest
> 	char buffer[16];
> 	for (int i = 0; i < 10; ++i)
> 	{
> 		memset(buffer, 0, sizeof(buffer));
> 		zmq_recv(to, buffer, sizeof(buffer), 0);
> 		std::cout << "Got " << buffer << std::endl;
> 	}
> 
> 	zmq_close(from);
> 	zmq_close(to);
> 	zmq_ctx_destroy(context);
> 
> 	return EXIT_SUCCESS;
> }
> 
> 
> Short of setting all the send high water marks to 0 messages, is there
> a good workaround to stop messages being buffered for unconnected pipes?
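> 
> For reference, the workaround I mean is just this on the PUSH socket
> (untested; and if I read the 3.x docs right, a value of 0 actually
> means "no limit", so 1 is the lowest real cap, and even then each
> pipe still holds one message plus whatever the TCP buffers absorb):
> 
> 	int hwm = 1;
> 	zmq_setsockopt(from, ZMQ_SNDHWM, &hwm, sizeof(hwm));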
> 
> As far as I can tell, my options are to:
> a) attempt a connection to the server on a different socket, use the
> new monitor callbacks to see if it gets connected, and at that point
> tell the main socket to connect to it instead (see the sketch after
> option b).
> 
> b) patch the code to provide a socket option that treats a connecting
> pipe as though it were full.
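> 
> Roughly what I mean by (a), continuing from the test program above so
> "context" and "from" are the same variables. This is written against
> the zmq_socket_monitor()/PAIR-socket style of event delivery rather
> than the callback API, and the inproc name is made up, so treat it as
> a sketch only:
> 
> 	// probe socket whose only job is to test reachability;
> 	// the monitor must be set up before connecting
> 	void *probe = zmq_socket(context, ZMQ_PUSH);
> 	zmq_socket_monitor(probe, "inproc://probe-events", ZMQ_EVENT_CONNECTED);
> 
> 	// PAIR socket on which the monitor delivers events
> 	void *events = zmq_socket(context, ZMQ_PAIR);
> 	zmq_connect(events, "inproc://probe-events");
> 
> 	zmq_connect(probe, "tcp://localhost:5556");
> 
> 	// we only subscribed to ZMQ_EVENT_CONNECTED, so any event
> 	// message at all means the connect succeeded; drain its
> 	// frames without depending on their exact layout
> 	int more = 1;
> 	size_t more_size = sizeof(more);
> 	while (more)
> 	{
> 		zmq_msg_t frame;
> 		zmq_msg_init(&frame);
> 		zmq_msg_recv(&frame, events, 0);
> 		zmq_msg_close(&frame);
> 		zmq_getsockopt(events, ZMQ_RCVMORE, &more, &more_size);
> 	}
> 
> 	// the endpoint is known reachable, attach it for real
> 	zmq_connect(from, "tcp://localhost:5556");
> 	zmq_close(probe);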
> 
> The first option seems overly messy to me, but if I'm going to patch
> the library I want to check whether someone has a better solution
> first, or a better patch.
> Actually, I don't see the benefit of round-robining to a pipe that is
> still connecting while other pipes are active, so this might even be
> bug territory.
> 
> Ben
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev


------------------
Andrew Hume  (best -> Telework) +1 623-551-2845
andrew at research.att.com  (Work) +1 973-236-2014
AT&T Labs - Research; member of USENIX and LOPSA



