[zeromq-dev] Round robin problems with offline servers

Ben Gray ben at benjamg.com
Fri Jun 1 11:43:01 CEST 2012


I have a number of clients talking to a number of servers, any of which
may or may not be accessible at a given time. It doesn't matter which
server processes a client message, but only one should, so a fairly
basic round-robin push / pull system seemed to make sense. Since only
the server locations are known, it made sense to bind on the servers
and give the clients a list of addresses to connect to.

However, it turns out that even while a pipe is still in the connecting
state, messages are routed into its buffer. This means that what was a
high-speed response to any client message may now sit idle waiting for
a given server to become available.
Ian provided me with some test code I have been using, which simulates
this nicely:

#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>

#include "zmq.h"

int main (void)
{
	void *context = zmq_ctx_new();

	void *to = zmq_socket(context, ZMQ_PULL);
	zmq_bind(to, "tcp://*:5555");

	void *from = zmq_socket(context, ZMQ_PUSH);
	zmq_connect(from, "tcp://localhost:5555"); // bound above
	zmq_connect(from, "tcp://localhost:5556"); // no listener; pipe stays connecting

	for (int i = 0; i < 10; ++i)
	{
		std::string message("message ");
		message += ('0' + i);

		std::cout << "Sending " << message << std::endl;
		zmq_send(from, message.data(), message.size(), 0);
	}

	// Only five of the ten messages arrive; the other five sit in the
	// buffer of the still-connecting pipe, so this loop blocks forever.
	char buffer[16];
	for (int i = 0; i < 10; ++i)
	{
		memset(buffer, 0, sizeof(buffer));
		zmq_recv(to, buffer, sizeof(buffer) - 1, 0);
		std::cout << "Got " << buffer << std::endl;
	}

	zmq_close(from);
	zmq_close(to);
	zmq_ctx_destroy(context);

	return EXIT_SUCCESS;
}


Short of setting all the send high-water marks to 0 messages, is there
a good workaround to stop buffering on pipes that are not yet connected?

As far as I can tell, my options are to:
a) attempt a connection to the server on a different socket, use the
new monitor callbacks to see whether it connects, and at that point
tell the main socket to connect to it as well.

b) patch the library to provide a socket option that treats a
connecting pipe as though it were full.

The first option seems overly messy to me, but if I'm going to patch
the library I want to check first whether someone has a better
solution, or a better patch.
Actually, I don't see the benefit of round-robining to connecting pipes
while others are still active, so this might even tread into bug
territory.

Ben


