[zeromq-dev] Load balancing REQ/REP sockets
Martin Sustrik
sustrik at 250bpm.com
Fri Mar 19 15:09:51 CET 2010
Brian,
>Thanks for the clarification. This is quite different from how I was
>thinking about it.
>
>* Is the process lossy? Do any messages get lost when the SENDBUF/RECVBUF
>fill up?
Let's define it in terms of an "exceptional situation". An exceptional
situation happens when _all_ the pipes are full, i.e. the queues on both
peers, SNDBUF, RCVBUF, network buffers etc. Unless there is an exceptional
situation, the transport is lossless.
When an exceptional situation happens, it is resolved depending on the
messaging pattern used. For PUB/SUB the exceptional situation is resolved
by dropping messages. For all other messaging patterns, an exceptional
situation causes send to block (or return EAGAIN in the case of a
non-blocking send).
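To make the two behaviours concrete, here is a toy Python sketch of the semantics described above (the `Pipe` and `send` names are hypothetical illustrations, not the 0MQ API):

```python
from collections import deque

class Pipe:
    """Toy model of a 0MQ pipe with a fixed total capacity
    (queues + SNDBUF + RCVBUF + network buffers, all lumped together)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def full(self):
        return len(self.queue) >= self.capacity

def send(pipes, msg, pattern, block=True):
    """Illustrative send semantics per messaging pattern."""
    if pattern == "PUB":
        # PUB/SUB: deliver to every pipe that has room, silently drop otherwise.
        for p in pipes:
            if not p.full():
                p.queue.append(msg)
        return True
    # All other patterns: use any pipe with room; if all are full,
    # either block or signal EAGAIN.
    for p in pipes:
        if not p.full():
            p.queue.append(msg)
            return True
    if block:
        raise RuntimeError("would block until a pipe drains")
    return False  # non-blocking send: the EAGAIN case
```

Note that a PUB send always "succeeds" from the sender's point of view; only the full pipes miss the message.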
>* Could we add this description to the docs?
Yes. If you feel like writing it down in a sane manner, give it a try.
>* Can I think of the sender stopping sending messages at HWM +
>sizeof(SENDBUF + RECVBUF)?
You've forgotten about the data in transit on the network. If there's,
say, a satellite connection between sender and receiver, it can keep a
lot of data "in the air".
>But back to my original question of having a more dynamic load
>balancing. In my case I have small messages that take different
>amounts of time to process. Often, the total number of these small
>messages won't come close to filling up the default sized
>SENDBUF/RECVBUF. Obviously, the next step is to reduce the size of
>SENDBUF/RECVBUF. What is the effect of doing that on
>latency/throughput? Can you think of any other way of handling this
>type of load balancing?
Making the buffers smaller has a negative impact on throughput.
However, that's not the point here IMO.
What you want is to send at most one request to each replier at a time.
You can easily hack the XREQ socket to behave that way.
However, once you want to implement it in a generic manner, the problem
looks like this:
Let's assume N requesters and M repliers, with X intermediate nodes
forming a directed acyclic graph, and nodes being added and removed
dynamically. We want an algorithm that spreads the processing load
fairly among the repliers.
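For a single node, the "at most one outstanding request per replier" idea can be sketched like this (a hypothetical dispatcher, generalised to a per-replier credit count; this is not how XREQ is implemented):

```python
class LeastLoadedBalancer:
    """Dispatch each request to the replier with the fewest requests
    in flight, never exceeding a fixed credit per replier."""
    def __init__(self, repliers, credits=1):
        self.outstanding = {r: 0 for r in repliers}
        self.credits = credits

    def dispatch(self, request):
        # Pick the replier with the fewest requests in flight.
        replier = min(self.outstanding, key=self.outstanding.get)
        if self.outstanding[replier] >= self.credits:
            return None  # everyone is busy: caller queues or blocks
        self.outstanding[replier] += 1
        return replier

    def complete(self, replier):
        # A reply came back: the replier is free for more work.
        self.outstanding[replier] -= 1
```

With credits=1 this is exactly "at most one request per replier"; the hard part, as described above, is making the same idea work across a dynamic graph of intermediate nodes rather than at a single hop.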
The current system doesn't solve the problem in its entirety; however,
it does guarantee that no replier and no subgraph of the system will be
overloaded by an ever-growing amount of work. If a destination is not
able to keep up, eventually all its buffers will fill up and work will
be redirected to different destinations.
The original problem requires more than that. It requires some oracle
that can estimate the processing capacity of a particular route and
then load it as much as possible without ever overloading it. TCP's
congestion control mechanism (slow start + congestion avoidance) is
something pretty similar. Still, the whole thing needs much more
thinking and experimenting.
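To illustrate the analogy, here is a toy capacity probe that applies the slow-start/congestion-avoidance shape to a per-route request window rather than to bytes on a TCP connection (all names and thresholds are hypothetical):

```python
class CapacityProbe:
    """Estimate how many requests a route can have in flight:
    grow the window exponentially, then linearly, and back off
    multiplicatively when the route is overloaded."""
    def __init__(self, ssthresh=8):
        self.window = 1          # max requests in flight on this route
        self.ssthresh = ssthresh

    def on_reply(self):
        if self.window < self.ssthresh:
            self.window *= 2     # slow start: exponential growth
        else:
            self.window += 1     # congestion avoidance: linear growth

    def on_overload(self):
        # Route couldn't keep up (e.g. buffers filled): back off.
        self.ssthresh = max(1, self.window // 2)
        self.window = 1
```

The open questions are the same as in TCP: what counts as the overload signal, and how quickly to re-probe after backing off.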
Martin