[zeromq-dev] Handling OOM
Martin Sustrik
sustrik at 250bpm.com
Wed May 18 08:56:26 CEST 2011
Hi Paul,
> Despite a few attempts by Martin to scare me off, I'm working on making
> zeromq work in out-of-memory conditions. And I really want to discuss the
> semantics of it.
>
> The first assertion pointed out by Martin is:
>
> src/msg.cpp:56
>
> This is not an assertion, and it has two sides. One is when the user calls
> zmq_msg_init_size(): if no memory can be allocated, the error is simply
> propagated to the user.
There are two scenarios even on that side. Either the user asked to
allocate a bogus amount of memory, in which case returning ENOMEM is
sufficient.
Or the user asked to allocate a reasonably sized message, but the
allocation failed because the system is running low on memory. In that
case emergency measures would be more appropriate.
It's not clear how to distinguish the two, though.
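For the user-facing side the calling convention is already there:
zmq_msg_init_size() returns -1 and sets errno on failure, so the caller
just checks the return value. Something along these lines (an
illustration only, not code from the tree):

    #include <zmq.h>
    #include <errno.h>
    #include <stdio.h>

    /*  Sketch: allocate a message buffer and report ENOMEM to the
        caller instead of asserting.  */
    int make_message (zmq_msg_t *msg, size_t size)
    {
        if (zmq_msg_init_size (msg, size) == -1) {
            if (errno == ENOMEM)
                /*  Either the requested size was bogus or the system
                    is genuinely out of memory; we cannot tell which.  */
                fprintf (stderr, "cannot allocate %zu bytes\n", size);
            return -1;
        }
        return 0;
    }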
> When it's called by the code handling the network, on OOM the decoder
> closes the underlying connection (e.g. somewhere along lines 76-80 in
> decoder.cpp). This leads to lots of the errors described later. This
> behaviour is very convenient if you haven't set a maximum message size and
> a header announcing a big message is received from the network, but for
> small messages closing the connection seems like overkill, and it provokes
> all the memory assertions in the reconnection code. Should that be changed
> to something more reasonable? What I'm thinking of is to close the
> connection if the message size is bigger than some arbitrary fixed value
> (e.g. 64 KB or 1 MB; the actual value doesn't matter, but it should
> somehow compensate for the cost of reconnecting) when no maximum message
> size is specified. When the connection should not be closed, just wait for
> some time (or wait until some message is sent, which is probably hard and
> unreasonable to do).
I would say the subsequent asserts should be fixed as a first step.
Heuristics about connection dropping can be discussed afterwards.
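Just to pin down what such a heuristic would look like, a rough sketch
(all names and the threshold here are made up, this is not the actual
decoder code):

    #include <cstddef>

    //  Sketch of the policy described above: drop the connection only
    //  when no maximum message size is configured and the incoming
    //  message is suspiciously large; otherwise back off and retry.
    bool disconnect_on_oom (size_t msg_size, bool maxmsgsize_set)
    {
        //  Arbitrary cut-off; it should roughly compensate for the
        //  cost of tearing the connection down and reconnecting.
        const size_t threshold = 1024 * 1024;

        if (!maxmsgsize_set && msg_size > threshold)
            return true;
        return false;
    }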
> The next group of assertions:
>
> encoder.hpp:47
> zmq_init.cpp:47
> poller_base.cpp:52 (std::bad_alloc exception)
>
> These all seem related to the reconnection code (there are plenty of
> others I've seen before, these are just the ones that repeated today),
> and they will partially be fixed if the reconnection code runs more
> rarely in OOM conditions. The complexity of fixing them was described in
> a previous email: they are either allocations in constructors, or
> exceptions in standard classes. Martin, they really do happen in the test
> programs, so I *really* need a hint on how to fix them, before I start
> fixing them in a wrong way :)
Right. I've said it's going to be difficult :)
What's needed in these cases is propagating the errors to the caller and
deallocating the objects properly. That means changing constructors to
init functions with proper error return values. When the error is
propagated to the appropriate level, an emergency measure should be
taken, such as closing the new connection that caused creation of the
engine in the accept case or postponing the re-connection in the connect
case.
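In other words, roughly this shape (a sketch only; the real classes
will differ):

    #include <cstddef>
    #include <cerrno>
    #include <new>

    class engine_t
    {
    public:
        //  The constructor does nothing that can fail.
        engine_t () : buf (NULL) {}

        //  All fallible allocation moves into an init function that
        //  reports failure via the return value instead of asserting.
        int init (size_t bufsize)
        {
            buf = new (std::nothrow) unsigned char [bufsize];
            if (!buf) {
                errno = ENOMEM;
                return -1;
            }
            return 0;
        }

        ~engine_t () { delete [] buf; }

    private:
        unsigned char *buf;
    };

The caller checks the result and takes the emergency measure, e.g.
drops the freshly accepted connection or postpones the reconnect:

    engine_t *engine = new (std::nothrow) engine_t;
    if (!engine || engine->init (8192) == -1) {
        delete engine;
        //  ...close the connection / schedule reconnection later
    }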
> And the last issue today:
>
> yqueue.hpp:109
>
> This also has two sides. One is when we are receiving messages; as a
> first step I plan to turn the assertion into a disconnect, which will
> probably then be fixed in a way similar to the solution of the first
> issue in this letter.
Ack.
> The other side of this assertion is when the user sends messages.
> zmq_send in this case should return -1, setting errno to ENOMEM. This
> brings up an issue, as people are used to relying on there being no
> errors while sending the second, third, etc. message parts. But good code
> has an assert for this case anyway, so it will fail with the same
> assertion a bit later in the user code, which is a good thing. But if the
> user wants to continue using the socket we have three variants:
>
> 1. All message parts sent so far are discarded, so the user must start
> again from the first one
> 2. All message parts sent so far are kept, so the user can send each
> message part in a loop
> 3. All message parts are discarded, and they continue to be discarded
> until a message without ZMQ_SNDMORE has been sent
>
> It seems (2) is not very useful (there is a chance that memory will be
> freed by the IO code, or by other threads, but it can also lead to a
> deadlock). The third is quite strange, but something similar is already
> implemented in zeromq for the case when the connection is dropped in the
> middle of a multipart message. I see a good reason for (1). If in Python
> you use:
>
> socket.send_multipart(messages)
>
> it's polite to raise MemoryError when getting ENOMEM, and there is no way
> to discover after which message the error happened. So if (1) is
> implemented you can recover from MemoryError gracefully, by always
> starting from a clean state (and you usually can recover from a memory
> error in Python by catching the exception). Discarding the message parts
> sent so far will also free some resources. But care must be taken to
> always discard messages whenever ENOMEM is encountered. Also, yqueue is
> probably not well suited to being a backtracking queue. Anyway, my vote
> is for (1), if it can be implemented.
>
> Thoughts?
My vote would be for 3, as it is the least invasive for the user code.
The message just disappears, which has the same effect as if the message
had been lost somewhere in the network.
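From the application's point of view that keeps the usual multipart
send loop intact. Roughly, using the 2.x zmq_send (socket, msg, flags)
signature (a sketch of the intended semantics, not of real code):

    #include <zmq.h>
    #include <stddef.h>

    //  Sketch of variant 3 from the sender's side: if one part fails
    //  with ENOMEM the whole logical message is dropped, but the
    //  remaining parts can still be pushed through zmq_send -- under
    //  variant 3 the library keeps discarding them up to the final
    //  part, so the framing on the wire stays intact.
    bool send_parts (void *socket, zmq_msg_t *parts, size_t count)
    {
        bool ok = true;
        for (size_t i = 0; i != count; i++) {
            int flags = (i + 1 < count) ? ZMQ_SNDMORE : 0;
            if (zmq_send (socket, &parts [i], flags) == -1)
                ok = false;   //  message lost, as if dropped in the network
        }
        return ok;
    }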
Martin