[zeromq-dev] Reliability question
hannes at eyealike.com
Tue Aug 3 19:53:46 CEST 2010
On Mon, Aug 2, 2010 at 1:49 AM, Steven McCoy <steven.mccoy at miru.hk> wrote:
> On 2 August 2010 16:28, Martin Sustrik <sustrik at 250bpm.com> wrote:
>> The LBM solution makes the following assumption: we have enough resources
>> (disk space) to store the historical feed until the crybaby consumer is
>> fixed/replaced/killed by the datacenter staff.
>> When applying it to Internet there are two problems:
>> 1. Slow/hung-up consumers are out of publisher's control. They may never
>> be fixed. They can literally sit there and cause problems for years.
>> 2. The resource (memory/disk space) allocated to your communication at a
>> middle box (think of an Internet backbone) is going to be severely
>> limited. The worst-case assumption should be that the consumer can stop
>> consuming only for a fraction of a second, otherwise the buffers at the
>> middle nodes start overflowing.
> Which, just like last value caching, makes it more suited to a higher
> layer. You need an entirely new infrastructure to propagate live data to a
> continuous archive system that can scale on demand to client requests.
> Otherwise you end up with a framework like Apache Hadoop tuned to be the
> equivalent of Vhayu's historical tick-data store.
I have a hunch that clients needing 100% reliable messaging tend to have low
bandwidth requirements, and that high-bandwidth messaging is used to send
frequent updates about a relatively small dataset. In other words, there
could be two kinds of reliability: one for low bandwidth with a cast-iron
guarantee (and modest buffer sizes), and a degraded one for high bandwidth
which only guarantees that it reliably *detects* message loss and
reordering. Clients of a high-bandwidth approach could then use out-of-band
(OOB) methods to bring their state back up to date, as long as they know
that there was a gap.
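As a rough illustration of the "detect, don't retransmit" idea: if the publisher stamps every message with a monotonically increasing sequence number, the subscriber can detect both loss (a jump forward) and reordering (an older number arriving late) with a few lines of bookkeeping. This is a hypothetical sketch, not a ZeroMQ API; the class and field names are invented for the example.

```python
class SequenceChecker:
    """Tracks the last-seen sequence number on a feed and records
    gaps (lost ranges) and reordered (late-arriving) messages.
    Purely illustrative -- real feeds would also handle wraparound,
    restarts, and delayed messages that later fill a gap."""

    def __init__(self):
        self.expected = 0     # next sequence number we expect to see
        self.gaps = []        # list of (first_missing, last_missing) ranges
        self.reordered = []   # sequence numbers that arrived out of order

    def on_message(self, seq):
        if seq == self.expected:
            # in-order delivery, the common case
            self.expected = seq + 1
        elif seq > self.expected:
            # everything between expected and seq-1 was lost (or delayed)
            self.gaps.append((self.expected, seq - 1))
            self.expected = seq + 1
        else:
            # an older message arrived after a newer one
            self.reordered.append(seq)
```

A client seeing sequence numbers `0, 1, 4, 2` would record the gap `(2, 3)` and note `2` as reordered, and could then fetch the missing state through whatever OOB channel the application provides.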