[zeromq-dev] Can a Trading Platform rely on ØMQ?

Anatoly tolitius at gmail.com
Thu Sep 8 21:08:17 CEST 2011

Thanks Martin,

On Wed, Sep 7, 2011 at 2:51 AM, Martin Sustrik <sustrik at 250bpm.com> wrote:

> Hi Anatoly,
>       1. Is there a built-in mechanism to e.g. start offloading
>> messages to disk, in case a queue is close to overflowing? Or to just
>> STOP instead of discarding messages silently?
> There's ZMQ_SWAP option in 2.x versions.
> Applying backpressure ("STOP") works for REQ/REP and PUSH/PULL patterns.
> With PUB/SUB, applying backpressure combined with a slow or dead subscriber
> can lead to unbounded latencies, or even to deadlock of the whole message
> distribution system.
>       2. The Black Box pattern is something that looks the closest to what
>> we need, however we already have the publishers themselves sharded. Would
>> sharding them further into different subscribers' "PUSH"ers be necessary
>> to avoid a subscriber overflow? ( e.g. the processing side would be
>> slower, as it deserializes [probably Google protobufs] and persists
>> messages to disk )
> I guess you are speaking of market data here. If the publisher is
> overloaded, think of creating a more complex topology with devices in the
> middle to distribute the load.
>       3. In case the answer to 2 is YES, doesn't the additional internal
>> sharding introduce a new point of failure ( e.g. a thread dies, etc. ),
>> in which case is there some kind of built-in recovery / retry
>> mechanism ( similar to PGM, but at a bit of a higher level, since we are
>> dealing with ZMTP messages )?
> If the point that stores the message dies, the message is lost. That
> applies to PGM or any other mechanism. The only option is to store the
> messages on disk, with an obvious performance penalty. Even then, if the
> disk dies, the messages are lost. To prevent that you have to store them
> on a RAID, a SAN or suchlike. And if the whole RAID is destroyed, they
> are gone as well, and so on.
> Martin
