[zeromq-dev] QoS support
Martin Sustrik
sustrik at 250bpm.com
Sat Nov 14 20:09:04 CET 2009
Great discussion, guys! I'm enjoying it immensely :)
Maybe it would make sense to sketch the networking stack and play with
moving QoS controls up and down it a little, to find out how that
affects latency, fairness, head-of-line blocking etc.?
Martin
Steven McCoy wrote:
> 2009/11/12 Erich Heine <sophacles at gmail.com>
>
> See answers inline below, but at a general level, why are you so
> opposed to the idea of endpoints participating in QoS?
>
>
> I like the idea, but unfortunately QoS is a catchall term that really
> has no meaning without pinpointing the precise area of quality desired.
>
> The next layer is the OS kernel: can it accept all the incoming
> packets without drops? Evidence has shown that best effort is
> generally preferable to a prioritised queuing scheme.
>
> I can accept this may be the case, however I'd like to know the link
> speed, what is meant by accept (how far up the stack the packets are
> processed), and what preferable means. Also, in another place in
> the thread you mention that prio queues can add latency over best
> effort. Is this added latency noticed in the high-prio packets, or
> do they get processed faster than best effort, while the other,
> low-prio packets suffer disproportionately? If the latter, it can
> be a perfectly acceptable outcome.
>
>
> So much so that Linus & Alan removed any attempts at it from Linux,
> so, except for using something like QNX, you have limited platform
> support for this scenario.
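
For what it's worth, the one QoS knob that survives on pretty much
every platform is the IP ToS/DSCP byte: the sender marks its packets
and anything along the path that understands DiffServ does the
prioritisation. A minimal sketch against plain BSD sockets (current
libzmq releases expose the same thing as a ZMQ_TOS socket option, if
I'm not mistaken):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Mark every packet sent on this socket as DSCP EF ("expedited
       forwarding").  The marking only matters if something along the
       path (the local queueing discipline, switches, routers) is
       configured to honour it. */
    static int mark_expedited (int fd)
    {
        int tos = 0xB8;   /* DSCP 46 (EF) in the top six bits of the ToS byte */
        if (setsockopt (fd, IPPROTO_IP, IP_TOS, &tos, sizeof tos) != 0) {
            perror ("setsockopt (IP_TOS)");
            return -1;
        }
        return 0;
    }

Whether that buys you anything is entirely up to the network gear in
between.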
>
> So your solution to the lack of priority queueing in the network
> stack is to implement priority queueing in the app? Doesn't this
> just beg for many poor implementations?
>
>
> This is how many large vendors currently implement it, either in the
> app or in a broker. I surmise the logic is that everything underneath
> the application layer is sufficiently fast not to require such
> support: at millions of messages per second, whether you are one or
> twenty packets behind is not going to be too noticeable for many.
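
That app-level approach is simple enough to sketch with 0MQ itself:
keep the two streams on separate sockets and have the application
always drain the high-priority one first. A rough sketch, with made-up
endpoints and written against the current libzmq C API (so not the
exact 2.x call signatures):

    #include <zmq.h>

    /* Strict priority done in the application: drain the high-priority
       socket completely before looking at the low-priority one.
       Endpoints and socket types are placeholders for the example. */
    int main (void)
    {
        void *ctx  = zmq_ctx_new ();
        void *high = zmq_socket (ctx, ZMQ_PULL);
        void *low  = zmq_socket (ctx, ZMQ_PULL);
        zmq_connect (high, "tcp://127.0.0.1:5556");   /* e.g. sensor data  */
        zmq_connect (low,  "tcp://127.0.0.1:5557");   /* e.g. video frames */

        zmq_pollitem_t items [] = {
            { high, 0, ZMQ_POLLIN, 0 },
            { low,  0, ZMQ_POLLIN, 0 }
        };

        char buf [4096];
        while (1) {
            zmq_poll (items, 2, -1);
            if (items [0].revents & ZMQ_POLLIN) {
                /* Service the high-priority stream until it runs dry. */
                while (zmq_recv (high, buf, sizeof buf, ZMQ_DONTWAIT) >= 0)
                    ;   /* ... process a sensor message ... */
            }
            else if (items [1].revents & ZMQ_POLLIN) {
                /* Only reached when the high-priority socket is idle. */
                if (zmq_recv (low, buf, sizeof buf, ZMQ_DONTWAIT) >= 0)
                    ;   /* ... process one video frame, then re-check ... */
            }
        }
        return 0;   /* never reached in this sketch */
    }

The obvious trade-off is that the low-priority stream can be starved
outright while the high-priority one stays busy, which brings us
straight back to the head-of-line blocking and fairness questions.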
>
> Having priority support implies queuing, which implies additional
> latency issues and produces a large scope of trade-offs, which may
> depend on the application domain.
>
> The main issue seems to be whether this is really a requirement of
> the transport and not simply of the API feeding the incoming data
> streams to the application workers. Then come the continuing
> questions about the queue: how large can it grow, is it persistent,
> is it mapped to disk?
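
For the in-memory part, the per-socket queue can already be bounded
with the high-water-mark option (and, if memory serves, there is also
a swap option that spills the overflow to disk); real persistence
(surviving a restart, replay and so on) is arguably a job for
something above the transport. A minimal sketch, using the option
name from current libzmq and an arbitrary limit:

    #include <zmq.h>

    /* Bound the receive-side queue to 1000 messages per connected peer;
       what happens beyond that (block upstream or drop) depends on the
       socket type.  The limit is an arbitrary example. */
    static void *make_bounded_socket (void *ctx)
    {
        void *s = zmq_socket (ctx, ZMQ_PULL);
        int hwm = 1000;
        zmq_setsockopt (s, ZMQ_RCVHWM, &hwm, sizeof hwm);
        return s;
    }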
>
> If you look at the system as a whole you have a few choices. You can
> say sensor data has absolute #1 priority and any other traffic, i.e.
> the video streams, should even halt network usage until the sensor
> broadcast is complete - a concept of restricting the flow of
> information at the source. Conversely, you can restrict the flow of
> information at the receiver and say everything is going to be sent
> on the network and all receivers must cope with it - with QA and
> development of such receivers designed around the worst-case
> scenario.
>
> --
> Steve-o
>
>