[zeromq-dev] QoS support

Erich Heine sophacles at gmail.com
Thu Nov 12 06:59:34 CET 2009


See answers inline below, but at a general level, why are you so opposed to
the idea of endpoints participating in QoS?

On Wed, Nov 11, 2009 at 8:37 PM, Steven McCoy <steven.mccoy at miru.hk> wrote:

>
> It depends on what the limiting factor in the system is and what quality
> you are expecting.  If you are talking about pure network bandwidth, then
> QoS settings on the host are completely ignored if the switch port is maxed
> out by video packets.  Such a system should be set up with bandwidth
> limiting on the video feed, separate LANs for the video and sensors, or
> simple prioritization at the network elements.
>
You are correct; however, I am talking about endpoint QoS. I cannot solve
problems that occur off the host with host-based QoS, and I never assumed
link saturation here. My ingress controls simply feed data to higher levels
of the network stack in priority order. Network stacks add latency, so
reducing the time packets spend waiting in queue can matter (e.g. by
selecting the high-priority packets first).
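
For concreteness, one example of the host-level knobs in play here (a rough
sketch, Linux-specific; a plain UDP socket stands in for whatever transport
the endpoint actually uses):

/* Rough sketch, Linux-specific: mark traffic at the sending endpoint so
 * that the receiving host, or any cooperating network element, can
 * classify it by priority. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>

static int mark_high_priority(int fd)
{
    int tos = 46 << 2;   /* DSCP EF in the upper six bits of the TOS byte */
    if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof tos) != 0) {
        perror("IP_TOS");
        return -1;
    }
    int prio = 6;        /* consulted by the local egress qdisc (pfifo_fast/tc) */
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof prio) != 0) {
        perror("SO_PRIORITY");
        return -1;
    }
    return 0;
}

/* On the receive side, enabling IP_RECVTOS and reading the TOS byte from
 * recvmsg() ancillary data is one way for an ingress classifier to decide
 * which packets to hand up the stack first. */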


> The next layer is the OS kernel: can it accept all the incoming packets
> without drops?  Evidence has shown that best effort is generally
> preferable to a prioritised queuing scheme.
>
I can accept that this may be the case; however, I'd like to know the link
speed, what "accept" means here (how far up the stack the packets are
processed), and what "preferable" means. Also, elsewhere in the thread you
mention that priority queues can add latency over best effort. Is this added
latency seen in the high-priority packets, or do they get processed faster
than best effort while the low-priority packets suffer disproportionately?
If the latter, that can be a perfectly acceptable outcome.
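
To make that concrete, here is a toy model of strict two-level priority
dequeue (not real kernel code; everything in it is illustrative): the
high-priority packets always go up the stack first, and all of the extra
waiting lands on the best-effort class.

/* Toy model only: strict two-level priority dequeue.  The point is that
 * high-priority packets are always delivered first, so any added queueing
 * delay is paid by the best-effort class. */
#include <stdio.h>

typedef struct { int id; int prio; } packet_t;   /* prio: 1 = high, 0 = best effort */

/* Index of the next packet to hand up the stack. */
static int pick_next(const packet_t *q, int n)
{
    for (int i = 0; i < n; i++)
        if (q[i].prio == 1)
            return i;                /* any high-priority packet wins */
    return n ? 0 : -1;               /* otherwise the oldest best-effort one */
}

int main(void)
{
    packet_t q[] = { {1,0}, {2,0}, {3,1}, {4,0}, {5,1} };
    int n = 5;
    while (n > 0) {
        int i = pick_next(q, n);
        printf("deliver packet %d (prio %d)\n", q[i].id, q[i].prio);
        for (int j = i; j < n - 1; j++)
            q[j] = q[j + 1];         /* drop it from the queue */
        n--;
    }
    return 0;
}

Whether the kernel's own queueing behaves this way under load is exactly the
measurement I am asking about.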


> Finally at the application layer you have simply processing power to handle
> the incoming data streams.  This depends on number of threads, number of
> cores those threads are running on, etc.  Similar to Rendezvous is to have
> one incoming stream running at best effort taking data from the kernel;
> separate worker threads can then cherry-pick messages from the incoming
> queue based on tagged priority, whether tagged at the source or derived from
> subject or content.  You can tweak the worker pool depending on your
> requirements, for instance keep some workers in reserve to only process high
> priority messages.
>

So your solution for avoiding priority queueing in the network stack is to
implement priority queueing in the application? Doesn't that just invite a
lot of poor implementations?
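
If it is going to live in the application, the loop below is roughly how I
read your proposal (a rough, compressed sketch; context setup, the PUSH side,
and error handling are omitted, and every name and inproc endpoint is a
placeholder):

/* Rough sketch of the worker-pool idea above.  An ingress thread reads
 * everything at best effort and pushes each message to inproc://prio-high
 * or inproc://prio-low based on its tagged priority. */
#include <zmq.h>
#include <stddef.h>

/* A general worker prefers the high-priority queue but falls back to
 * best effort when it is empty. */
static void *general_worker(void *ctx)
{
    void *hi = zmq_socket(ctx, ZMQ_PULL);
    void *lo = zmq_socket(ctx, ZMQ_PULL);
    zmq_connect(hi, "inproc://prio-high");
    zmq_connect(lo, "inproc://prio-low");

    zmq_pollitem_t items[2] = {
        { hi, 0, ZMQ_POLLIN, 0 },
        { lo, 0, ZMQ_POLLIN, 0 },
    };
    char buf[256];
    for (;;) {
        zmq_poll(items, 2, -1);
        if (items[0].revents & ZMQ_POLLIN)        /* cherry-pick high first */
            zmq_recv(hi, buf, sizeof buf, 0);
        else if (items[1].revents & ZMQ_POLLIN)
            zmq_recv(lo, buf, sizeof buf, 0);
        /* ... process the message ... */
    }
    return NULL;
}

/* A "reserved" worker is the same loop minus the connect to
 * inproc://prio-low, so it only ever serves high-priority traffic. */

My concern is exactly that every application ends up re-implementing a loop
like this, each slightly differently.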


> For example, if you have four cores on a box, you could reserve two for
> handling sensor data, keep the third for queuing, and bind everything else
> to the fourth.
>
>
Making core-based reservations like this is a form of host-based QoS; the
service in question is not network bandwidth in this case but processing
power.
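
On Linux that kind of reservation is typically done with CPU affinity; a
minimal sketch, with core numbers following your four-core example:

/* Minimal sketch of core reservation via thread affinity (Linux-specific).
 * Per the four-core example: sensor workers on cores 0-1, the queuing
 * thread on core 2, everything else on core 3. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a single core. */
static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

/* e.g. each sensor worker calls pin_to_core(0) or pin_to_core(1), the
 * queuing thread pin_to_core(2), and all remaining threads pin_to_core(3). */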

Regards,
Erich