[zeromq-dev] feedback on proposed Context API addition

Calvin de Vries devries.calvin at gmail.com
Wed Feb 8 20:26:09 CET 2012


Hi all,

I'm the user from IRC (calvin) who spoke with Chuck about this idea. The
only modification I have (which we just discussed) is that option #3 would
not let the scheduler use ANY CPU, but only the CPUs listed in the bitmask
that is passed to zmq_init_with_affinity.

For example:

If the user asks for 3 I/O threads and the bitmask contains CPUs 0 and 1,
the function would pin thread #1 to CPU 0 and thread #2 to CPU 1, and
thread #3 would be allowed to run on either CPU 0 or CPU 1, but nothing else.
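
To make that concrete, here is a rough sketch of how the pinning loop inside
the context might look on Linux with plain pthreads. The function and
variable names are made up for illustration; this is not actual libzmq code:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <pthread.h>

    /* Hypothetical sketch only. Pin 'io_threads' worker threads to the CPUs
       decoded from the bitmask ('cpus', 'num_cpus'). Threads beyond
       num_cpus are bound to the whole listed set rather than left
       completely unpinned. */
    static void pin_io_threads (pthread_t *threads, int io_threads,
                                const int *cpus, int num_cpus)
    {
        cpu_set_t whole_set;
        CPU_ZERO (&whole_set);
        for (int i = 0; i < num_cpus; i++)
            CPU_SET (cpus[i], &whole_set);

        for (int t = 0; t < io_threads; t++) {
            if (t < num_cpus) {
                /* one-to-one pinning for the first num_cpus threads */
                cpu_set_t one;
                CPU_ZERO (&one);
                CPU_SET (cpus[t], &one);
                pthread_setaffinity_np (threads[t], sizeof (cpu_set_t), &one);
            } else {
                /* "excess" threads may float, but only inside the bitmask */
                pthread_setaffinity_np (threads[t], sizeof (cpu_set_t),
                                        &whole_set);
            }
        }
    }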

I think this is a good improvement because the extra thread stays confined
to the listed CPUs, so it can never land on a core where it would interfere
with other threads in the application and cause performance issues.

Thanks,
Calvin de Vries


On Wed, Feb 8, 2012 at 1:56 PM, Chuck Remes <cremes.devlist at mac.com> wrote:

> A user on IRC (calvin) may post to this thread later. He popped
> into the channel asking about pinning a socket to a specific CPU. I pointed
> him at ZMQ_AFFINITY, which is available through zmq_setsockopt(). However,
> as he pointed out, that only controls socket affinity with I/O threads,
> which may themselves still be scheduled on any CPU.
>
> After a little back-and-forth, here's a proposal for an addition to the C
> API.
>
>        void *zmq_init_with_affinity (int io_threads,
>                                      char *cpu_bitmask_buffer,
>                                      size_t bitmask_len);
>
> Such a change would allow a programmer to create a context and specify
> exactly which CPUs the I/O threads should be pinned to. We need a byte
> buffer for the bitmask, plus an explicit length, because systems with more
> than 64 CPUs and/or cores are already available, so a single 64-bit word
> is not enough.
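>
> To illustrate the intended calling convention (assuming, purely for the
> sake of the example, that bit i of byte j selects CPU 8*j + i), a caller
> that wants 3 I/O threads confined to CPUs 0 and 1 might write:
>
>        /* one byte is enough for CPUs 0-7 */
>        char mask[1] = { 0 };
>        mask[0] |= 1 << 0;   /* CPU 0 */
>        mask[0] |= 1 << 1;   /* CPU 1 */
>
>        void *ctx = zmq_init_with_affinity (3, mask, sizeof mask);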
>
> For those cases where +io_threads+ is larger than the number of bits set
> in the bitmask, the function has three reasonable ways of handling the
> excess.
>
> 1. Round-robin assign I/O threads to the CPUs listed in the bitmask.
>
> 2. Pin the "excess" I/O threads to the last CPU listed in the bitmask.
>
> 3. Skip pinning the "excess" I/O threads to any CPU and let the scheduler
> float them around as needed.
>
>
> This addition would not break existing code. Furthermore, we could
> implement zmq_init() internally with a call to zmq_init_with_affinity().
> Passing a zero-length buffer would skip the CPU-affinity pinning
> functionality (much like option 3 above).
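>
> In other words, something roughly like this (sketch only):
>
>        void *zmq_init (int io_threads)
>        {
>            /* zero-length bitmask => no pinning, i.e. current behaviour */
>            return zmq_init_with_affinity (io_threads, NULL, 0);
>        }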
>
> Feedback?
>
> cr
>

