[zeromq-dev] Thread Safe sockets
Gleb Peregud
gleber.p at gmail.com
Fri Feb 3 21:13:42 CET 2012
On Fri, Feb 3, 2012 at 18:19, john skaller
<skaller at users.sourceforge.net> wrote:
>
> On 04/02/2012, at 3:52 AM, Chuck Remes wrote:
>
>> In my personal opinion, it sounds like an OK patch. If people don't use the new api call to allocate their context, they pay no penalty. Assuming that's the case, then I actually kind of like it.
>
> There is a penalty. I believe it is small: two tests which could be reduced to one.
> Compared to the actual operation being done, that seems irrelevant.
> But I don't know: someone should probably measure it against some
> benchmark.
>
> Also, before the code is reverted away, it may be fun to run a couple
> of benchmarks with and without the locking, to see exactly how
> much the locks cost.
>
> I can't do this, I don't have any code to test.
>
>> Kind of odd that you wrote the patch if you don't even need it. I think I'm back to confusion about your goals in this message thread regarding libzmq & Felix.
>
> When I come to a community like this I am taking something which, commercially,
> would be quite expensive. And which people put a lot of work into.
>
> So I like to give something back. It's partly for the "good feeling" but
> also just plain sense in an environment based on free trade
> of ideas (not to mention actual code patches :)
I have a slightly bad feeling about adding extra overhead to the
ZeroMQ core, which is supposed to be as performant as possible.
In the Erlang erlzmq2 bindings this has been solved in the following
way (a rough sketch of the polling loop follows the list):
- so-called NIFs are used
--- these are C functions which run directly in the Erlang
scheduler threads, hence a blocking NIF can block the
whole VM
- one additional "polling" pthread is started for each
ZeroMQ context created by Erlang code
- the polling thread handles a few things:
--- it polls all ZeroMQ sockets attached to the context
--- it polls an inproc PULL socket
--- it handles "control" commands sent to it via that inproc socket
- when an Erlang process (or fiber, as they are called in Felix) does a
blocking zmq_recv, a "recv" control message is sent to the polling
thread
- the Erlang process then blocks in an Erlang "receive" block (which
lets the Erlang scheduler carry on with other processes)
- the polling thread adds the appropriate socket to its polling list
- when that socket receives a ZeroMQ message, the polling thread sends
an Erlang message to the blocked Erlang process, so it can unblock
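Roughly, the polling thread's loop looks like the sketch below. This is
not the erlzmq2 code itself; it assumes the libzmq 3.x-style
zmq_recv/zmq_send calls, a fixed-size watch list, and a hypothetical
notify_waiter() standing in for erlzmq2's use of enif_send():

    /* Sketch of the per-context polling thread. */
    #include <string.h>
    #include <zmq.h>

    #define MAX_WATCHED 64

    typedef struct {
        void *ctx;                   /* shared ZeroMQ context       */
        void *watched[MAX_WATCHED];  /* sockets added via commands  */
        int   nwatched;
    } poller_t;

    /* Hypothetical: wake the process/fiber waiting on this socket. */
    extern void notify_waiter(void *sock);

    static void *poller_main(void *arg)
    {
        poller_t *p = arg;

        /* Control socket: callers send "watch this socket" commands here. */
        void *control = zmq_socket(p->ctx, ZMQ_PULL);
        zmq_bind(control, "inproc://poller-control");

        for (;;) {
            zmq_pollitem_t items[1 + MAX_WATCHED];
            memset(items, 0, sizeof items);
            items[0].socket = control;
            items[0].events = ZMQ_POLLIN;
            for (int i = 0; i < p->nwatched; i++) {
                items[1 + i].socket = p->watched[i];
                items[1 + i].events = ZMQ_POLLIN;
            }

            if (zmq_poll(items, 1 + p->nwatched, -1) < 0)
                break;                        /* context terminated */

            if (items[0].revents & ZMQ_POLLIN) {
                /* The command carries the socket pointer to watch. */
                void *sock;
                zmq_recv(control, &sock, sizeof sock, 0);
                if (p->nwatched < MAX_WATCHED)
                    p->watched[p->nwatched++] = sock;
            }

            for (int i = 0; i < p->nwatched; i++)
                if (items[1 + i].revents & ZMQ_POLLIN)
                    /* A message is ready: unblock the waiting caller,
                     * which then does the actual recv on this socket. */
                    notify_waiter(p->watched[i]);
        }

        zmq_close(control);
        return NULL;
    }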
The socket used to talk to the polling thread is protected with a
mutex, since the Erlang VM can move an Erlang process between
scheduler threads arbitrarily.
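On the sending side it would look roughly like this (again a sketch;
the socket name and command format are the ones assumed above, not
erlzmq2's):

    #include <pthread.h>
    #include <zmq.h>

    static pthread_mutex_t control_lock = PTHREAD_MUTEX_INITIALIZER;
    /* PUSH socket connected to inproc://poller-control */
    static void *control_push;

    /* Ask the polling thread to start watching a socket; any scheduler
     * thread may call this, so the shared PUSH socket is mutex-protected. */
    static void request_watch(void *socket_to_watch)
    {
        pthread_mutex_lock(&control_lock);
        zmq_send(control_push, &socket_to_watch, sizeof socket_to_watch, 0);
        pthread_mutex_unlock(&control_lock);
    }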
This solves the problem of multiple threads (in this case Erlang
scheduler threads) accessing the same socket in a non-intrusive way.
It adds some overhead in the Erlang binding (due to the indirection),
but it keeps the ZeroMQ core clean and tidy.
Would a similar structure be possible to implement in Felix?