[zeromq-dev] Async access to a single socket - send and recv without reducing throughput
Amir Taaki
zgenjix at yahoo.com
Tue Dec 31 07:11:27 CET 2013
Hi!
I have a bit of a design issue. I want to achieve this:
- Software receives a request on a DEALER socket from a ROUTER socket (think the worker in paranoid pirate pattern).
- Request is executed asynchronously in the software. Result is ready inside another thread.
- The result is sent back over the same socket the request arrived on.
In an ideal world, I'd poll the socket for receives in one thread and perform the sends from another. But ZeroMQ sockets are not thread-safe: a socket may only be used from a single thread.
Currently I have a multi-producer single-consumer lock-free queue for sends, plus an std::condition_variable that waits for a signal to wake up and batch-process the send queue, and a separate polling loop for receives. All the options I can think of have downsides:
* Use separate receive and send sockets. I'd need some mechanism so that the broker (ppqueue) is aware of receive/send socket pairs. Maybe a special message to the broker that isn't relayed, but indicates the identity of the receive socket.
* Polling with a timeout, then sending, reduces send throughput. I've benchmarked it and the hit is significant: you're essentially penalising every send by up to the poll timeout. Polling with a zero timeout is not a good idea either, since CPU usage hits 100%.
* Use the same socket but synchronise access from the send/recv threads. ZeroMQ crashes intermittently when I do this because of triggered asserts. It'd be great if I knew how to do this safely (maybe by calling some method on the socket).
How do I achieve this scenario of roughly 50% receives and 50% sends, each arriving at random? It's not the classic scenario of a receive, followed by a synchronous code path, and then a send within the same thread. Waiting for requests to finish is not an option: I dropped Thrift for ZeroMQ precisely because of its async ability.
Thanks!