[zeromq-dev] Incoming data on multiple sockets and fairness
sustrik at 250bpm.com
Tue Mar 9 16:08:53 CET 2010
The first thing to think about IMO is why you need to handle multiple
0MQ sockets. Each 0MQ socket can handle an unlimited number of underlying
BSD sockets, so in most cases there shouldn't be a need for that kind of
thing (unless you want to handle different socket types at the same time
in a single thread).
Can you possibly explain the problem you are solving?
> This is yet another request for your insights...
> I will have a process that may be receiving incoming data on multiple
> sockets (probably a couple of Upstream sockets, at least one Sub
> socket). I see two designs to handle this, each with different
> implications on fairness; by "fairness" I am thinking about processing
> incoming messages from all sockets in a way that none of them waits "too
> long" to be processed.
> 1. Have many threads; each thread waits on a zmq_recv() call and, upon
> receiving a message, processes it right away and goes back to calling
> zmq_recv(). Of course this means I would have to use mutual exclusion
> for processing the messages (supposing this processing will need to
> touch shared structures). In this scenario, I would be trusting the
> thread scheduling mechanism (however good or bad) to ensure fairness.
> 2. Have a single thread doing a zmq_poll() on all sockets; upon being
> signaled, process the messages on all signaled sockets. But in which
> order? What if socket 1 has 100 pending messages and socket 2 has 3
> pending messages? Which ones do I process first, and how do I know the
> pending queue length for each socket? Related question: if a socket
> receives 10 messages, causing zmq_poll() to wake up, and I only read 2
> of those messages, will zmq_poll() wake up right away if I call it
> again, or do I have to process all 10 messages before counting on this?
> (I believe this is called "edge-triggering" vs. "level-triggering").
It's strictly level-triggered.
> 3. I could also have a hybrid: many threads, as in #1, but they simply
> put all incoming messages on a single queue. Then have one thread (or
> more) fetching elements off this queue and processing them. It adds more
> latency and it forces me to use mutual exclusion for pushing elements on
> the queue tail and reading them off the queue head, but looks "saner" to
> me and I might even use the queue to handle retries.
If you want to send messages from multiple threads to a single thread,
simply bind a SUB socket to inproc endpoint in the receiving thread and
connect a PUB socket to the endpoint from each sending thread. That way
you'll need no mutexes.