[zeromq-dev] "first time" missing messages?
Matthew Woehlke
matthew.woehlke at kitware.com
Tue Aug 27 19:06:24 CEST 2013
We have a client-server application that runs queries against a database
using 0MQ as a transport.
The client is transport-agnostic; only a plugin knows about 0MQ. For each
query we create a new query session, which creates new 0MQ
contexts/sockets/etc. I believe the server's 0MQ sockets are persistent.
The client runs a query by sending a message over socket A to the
server. The server then sends back results, which may involve tens of
thousands of messages at a few hundred per second.
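To make that concrete, a query session boils down to roughly the
following (the endpoints and the REQ/SUB pairing here are illustrative
only; the real wiring is hidden behind the plugin):

#include <string.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();

    /* SUB socket for results: connected and subscribed *before*
       the query is issued. */
    void *sub = zmq_socket(ctx, ZMQ_SUB);
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);
    zmq_connect(sub, "tcp://server:5556");

    /* "Socket A": carries the query itself. */
    void *req = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(req, "tcp://server:5555");

    const char *query = "SELECT ...";   /* placeholder */
    zmq_send(req, query, strlen(query), 0);

    char ack[256];
    zmq_recv(req, ack, sizeof ack, 0);  /* assuming the server acks on socket A */

    /* Results arrive on the SUB socket, possibly tens of thousands of
       messages at a few hundred per second; the real client stops at
       the marked final message. */
    char buf[1024];
    for (;;) {
        int n = zmq_recv(sub, buf, sizeof buf, 0);
        if (n < 0)
            break;
        /* ... hand the message to the transport-agnostic layer ... */
    }

    zmq_close(req);
    zmq_close(sub);
    zmq_ctx_destroy(ctx);
    return 0;
}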
On Linux this seems to work fine. On Windows, however, the *first*
query seems to lose messages. That by itself is perhaps not shocking,
but what really puzzles us is that if the client then runs the same
query a second time, no loss occurs.
Does anyone know why this might be happening? Is there some kind of
network tuning that kicks in after the first run and prevents loss the
second time? Is there a recommended way to adjust settings to avoid the
initial loss?
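The only knobs we know to try are the queue limits, something like the
below before connect()/bind() on each side (the values are guesses, not
recommendations):

#include <zmq.h>

/* Sketch: raise the queue limits on a socket before it is connected
   or bound.  Values are illustrative only. */
static void raise_limits(void *sock)
{
    int hwm = 100000;     /* default send/recv HWM is 1000 in 0MQ 3.x */
    int buf = 1048576;    /* kernel socket buffer in bytes; 0 = OS default */

    zmq_setsockopt(sock, ZMQ_SNDHWM, &hwm, sizeof hwm);
    zmq_setsockopt(sock, ZMQ_RCVHWM, &hwm, sizeof hwm);
    zmq_setsockopt(sock, ZMQ_SNDBUF, &buf, sizeof buf);
    zmq_setsockopt(sock, ZMQ_RCVBUF, &buf, sizeof buf);
}

As I understand it, a PUB socket silently drops messages for a
subscriber once that subscriber's queue reaches ZMQ_SNDHWM, so that
would be my first suspect -- but it doesn't explain why only the first
run is affected.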
(Note: because the client connect()s the SUB socket on which the
responses are received before issuing the query to the server, I don't
believe we are losing the initial messages to connect lag. Also, the
first and last messages are special/marked, and AFAIK we are getting both.)
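For reference, the receive side amounts to counting whatever lands
between the two markers; "BEGIN"/"END" below are stand-ins for our
actual marker format:

#include <stdio.h>
#include <string.h>
#include <zmq.h>

/* Count everything that arrives between the marked first and last
   messages (marker strings here are placeholders). */
static void count_results(void *sub)
{
    char buf[1024];
    long count = 0;
    int started = 0;

    for (;;) {
        int n = zmq_recv(sub, buf, sizeof buf, 0);
        if (n < 0)
            break;                      /* error / context terminated */
        if (n >= (int)sizeof buf)
            n = (int)sizeof buf - 1;    /* truncated; fine for a sketch */
        buf[n] = '\0';

        if (strcmp(buf, "BEGIN") == 0) { started = 1; continue; }
        if (strcmp(buf, "END") == 0)
            break;
        if (started)
            ++count;
    }

    printf("got %ld result messages between markers\n", count);
}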
--
Matthew