[zeromq-dev] zmq cpp performance compared to the latency reported by `qperf`

Ernest Zed kreuzerkrieg at gmail.com
Sun Dec 2 14:52:46 CET 2018


Thanks for the reply, Doron.
I have one client and two server implementations, all taken from the ZMQ
documentation.
The client uses a REQ socket. The simplest server is implemented as a REP
socket and is single-threaded: the receiving loop sends the answer
immediately. This is the classic REQ/REP scheme; I do not expect much
throughput here, but I do expect low latency. The second implementation
employs a ROUTER as the receiving socket, a DEALER as the dispatcher, and
REP sockets for the workers. In this case I don't expect low latency, but
I do expect good throughput. In both cases I get neither low latency nor
high throughput. What message rate would one expect given a 512-byte
message size? Is 250k messages per second per server core a reasonable
number? What latency overhead can one reasonably expect from ZMQ? Say,
50% on top of a raw TCP latency of 50us?
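(For scale: 250k messages per second at 512 bytes each comes to
250,000 × 512 B ≈ 128 MB/s, or roughly 122 MiB/s of payload, before ZMQ
framing and TCP overhead.)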

The code
static void ZMQReferenceServer()
{
    std::cout << "Starting server" << std::endl;
    zmq::context_t context(1);
    zmq::socket_t socket(context, ZMQ_REP);

    socket.bind("tcp://*:5555");

    while (true) {
        zmq::message_t request;

        //  Wait for next request from client
        socket.recv(&request);

        //  Send reply back to client
        zmq::message_t reply(5);
        memcpy(reply.data(), "World", 5);
        socket.send(reply);
    }
}

static void ZMQReferenceMTServer(size_t threads)
{
    std::cout << "Starting server" << std::endl;

    zmq::context_t context(8);
    zmq::socket_t clients(context, ZMQ_ROUTER);
    clients.bind("tcp://*:5555");
    zmq::socket_t workers(context, ZMQ_DEALER);
    workers.bind("inproc://workers");
    std::vector<std::thread> workerThreads;
    //  Launch pool of worker threads
    for (auto i = 0ul; i < threads; ++i) {
        workerThreads.emplace_back([&]() {
            zmq::socket_t socket(context, ZMQ_REP);
            socket.connect("inproc://workers");

            while (true) {
                //  Wait for next request from client
                zmq::message_t request;
                socket.recv(&request);
                //  Send reply back to client
                zmq::message_t reply(6);
                memcpy(reply.data(), "World", 6);
                socket.send(reply);
            }
        });
    }
    //  Connect worker threads to client threads via a queue
    zmq::proxy(clients, workers, NULL);
}

static void ZMQReferenceClient(const std::string& address)
{
    zmq::context_t context(1);
    zmq::socket_t socket(context, ZMQ_REQ);

    std::cout << "Connecting to hello world server…" << std::endl;
    socket.connect("tcp://" + address + ":5555");

    const size_t cycles = 10'000;
    double throughput = 0;

    zmq::message_t reply;

    auto start = std::chrono::high_resolution_clock::now();
    std::vector<uint8_t> buff(MessageSize, 0);
    for (auto i = 0ul; i < cycles; ++i) {
        zmq::message_t request(MessageSize);
        memcpy(request.data(), buff.data(), MessageSize);
        throughput += request.size();
        socket.send(request);
        //  Get the reply.
        socket.recv(&reply);
    }
    auto end = std::chrono::high_resolution_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
    //  Mean round-trip time per request/reply (integer division)
    std::cout << "Latency: " << us / cycles << "us." << std::endl;
    std::cout << "Throughput: " << std::fixed << throughput / us * 1'000'000 / 1024 / 1024 << "MiB/s." << std::endl;
}
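
For reference, here is a rough, untested sketch of what Doron's suggestion
(quoted below) of answering requests asynchronously from a ROUTER socket
might look like. The function name is mine, it assumes the same headers as
the snippets above, and the frame handling assumes REQ clients, so each
request arrives as [identity][empty delimiter][payload] and the reply must
echo the first two frames back:

static void ZMQAsyncRouterServer()
{
    zmq::context_t context(1);
    zmq::socket_t router(context, ZMQ_ROUTER);
    router.bind("tcp://*:5555");

    while (true) {
        //  A request from a REQ client arrives as three frames
        zmq::message_t identity, delimiter, request;
        router.recv(&identity);
        router.recv(&delimiter);
        router.recv(&request);

        //  Nothing forces the reply to be sent right here; identity and
        //  delimiter could be stored and the answer sent later, out of
        //  order, which is where ROUTER pays off under load.
        zmq::message_t reply(5);
        memcpy(reply.data(), "World", 5);
        router.send(identity, ZMQ_SNDMORE);
        router.send(delimiter, ZMQ_SNDMORE);
        router.send(reply);
    }
}

The REQ client above should work unchanged against such a server, since it
only expects a single reply per request.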

On Sun, Dec 2, 2018 at 2:44 PM Doron Somech <somdoron at gmail.com> wrote:

> This is known: when you measure request-response you are seeing the cost
> of ZeroMQ's internal thread signaling. Actually, under high load you will
> get better results. ZeroMQ excels at high message volumes and asynchronous
> communication patterns.
>
> In the case of request-response between client and server you will still
> enjoy the benefits on the server side if you use a ROUTER and answer
> requests asynchronously. However, if you do pure request-response, ZeroMQ
> adds some jitter.
>
> On Nov 28, 2018 02:53, "Ernest Zed" <kreuzerkrieg at gmail.com> wrote:
>
> I'm trying to figure out why qperf reports a one-way latency of x while
> ZMQ REQ/REP reports a round trip of something like 4x. Is there any
> particular socket tweaking I have to do? If I just open a plain socket and
> set TCP_NODELAY (which ZMQ sets by default), I get latency very close (for
> 1k buffers) to the number reported by qperf; ZMQ, however, lags behind
> these numbers by about 4-5 times.
> The ZMQ server
>
> zmq::context_t context;
> zmq::socket_t socket(context, ZMQ_REP);
>
> socket.bind("tcp://*:5555");
> while (true) {
>     zmq::message_t request;
>
>     //  Wait for next request from client
>     socket.recv(&request);
>
>     //  Send reply back to client
>     zmq::message_t reply(5);
>     memcpy(reply.data(), "World", 5);
>     socket.send(reply);
> }
>
> The ZMQ client
>
> zmq::context_t context;
> zmq::socket_t socket(context, ZMQ_REQ);
>
> std::cout << "Connecting to hello world server…" << std::endl;
> socket.connect("tcp://my.host:5555");
> const size_t cycles = 100'000;
> double throughput = 0;
>
> zmq::message_t reply;
> auto start = std::chrono::high_resolution_clock::now();
> std::vector<uint8_t> buff(MessageSize, 0);
> for (auto i = 0ul; i < cycles; ++i) {
>     zmq::message_t request(MessageSize);
>     memcpy(request.data(), buff.data(), MessageSize);
>     throughput += request.size();
>     socket.send(request);
>     //  Get the reply.
>     socket.recv(&reply);
> }
> auto end = std::chrono::high_resolution_clock::now();
> auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
> std::cout << "Latency: " << us / cycles << "us." << std::endl;
> std::cout << "Throughput: " << std::fixed << throughput / us * 1'000'000 / 1024 / 1024 << "MiB/s." << std::endl;
>
> Both are essentially the ZMQ examples provided here:
> http://zguide.zeromq.org/cpp:hwclient
> Some background: Linux, Ubuntu 18.04, GCC 7.3, static library provided by
> vcpkg, built locally; it looks like they pull master from GitHub.
>
>
> Original question on Stack Overflow:
>
>
> https://stackoverflow.com/questions/53506353/zmq-cpp-performance-compared-to-the-latency-reported-by-qperf
>
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>