[zeromq-dev] nitro
Steven McCoy
steven.mccoy at miru.hk
Fri Aug 23 20:37:52 CEST 2013
On 23 August 2013 13:20, Michael Haberler <mail17 at mah.priv.at> wrote:
> http://gonitro.io/
Copy & pasting:
Q: How is this different than ZeroMQ?
Nitro is in many ways a reaction to ZeroMQ (ZMQ), and is heavily inspired
by it (especially czmq). The team at bu.mp (that wrote Nitro) uses ZeroMQ
extensively, including shipping it on tens of millions of smartphones, and
there's a lot to love about ZMQ.
So much, in fact, that the deficiencies we encountered in ZMQ motivated
Nitro--because the basic communication model that ZMQ pioneered is mostly
wonderful to use and develop against.
However, ZeroMQ was designed for relatively low-concurrency, very high
throughput (especially multicast PGM) communication on private, trusted,
low-latency networks... not for large scale public Internet services with
high connection counts, fickle clients and wobbly links.
These are some of the design decisions Nitro made that differ from ZeroMQ:
- Nitro provides more transparency about the socket/queue state (client
X is connected, queue count is Y) for monitoring reasons and because
clients quite often never come back in public networks, so state needs to
be cleared, etc.
- Nitro does not commit messages to a particular socket at send() time,
but does send() on a general queue and lets peers "work steal" stripes of
frames as soon as they have room on their TCP buffer. This makes for a lot
more transparency about the "true" high-water mark for a socket, it
constrains the total number of messages that may be lost due to a client
disconnect, and it can minimize mean latency of receipt of any general
message (vs. naive round robin).
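The "commit late, steal when ready" idea can be sketched in a few lines of Python. This is an illustrative simulation, not Nitro's actual implementation: the Peer class, slot counts, and message names are all made up. The point it shows is that messages wait on one shared queue, and each peer pulls a message only when its (simulated) TCP buffer has room--so a slow or dead peer strands at most a buffer's worth of messages.

```python
from collections import deque

class Peer:
    """Hypothetical peer with a fixed number of free TCP-buffer slots."""
    def __init__(self, name, slots):
        self.name = name
        self.slots = slots        # buffer capacity restored on each drain
        self.free = slots
        self.delivered = []

# send() puts messages on one shared queue; nothing is committed
# to a particular peer yet.
shared = deque(f"msg{i}" for i in range(10))
peers = [Peer("fast", 4), Peer("slow", 1)]

while shared:
    progressed = False
    for p in peers:
        # A peer "steals" the next message only if it has buffer room.
        if p.free > 0 and shared:
            p.delivered.append(shared.popleft())
            p.free -= 1
            progressed = True
    if not progressed:
        # All buffers full: pretend the kernel drained them to the wire.
        for p in peers:
            p.free = p.slots
```

Contrast with naive round robin, which would commit half the queue to the slow peer up front; here the fast peer ends up carrying most of the load, and a disconnecting peer can only lose what was already in its buffer.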
- Nitro clears private queues for dead peer sockets instead of leaving
them around indefinitely in case they return. This fixes one of the biggest
issues with doing high-concurrency work in ZeroMQ: an unavoidable memory
leak in ROUTER sockets when there is pending data for clients who will
never return.
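The difference in queue lifetime can be sketched as follows. The names (outbound, on_disconnect, the peer id) are hypothetical; this only illustrates the policy choice, not either library's internals:

```python
from collections import defaultdict, deque

# Per-peer outbound queues, keyed by peer identity.
outbound = defaultdict(deque)

def send_to(peer_id, frame):
    outbound[peer_id].append(frame)

def on_disconnect(peer_id):
    # A ZeroMQ ROUTER keeps this queue around in case the peer
    # returns, which leaks memory for peers that never do.
    # The Nitro-style choice sketched here drops it immediately.
    outbound.pop(peer_id, None)

send_to("phone-123", b"hello")
send_to("phone-123", b"world")
on_disconnect("phone-123")    # queue and its two frames are freed
```

With tens of millions of fickle mobile clients, "free on disconnect" is the difference between a flat memory profile and an unbounded leak.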
- ZeroMQ (esp 2.X) had some more or less hard coded peer limits (1024),
which is far fewer than a good C daemon on epoll can handle; Nitro has no
such restrictions.
- ZeroMQ does not have any crypto story, so we had to roll our own
awkwardly using NaCl. With Nitro, we built NaCl/libsodium in, including a
key exchange step, so you don't need to ship keys with every frame.
- ZeroMQ's heritage of private networks has bit us and others with
things that are assert()s instead of closes-and-logs. On the public
internet, sometimes people hit your socket with a random HTTP request. It
is also not clear how much attention ZeroMQ has paid to invalid data in the
form of attacks on inbound TCP streams, integer overflows, etc. Nitro tries
to be paranoid and it shuts down bad peers instead of crashing.
- ZeroMQ ROUTER sockets also have some O(n) algorithms, where n is the
number of connected peers on a socket; Nitro is all O(1). This doesn't
matter much when you have 5 or 10 or 50 big server systems pushing loads to
each other on a private network, but it sucks when you have 50,000
mostly-idle clients on high-latency Internet links.
- In practice we found the "typed socket" paradigm (REQ/REP/PUSH) more
of a hindrance than a help. We often ended up with hybrid schemes, like
"REQ/maybe REP", or "REQ/multi REP". Also, if you want REQ/REP with
multiple clients where you do some processing to produce the REP result,
you'll need to chain together ROUTER/DEALER REQ/REP stacks and make sure
you carefully track the address frames. Nitro lets you create any topology
you want, and the routing is relatively abstracted from the application
developer--you don't need to worry how deep or shallow you are, for example.
- We found having the ZMQ's routing information in special MORE frames
that have implicit rules that differ on the basis of socket types (DEALER
will generate it, REQ will not) cumbersome.
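The per-socket-type envelope rules can be illustrated with a plain list of frames (the identity value "client-7" is made up; this models the documented ZeroMQ envelope, not any library code):

```python
# What a ROUTER socket receives when a REQ client sends b"job":
# the identity frame is generated by the receiving ROUTER, and the
# empty delimiter frame by the sending REQ. A DEALER client would
# NOT add the delimiter itself -- rules that differ per socket type.
recv_envelope = [b"client-7", b"", b"job"]   # identity, delimiter, body

# To reply, the application must carefully echo the routing frames
# back in front of its payload, or the message goes nowhere:
identity, delim, _body = recv_envelope
reply = [identity, delim, b"done"]
```

Keeping these MORE-frame conventions straight across a ROUTER/DEALER chain is exactly the bookkeeping the post calls cumbersome.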
- ZMQ socket options have documented rules about when they take effect
and when they do not, but these rules are not enforced by the API so they can bite
you. Nitro separates things that must be decided at construction time from
those you can modify on the fly (_sub and _unsub, etc).
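One way to make that separation enforceable, rather than merely documented, is to freeze the construction-time options and expose runtime-changeable state as methods. This is a sketch of the design idea only--the names SocketOptions, hwm, sub, and unsub are hypothetical, not Nitro's actual API:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class SocketOptions:
    # Options that must be fixed before the socket exists live in a
    # frozen object; mutating them afterwards raises an error.
    hwm: int = 1000
    secure: bool = False

class Socket:
    def __init__(self, opts: SocketOptions):
        self.opts = opts
        self.subscriptions = set()

    # Things that are legal to change on the fly are ordinary methods.
    def sub(self, channel: bytes):
        self.subscriptions.add(channel)

    def unsub(self, channel: bytes):
        self.subscriptions.discard(channel)

s = Socket(SocketOptions(hwm=50))
s.sub(b"alerts")
try:
    s.opts.hwm = 100        # construction-time option: rejected at runtime
except FrozenInstanceError:
    rejected = True
```

The API itself then tells you which options are construct-time, instead of a man page you have to remember.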
- Pub/sub based on message body was limiting for us in practice.
Oftentimes we wanted a separation of the "channel name" and the message
body.
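The limitation is easy to see side by side. ZeroMQ subscription matching is a byte-prefix test on the message body itself, whereas a separate channel (as Nitro argues for) leaves the body free to be arbitrary binary data. Both functions below are illustrative models, not library code:

```python
def zmq_style_match(subscription: bytes, body: bytes) -> bool:
    # ZeroMQ-style: the topic must be baked into the body, and any
    # body that happens to start with those bytes also matches.
    return body.startswith(subscription)

def channel_style_match(subscribed: set, channel: bytes) -> bool:
    # Channel-separate style: the channel name rides beside the
    # body, so matching never inspects the payload at all.
    return channel in subscribed

topic_hit = zmq_style_match(b"weather.", b"weather.hk temp=31")
false_hit = zmq_style_match(b"weather.", b"weather.hk\x00binary-blob")
chan_hit = channel_style_match({b"weather.hk"}, b"weather.hk")
```

With a separate channel name there is no risk of an opaque payload accidentally matching a prefix, and no need to parse the topic back out of every message.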
- ZMQ sockets are not thread safe. So the way to make a
multicore-exploiting RPC service is to chain tcp/ROUTER frontends to an
array of inproc backends, each running in a separate pthread. This is a
layer of complexity Nitro removes by just having sockets be thread safe.
- ZMQ_FD is edge triggered. It's much harder to integrate an
edge-triggered interface into other event loops. Though it has a
theoretical performance benefit, Nitro uses a level-triggered activity fd
to make integration easier for 3rd party binding developers.
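The practical difference is that a level-triggered fd reports readable on every poll while any data remains, so a loop that under-reads still makes progress. This can be demonstrated with Python's stdlib selectors (which is level-triggered) on a socket pair--a model of the integration problem, not of either library's internals:

```python
import selectors
import socket

a, b = socket.socketpair()
b.setblocking(False)
a.sendall(b"xx")                  # two bytes now pending on b

sel = selectors.DefaultSelector()
sel.register(b, selectors.EVENT_READ)

# Level-triggered: the fd keeps showing up as ready on every select()
# until drained, so reading just one byte per wakeup still works.
reads = []
while len(reads) < 2:
    for key, _ in sel.select(timeout=1):
        reads.append(key.fileobj.recv(1))

sel.close()
a.close()
b.close()
```

Under edge-triggered semantics (like ZMQ_FD), only the empty-to-non-empty transition signals, so a binding that fails to fully drain the socket on each wakeup stalls forever--which is exactly why third-party event-loop integration is harder.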
- Nitro is written in C. We prefer C, and we don't like linking against
libstdc++ :-) .
As always, though, there are pros and cons:
- On very small (<40 byte) TCP messages and inproc messages, ZeroMQ is
faster (30-40%) than Nitro; Nitro's use of mutex/cond on socket queues
probably costs it there.
- Nitro is somewhat faster, though, on frames > 50-70 bytes.
- Nitro has no equivalent of PGM support, nor will it ever. It doesn't
fit the project's goals. Nitro's target users don't usually control the
switching hardware to a degree to use PGM. So if you're on a network where
very high performance multicast is key, ZMQ is probably a better fit.
- Nitro does not have multi-host connect. If that topology is critical
to you, ZeroMQ can help (or HAProxy, but this is not as clean or as fast).
- Nitro is very young, and does not have nearly the language support
ZeroMQ has. Chances are if you want to use Nitro in your language of
choice, you're going to have to make it happen. Unless your language is C,
Python, or Haskell.
- ZeroMQ is ported to work on Windows and lots of other places. Nitro
has not yet been ported to anything but Linux and Mac OS X.