[zeromq-dev] Greetings & questions about inproc PUB/SUB inside Twisted.
David J W
zeromq-dev-subscribe at ominian.net
Sat Jan 26 21:35:45 CET 2013
Started finding answers on my own: judging by the latest version on GitHub,
the PUB mechanism does know who's subscribed to what, and it looks like it
only sends matching messages to subscribed SUB clients.
https://github.com/zeromq/libzmq/blob/master/src/xpub.cpp#L121
Also, in sub there's an additional sanity check to ensure that the client
is actually subscribed to the messages it receives; very nice.
I've also got to say it's pretty clever how 0MQ builds up complexity by
re-using its own messaging infrastructure for (un)subscriptions.
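Here's the little pyzmq toy I used to convince myself of that (the inproc
endpoint name and topic are just made up for the test): an XPUB socket
receives each (un)subscription as an ordinary message whose first byte is
0x01 for subscribe and 0x00 for unsubscribe, followed by the topic prefix.

import zmq

ctx = zmq.Context()
xpub = ctx.socket(zmq.XPUB)
xpub.bind("inproc://xpub-demo")

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://xpub-demo")
sub.setsockopt(zmq.SUBSCRIBE, b"map.1")    # subscribe to a topic prefix

# The subscription arrives on the XPUB side as a plain message.
print(xpub.recv())   # b'\x01map.1'

sub.setsockopt(zmq.UNSUBSCRIBE, b"map.1")
print(xpub.recv())   # b'\x00map.1'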
---------- Forwarded message ----------
From: David J W <zeromq-dev-subscribe at ominian.net>
Date: Sat, Jan 26, 2013 at 12:56 PM
Subject: Greetings & questions about inproc PUB/SUB inside Twisted.
To:
Hello,
My name is David W. I am a professional code monkey who was first
introduced to 0MQ via Zed Shaw's talk at PyCon (I believe it was 2010) but
have only now gotten the chance to sit down and start learning the
library. I am on a sabbatical, ideally until after PyCon, since I got
laid off and am taking the time to work through a bucket list (including
learning 0MQ).
On that note, a few years back, as a re-invent-the-wheel project to
learn Twisted and COMET, I started writing a web MUD engine and
centered the architecture around two message pipelines: user actions
were lock-stepped (User A moves left, tells the server, waits for it to
say yes/no), then the move is broadcast to the other users and to
everyone in User A's group. NPCs were just headless users driven by a
behavior time/tick subprocess that hooked up to the same pipelines. I
set that project aside because I realized I needed a message queue of
some sort and really didn't want to set up Rabbit or anything super
industrial.
Now along arrives 0MQ, and since this is a personal project the
priority is more about understanding how 0MQ works than accomplishing
the actual project. In the above example I can imagine using 0MQ's
inproc sockets, where clients are SUB types (subscribe to map/domain,
subscribe to group chat) and there is a master process with a ROUTER
socket for incoming work and a PUB socket for products ("User A in map
1 moved left"). A rough sketch of what I mean is below.
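All endpoint names, topics and payloads below are made up, and the client
side is a plain DEALER for the lock-step part; this is only to make the
question concrete:

import threading, time
import zmq

ctx = zmq.Context.instance()
ready = threading.Event()

def master():
    router = ctx.socket(zmq.ROUTER)          # incoming work (user actions)
    pub = ctx.socket(zmq.PUB)                # outgoing products (events)
    router.bind("inproc://actions")
    pub.bind("inproc://events")
    ready.set()                              # binds are done, clients may connect
    ident, action = router.recv_multipart()
    router.send_multipart([ident, b"yes"])   # lock-step answer to the client
    pub.send(b"map.1 user-a moved left")     # broadcast to everyone on map 1

def client():
    ready.wait()
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect("inproc://actions")
    sub = ctx.socket(zmq.SUB)
    sub.connect("inproc://events")
    sub.setsockopt(zmq.SUBSCRIBE, b"map.1")  # only care about map 1 events
    time.sleep(0.1)                          # crude guard against the slow-joiner race
    dealer.send(b"user-a move left")
    print("server said:", dealer.recv())
    print("broadcast  :", sub.recv())

t = threading.Thread(target=master)
t.start()
client()
t.join()
ctx.destroy(linger=0)                        # tidy shutdown for the sketch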
So here are my questions:
For PUB/SUB my impression is that the actual queue sits on the client
socket: PUB pushes a message to all clients (regardless of
setsockopt(zmq.SUBSCRIBE)) and the act of reading the socket
filters/clears the queue down to what the client is subscribed to.
Is this correct, or is the subscription more intelligent (PUB keeps a
subscription roster, sees no one is subscribed, and drops the message; OR
the client receives a message it isn't subscribed to and drops it)? A
little test I've been poking at is below.
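The quick-and-dirty check I've been running (endpoint and topics made up)
suggests the non-matching message never even shows up on the SUB socket,
though it doesn't tell me which side did the dropping:

import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://filter-test")

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://filter-test")
sub.setsockopt(zmq.SUBSCRIBE, b"map.1")
time.sleep(0.1)                        # give the subscription time to reach the PUB side

pub.send(b"map.2 something irrelevant")
pub.send(b"map.1 user-a moved left")

print(sub.recv())                      # b'map.1 user-a moved left'
print(sub.poll(timeout=100))           # 0 -> nothing else queued; the map.2 message was dropped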
Has anyone had any experience running multiple SUB-based clients inside
one process, and are there any severe consequences? I imagine a SUB
socket is going to instantiate the needed structures to hold a queue, the
actual socket, and other housekeeping structures, but so far small tests
(1-10 sockets) haven't shown much memory use. The kind of test I've been
running is sketched below.
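For reference, this is the sort of crude measurement I mean (Unix-only
since it uses the resource module; the socket count and endpoint are
arbitrary, and ru_maxrss is reported in kB on Linux):

import resource
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://many-subs")

before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
subs = []
for _ in range(500):
    s = ctx.socket(zmq.SUB)
    s.connect("inproc://many-subs")
    s.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything
    subs.append(s)
after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print("approx growth for 500 SUB sockets:", after - before)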
Additionally, if I do get past digging through 0MQ's mechanics, I was
thinking it would be best to spin the PUB side off into its own process.
Which leads me to wonder whether 0MQ inproc PUB/SUB actually relies on
some clever memory mapping, e.g. pushing a message on an inproc PUB socket
puts it on a shared/mutex-locked list and clients just read from that one
list.
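The thing that makes me comfortable deferring that decision is that, as
far as I can tell, moving the PUB side out of process should only mean
changing the endpoint string; the sockets and the rest of the code stay
the same (endpoint names below are made up):

import zmq

ENDPOINT = "inproc://events"          # same process, different threads
# ENDPOINT = "ipc:///tmp/events"      # separate processes on the same box
# ENDPOINT = "tcp://127.0.0.1:5556"   # separate processes / machines

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind(ENDPOINT)

sub = ctx.socket(zmq.SUB)
sub.connect(ENDPOINT)
sub.setsockopt(zmq.SUBSCRIBE, b"map.1")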
Apologies if some of these questions seem naive; I haven't gotten the
chance to read 0MQ's C source code yet.
Thanks,
Dave