[zeromq-dev] Design for a Pub/Sub system

Yanone post at yanone.de
Wed Nov 11 15:27:26 CET 2020


Sorry, I forgot the repo: https://github.com/typeworld/messagequeue-docker


> Am 11.11.2020 um 15:02 schrieb Yanone <post at yanone.de>:
> 
> Hi everyone,
> 
> Remember me?
> I'm making some progress, but now I'm stuck and thought some of you may have been there already.
> 
> So instead of Kubernetes I ended up with Google Compute Engine, a single VM instance that allows for a static IP address.
> Here's my Docker image, which is based on another image I created that has libzmq and pyzmq compiled with draft support, to allow for the radio-dish pattern. In server.py it publishes to that hard-wired static IP address.
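
(For context, the radio side boils down to roughly the following; it's simplified, the address, port and group name are placeholders, and it needs libzmq and pyzmq built with draft support:)

    import zmq  # RADIO/DISH are draft sockets; pyzmq must be built with draft support

    ctx = zmq.Context.instance()
    radio = ctx.socket(zmq.RADIO)

    # Over the UDP transport the RADIO side connects to wherever a DISH is bound.
    # 127.0.0.1 is a placeholder address.
    radio.connect("udp://127.0.0.1:5556")

    # Each message is addressed to a named group that dishes can join.
    radio.send(b"new font release", group="fonts")
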
> 
> Listening currently requires the IP as an argument, so that would be "python receiver.py 35.196.237.158"
> An HTTP call to http://35.196.237.158:8080/ triggers sending the message, which is very close to my later setup, where I'll have to trigger it somehow anyway, so why not an HTTP call?
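
(And the dish side, reduced here to the canonical pattern rather than receiver.py verbatim; over the UDP transport the dish binds locally and joins a group:)

    import zmq  # again requires a draft-enabled libzmq/pyzmq build

    ctx = zmq.Context.instance()
    dish = ctx.socket(zmq.DISH)

    # The DISH binds and subscribes to one or more groups by name.
    dish.bind("udp://*:5556")
    dish.join("fonts")

    while True:
        msg = dish.recv(copy=False)  # returns a zmq.Frame
        print(f"[{msg.group}] {msg.bytes!r}")
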
> 
> Locally in Docker on my computer everything works, but once it's online on Compute Engine nothing comes through.
> I suspect a basic setup problem, such as networking/NAT or something like that.
> 
> To make sure, I created firewall rules to allow TCP and UDP traffic on port 5556, both incoming and outgoing. Yet... silence.
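
(For what it's worth, a plain-socket check like the one below, run from a client machine against the instance's external IP, would at least show whether anything answers on port 5556 over TCP; UDP can't be probed this way without a listener echoing something back. The IP is a placeholder.)

    import socket

    HOST, PORT = "203.0.113.10", 5556  # placeholder; use the instance's external IP

    # Does anything accept a TCP connection on the port at all?
    try:
        with socket.create_connection((HOST, PORT), timeout=3):
            print("TCP connection to port 5556 succeeded")
    except OSError as exc:
        print(f"TCP connection failed: {exc}")
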
> Does anyone have an idea?
> 
> Your help is most appreciated.
> Jan
> 
> 
> 
> 
>> Am 03.11.2020 um 16:06 schrieb Bill Torpey <wallstprog at gmail.com>:
>> 
>> 
>>> In any case, one question of mine has remained unanswered so far:
>>> How does one realize horizontal scaling? So far everyone seems to assume a single machine/node.
>>> 
>>> Regardless of the socket type used, I will run into some limit at some point. Then I need to add nodes to the cluster. How is the load balanced between them?
>> 
>> Maybe not … it sounds like you’re sending a small number of small messages infrequently, but to a VERY large number of recipients.
>> 
>> With a point-to-point protocol like TCP you’re correct — you’ll hit some limit on what a single machine can handle, so will need to scale horizontally, which can be quite tricky.
>> 
>> Which is the beauty of multicast — you’re leveraging the network itself to do the fan-out.  So multicast may enable you to avoid scalability problems, but only if multicast works in your use case.
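
(For reference, the multicast variant of radio/dish over UDP looks roughly like this: both ends use the same multicast group address, and whether the datagrams actually arrive depends on multicast being routable on the network in question. Address, port and group name are just examples.)

    import time
    import zmq  # requires a draft-enabled libzmq/pyzmq build

    ctx = zmq.Context.instance()

    # Receivers bind to the multicast address and join a group.
    dish = ctx.socket(zmq.DISH)
    dish.bind("udp://239.0.0.1:5556")
    dish.join("fonts")

    # The sender connects to the same multicast address; the network does the fan-out.
    radio = ctx.socket(zmq.RADIO)
    radio.connect("udp://239.0.0.1:5556")

    time.sleep(0.2)  # give the sockets a moment before sending
    radio.send(b"hello", group="fonts")
    print(dish.recv(copy=False).bytes)
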
>> 
>> If you need to go point-to-point, then you will likely need to do some kind of “sharding”, and like anything else simpler solutions are, well, simpler.  For instance, when you exceed what a single machine can handle (quite a lot depending on the machine), then you go to two machines, then four, etc.  At each step you need some kind of maintenance window to re-balance traffic across servers, and some criteria to decide which recipients belong in which “shard”.
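
(A toy sketch of that static sharding idea: a stable hash maps each recipient to one of a fixed list of publisher endpoints, so re-balancing during a maintenance window amounts to changing the endpoint list and moving the affected recipients. Endpoints and IDs below are made up.)

    import hashlib

    # Hypothetical publisher endpoints; the list grows from 1 to 2 to 4 over time.
    ENDPOINTS = [
        "tcp://pub-0.example.com:5556",
        "tcp://pub-1.example.com:5556",
    ]

    def endpoint_for(recipient_id: str) -> str:
        """Deterministically map a recipient to one shard."""
        digest = hashlib.sha256(recipient_id.encode("utf-8")).digest()
        return ENDPOINTS[int.from_bytes(digest[:4], "big") % len(ENDPOINTS)]

    print(endpoint_for("subscriber-42"))  # always the same endpoint for this ID
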
>> 
>> You can also make the sharding dynamic, but that is much more work — the advantage in your case is that it sounds like the traffic is not “transactional”, so you don’t care if a message gets delivered more than once, which makes the dynamic approach more feasible (but still tricky).
>> 



--

Yanone
Dipl.-Designer,
Master of Design in Type & Media
https://yanone.de
Twitter/Instagram: @yanone
+49 176 24023268 (Germany)

Type.World project:
https://type.world

Crowd funding on:
https://www.patreon.com/typeWorld



